
Pishahang is an NFV MANO framework that manages and orchestrates services across multiple technological domains

License: Apache License 2.0



Pishahang logo

Joint Orchestration of Network Function Chains and Distributed Cloud Applications


Pishahang is a framework that consolidates state-of-the-art NFV and cloud management and orchestration tools to orchestrate services deployed across multiple technological domains.

Useful Links

Papers:

Demo videos:

Usage

Full documentation can be found here

Please refer to this wikipage for installing Pishahang, Devstack, and Kubernetes.

Service Deployment

This section gives a step-by-step guide on how to deploy a service using the service platform.

Connect OpenStack and Kubernetes to Pishahang

This step assumes that you have previously connected to your Kubernetes cluster using the kubectl command-line tool.

  • Open your browser and navigate to http://public_ip
  • Open the "WIM/VIM Settings" tab
  • Add a new WAN adaptor
    • Select "Mock" WIM vendor
    • Enter any WIM name, WIM address, username and password
    • Confirm by clicking "SAVE"
  • Add OpenStack VIM adaptor
    • Choose any VIM name
    • Select the WIM adaptor you just created, enter any country and city
    • Select "Heat" VIM vendor
    • Fill in the compute and network configuration fields accordingly
  • Add Kubernetes VIM adaptor
    • Choose any VIM name
    • Select the WIM adaptor you just created, enter any country and city
    • Select "Kubernetes" VIM vendor
      • Enter the IP address of the Kubernetes master node
      • Create a Kubernetes service token with sufficient privileges (read & write). An existing service token can be retrieved by running the "kubectl describe secret" command.
      • Copy and paste the cluster’s CA certificate. The certificate must be PEM and Base64 encoded. The certificate is usually stored in the kubectl config located at ~/.kube/config
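To double-check that the CA data you copy from your kubeconfig is in the expected format, you can decode it locally. A minimal sketch (the function name is hypothetical; `certificate-authority-data` is the usual kubeconfig field holding the Base64-encoded PEM certificate):

```python
import base64

def decode_ca_certificate(ca_data_b64: str) -> str:
    """Decode the Base64-encoded certificate-authority-data from a
    kubeconfig file and return the PEM-encoded certificate."""
    pem = base64.b64decode(ca_data_b64).decode("ascii")
    if not pem.startswith("-----BEGIN CERTIFICATE-----"):
        raise ValueError("decoded data is not a PEM certificate")
    return pem

# Example with a dummy certificate body:
dummy_pem = "-----BEGIN CERTIFICATE-----\nMIIB...\n-----END CERTIFICATE-----\n"
encoded = base64.b64encode(dummy_pem.encode("ascii")).decode("ascii")
print(decode_ca_certificate(encoded).splitlines()[0])  # → -----BEGIN CERTIFICATE-----
```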
Enable Kubernetes monitoring

To enable monitoring for a Kubernetes cluster, a new Prometheus scrape job is needed. The reference config below works for common cluster setups but may need to be adapted for special Kubernetes setups. Replace the placeholder credentials in the config with your cluster’s actual credentials, then add the scrape job to Prometheus by POSTing the configuration (transformed to JSON) to http://public_ip:9089/prometheus/configuration/jobs. To verify, open your browser and navigate to the Prometheus dashboard at http://public_ip:9090. If the metrics list shows entries starting with "container", the installation was successful.

- job_name: 'kubernetes-cadvisor'
  scheme: https
  tls_config:
    insecure_skip_verify: true
  basic_auth:
    username: username
    password: password
  kubernetes_sd_configs:
  - role: node
    api_server: https://kubernetes-master
    tls_config:
      insecure_skip_verify: true
    basic_auth:
      username: username
      password: password
  relabel_configs:
  - action: labelmap
    regex: __meta_kubernetes_node_label_(.+)
  - target_label: __address__
    replacement: kubernetes-master:443
  - source_labels: [__meta_kubernetes_node_name]
    regex: (.+)
    target_label: __metrics_path__
    replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor
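As described above, the scrape job has to be transformed to JSON before POSTing it. A sketch of that step using only the standard library, with the job expressed directly as a Python dict (replace `public_ip` and the placeholder credentials; the helper name is hypothetical):

```python
import json
from urllib import request

# The scrape job from above, expressed as a Python dict
# (equivalent to transforming the YAML to JSON).
scrape_job = {
    "job_name": "kubernetes-cadvisor",
    "scheme": "https",
    "tls_config": {"insecure_skip_verify": True},
    "basic_auth": {"username": "username", "password": "password"},
    "kubernetes_sd_configs": [
        {
            "role": "node",
            "api_server": "https://kubernetes-master",
            "tls_config": {"insecure_skip_verify": True},
            "basic_auth": {"username": "username", "password": "password"},
        }
    ],
    "relabel_configs": [
        {"action": "labelmap", "regex": "__meta_kubernetes_node_label_(.+)"},
        {"target_label": "__address__", "replacement": "kubernetes-master:443"},
        {
            "source_labels": ["__meta_kubernetes_node_name"],
            "regex": "(.+)",
            "target_label": "__metrics_path__",
            "replacement": "/api/v1/nodes/${1}/proxy/metrics/cadvisor",
        },
    ],
}

def post_scrape_job(job: dict, public_ip: str) -> request.Request:
    """Build the POST request for Pishahang's Prometheus config endpoint."""
    url = f"http://{public_ip}:9089/prometheus/configuration/jobs"
    return request.Request(
        url,
        data=json.dumps(job).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = post_scrape_job(scrape_job, "public_ip")
# request.urlopen(req)  # uncomment to actually send the request
```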
Onboarding Descriptors

Push any descriptors using the corresponding catalogue endpoint:

Please find example CSDs and COSDs in the son-examples/complex-services folder.

Deploying a Service
  • Open your browser and navigate to http://public_ip:25001
  • Open the "Available Complex Services" tab
  • Click the "Instantiate" button of the service you want to deploy
  • Confirm the instantiate modal (ingress and egress can be empty)
Terminating a Service
  • Open your browser and navigate to http://public_ip:25001
  • Open the "Running Complex Services" tab
  • Click the "Terminate" button of the service you want to stop

Lead Developer:

Contributors:


Issues

Monitoring manager crashes on start

A monitoring manager container started from a freshly built image currently crashes with ImportError: cannot import name RequestContext. Apart from that, the Docker image still uses Python 2.7 and most dependencies are outdated.

Outdated readmes in `mano-framework`

The readme files for the mano-framework components are outdated now that we use Docker Compose for the builds and Poetry for project and dependency management. Since they all share the same setup, which is also described in the "Developing Pishahang" docs, none of them needs extra information on:

  • How to run builds
  • How to run unit tests
  • Requirements (unless they need anything fancy of course)
  • Implementation (at least programming language and dependencies)

Apart from that, it looks like this readme is a copy of this one.

I would suggest having only a description of each microservice in its readme, so that we can include all of them in the developer documentation.

[RFC] The future of son-catalogue-repos?

son-catalogue-repos is a Ruby-based microservice that offers an HTTP REST API with CRUD operations for various entities stored in MongoDB. Internally, it validates request data using the corresponding JSON schemas.

Advantages of "the catalogue repos":

  • Being a layer between other microservices and MongoDB, it enforces schemas for MongoDB documents
  • By using an HTTP API, it does not restrict the microservices' choice of programming language

Disadvantages:

  • Microservices have to create model classes for interaction with the catalogue-repos API themselves, which most of them don't
  • Adding models to the catalogue repos requires implementing HTTP APIs for them (which, in the current setup, is way more low-level than it could be)
  • It restricts the query options to the ones implemented in the HTTP API

Considering the fact that all database-interacting microservices are either already implemented in Python, or in the process of being ported to Python, the overhead of an HTTP API seems unnecessary to me. Instead, using a set of shared MongoEngine models embedded in the manobase package would:

  • simplify database interactions in microservices
  • allow more advanced queries by using MongoEngine instead of a REST API
  • provide model classes for microservices by default, and, by enabling in-editor type hints, help to prevent runtime validation errors
  • improve maintainability (one API less to maintain)

This approach would therefore tackle all of the above disadvantages, at the cost of losing cross-language validation features. In my opinion, that's worth it. What do you think?

Service Lifecycle Manager does not make use of OOP

The SLM is essentially a ~3000-LOC class that manages the contents of some dictionaries while interacting with other components. Understanding its code can be extremely hard, not to mention maintaining it. A large amount of repetitive code (frequently passing the same parameters, dictionary accesses) could be made obsolete using OOP.

Quick example: services. Information on them is stored in a single dictionary, and almost all service-related methods require a serv_id argument (and there are a lot of them). Introducing a Service class would couple service-related functionality to service-related data, making the purpose and scope of both easier to understand.
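To make the idea concrete, a minimal sketch of such a refactoring (all names are hypothetical; the real SLM currently keeps this state in plain dictionaries keyed by serv_id):

```python
from dataclasses import dataclass, field

@dataclass
class Service:
    """Couples service-related state to service-related behavior,
    replacing repeated dictionary lookups keyed by serv_id."""

    serv_id: str
    status: str = "INSTANTIATING"
    vnf_instances: dict = field(default_factory=dict)

    def add_vnf_instance(self, vnf_id: str, record: dict) -> None:
        self.vnf_instances[vnf_id] = record

    def mark_running(self) -> None:
        self.status = "RUNNING"

# Instead of slm.do_something(serv_id, services[serv_id], ...):
service = Service(serv_id="1234-abcd")
service.add_vnf_instance("vnf-1", {"vim": "heat"})
service.mark_running()
print(service.status)  # → RUNNING
```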

MANO framework components publish invalid messages

Currently, the default content type for messages published by the ManoBrokerConnection class is application/json, while MANO plugins often use yaml.dump and json.dumps interchangeably for the payloads without specifying the content type. This results in messages that have a YAML body with a JSON content type. Conversely, payloads of received messages are often blindly parsed as YAML.

Since most of the MANO plugins' interaction with the messaging utilities has to be modified anyway to fix this, refactoring messaging.py to handle serialization, deserialization, and content types seems to be a clean solution to me. This way, we can also remove the redundant calls to serialization and deserialization functions throughout all MANO components.
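One possible shape for such a refactoring (function names hypothetical), sketched here with JSON only; a YAML serializer via PyYAML would plug into the same table:

```python
import json

# Hypothetical serialization layer for messaging.py: callers hand over
# Python objects, and the broker connection owns (de)serialization
# keyed by content type.
SERIALIZERS = {
    "application/json": (json.dumps, json.loads),
    # "text/yaml": (yaml.dump, yaml.safe_load),  # via PyYAML
}

def serialize(payload, content_type: str = "application/json") -> bytes:
    dump, _ = SERIALIZERS[content_type]
    return dump(payload).encode("utf-8")

def deserialize(body: bytes, content_type: str):
    _, load = SERIALIZERS[content_type]
    return load(body.decode("utf-8"))

message = serialize({"status": "READY"})
print(deserialize(message, "application/json"))  # → {'status': 'READY'}
```

With this in place, plugins never call yaml.dump or json.dumps themselves, so the body and the declared content type can no longer disagree.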

Redundancy in schema definitions

Many of the current schemas contain redundant definitions, making it hard to understand what is unique about certain schema files, and even harder to maintain schemas.

Examples:

  • Descriptors and their records: Every record schema seems to copy the corresponding descriptor schema instead of referencing it.
  • The unit definitions at the top of almost every schema file

JSON Schema supports referencing across files to solve this problem.
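For instance, a record schema could pull in the corresponding descriptor schema via `$ref` instead of copying it (file and property names are hypothetical):

```json
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "allOf": [
    { "$ref": "service-descriptor.yml#" },
    {
      "properties": {
        "status": { "type": "string" }
      },
      "required": ["status"]
    }
  ]
}
```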

Missing tests

The following MANO plugins do not have any automated tests:

  • OpenStack Lifecycle Manager
  • Placement Plugin
