
AlphaRTC


Motivation

AlphaRTC is a fork of Google's WebRTC project using ML-based bandwidth estimation, delivered by the OpenNetLab team. By equipping WebRTC with a more accurate bandwidth estimator, our mission is to eventually increase the quality of transmission.

AlphaRTC replaces Google Congestion Control (GCC) with two customized congestion-control interfaces, PyInfer and ONNXInfer. PyInfer loads an external bandwidth estimator written in Python; the estimator can be built on an ML framework, such as PyTorch or TensorFlow, or be a pure Python algorithm with no dependencies. ONNXInfer is an ML-powered bandwidth estimator that takes in an ONNX model to make bandwidth estimation more accurate. ONNXInfer is proudly powered by Microsoft's ONNX Runtime.

If you are preparing a publication and need to introduce OpenNetLab or AlphaRTC, kindly consider citing the following paper:

@inproceedings{eo2022opennetlab,
  title={{OpenNetLab}: Open platform for {RL}-based congestion control for real-time communications},
  author={Eo, Jeongyoon and Niu, Zhixiong and Cheng, Wenxue and Yan, Francis Y and Gao, Rui and Kardhashi, Jorina and Inglis, Scott and Revow, Michael and Chun, Byung-Gon and Cheng, Peng and Xiong, Yongqiang},
  booktitle={Proceedings of the 6th Asia-Pacific Workshop on Networking},
  pages={70--75},
  year={2022}
}

Environment

We recommend you directly fetch the pre-provided Docker images from opennetlab.azurecr.io/alphartc or from the GitHub release.

From docker registry

docker pull opennetlab.azurecr.io/alphartc
docker image tag opennetlab.azurecr.io/alphartc alphartc

From GitHub release

wget https://github.com/OpenNetLab/AlphaRTC/releases/latest/download/alphartc.tar.gz
docker load -i alphartc.tar.gz

Ubuntu 18.04 or 20.04 is the only officially supported distro at this moment. For other distros, you may be able to compile your own binary, or use our pre-provided Docker images.

Compilation

Option 1: Docker (recommended)

To compile AlphaRTC, follow these steps:

  1. Prerequisites

    Make sure Docker is installed on your system and add your user to the docker group.

    # Install Docker
    curl -fsSL get.docker.com -o get-docker.sh
    sudo sh get-docker.sh
    sudo usermod -aG docker ${USER}
  2. Clone the code

    git clone https://github.com/OpenNetLab/AlphaRTC.git
  3. Build Docker images

    cd AlphaRTC
    make all

    You should then see two Docker images, alphartc and alphartc-compile, when running sudo docker images

Option 2: Compile from Scratch

If you don't want to use Docker, or have other reasons to compile from scratch (e.g., you want a native Windows build), you may use this method.

Note: all commands below work for both Linux (sh) and Windows (pwsh), unless otherwise specified

  1. Grab essential tools

    You may follow the guide here to obtain a copy of depot_tools

  2. Clone the repo

    git clone https://github.com/OpenNetLab/AlphaRTC.git
  3. Sync the dependencies

    cd AlphaRTC
    gclient sync
    mv src/* .
  4. Generate build rules

    Windows users: Please use the x64 Native Tools Command Prompt for VS2017. The clang version that ships with the project is 9.0.0, which is incompatible with VS2019. In addition, the environment variable DEPOT_TOOLS_WIN_TOOLCHAIN has to be set to 0 and GYP_MSVS_VERSION has to be set to 2017.

    gn gen out/Default
  5. Compile

    ninja -C out/Default peerconnection_serverless

    For Windows users, we also provide a GUI version. You may compile it via

    ninja -C out/Default peerconnection_serverless_win_gui

Demo

AlphaRTC consists of many different components. peerconnection_serverless is a demo application that comes with AlphaRTC. It establishes an RTC connection with another peer without the need for a server.

In order to run the application, you will need a configuration file in JSON format. The details are explained in the next section.

In addition to the config file, you will also need other files, such as video/audio source files and an ONNX model.

To run an AlphaRTC instance, put the config files in a directory, e.g., config_files, then mount it to an endpoint inside the alphartc container:

sudo docker run -v config_files:/app/config_files alphartc peerconnection_serverless /app/config_files/config.json

Since peerconnection_serverless needs two peers, you may spawn two instances (a receiver and a sender) on the same network and make them talk to each other. For more information, see the Docker networking documentation.

Configurations for peerconnection_serverless

This section describes required fields for the json configuration file.

  • serverless_connection

    • sender
      • enabled: If set to true, the client will act as sender and automatically connect to receiver when launched
      • send_to_ip: The IP of serverless peerconnection receiver
      • send_to_port: The port of serverless peerconnection receiver
    • receiver
      • enabled: If set to true, the client will act as receiver and wait for sender to connect.
      • listening_ip: The IP address that the receiver's socket binds and listens to
      • listening_port: The port number that the receiver's socket binds and listens to
    • autoclose: The time in seconds before the client closes automatically (runs indefinitely if autoclose=0)

    Note: one and only one of sender.enabled and receiver.enabled has to be true. I.e., sender.enabled XOR receiver.enabled

  • bwe_feedback_duration: The interval, in milliseconds, at which the receiver sends its estimated target rate as feedback

  • video_source

    • video_disabled:
      • enabled: If set to true, the client will not take any video source as input
    • webcam:
      • enabled: Windows-only. If set to true, then the client will use the web camera as the video source. For Linux, please set to false
    • video_file:
      • enabled: If set to true, then the client will use a video file as the video source
      • height: The height of the input video
      • width: The width of the input video
      • fps: The frames per second (FPS) of the input video
      • file_path: The file path of the input video in YUV format
    • logging:
      • enabled: If set to true, the client will write logs to the specified file
      • log_output_path: The output path of the log file

    Note: one and only one of video_source.webcam.enabled and video_source.video_file.enabled has to be true. I.e., video_source.webcam.enabled XOR video_source.video_file.enabled

  • audio_source

    • microphone:
      • enabled: Whether to enable microphone output or not
    • audio_file:
      • enabled: Whether to enable audio file input or not
      • file_path: The file path of the input audio file in WAV format
  • save_to_file

    • enabled: Whether to enable file saving or not
    • audio:
      • file_path: The file path of the output audio file in WAV format
    • video
      • width: The width of the output video file
      • height: The height of the output video file
      • fps: Frames per second of the output video file
      • file_path: The file path of the output video file in YUV format
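
Putting the fields above together, a minimal receiver-side configuration might look like the following. All values and file paths are illustrative placeholders (not shipped defaults), and the nesting simply follows the field list above:

```json
{
  "serverless_connection": {
    "sender": { "enabled": false },
    "receiver": {
      "enabled": true,
      "listening_ip": "0.0.0.0",
      "listening_port": 8000
    },
    "autoclose": 20
  },
  "bwe_feedback_duration": 200,
  "video_source": {
    "video_disabled": { "enabled": false },
    "webcam": { "enabled": false },
    "video_file": {
      "enabled": true,
      "height": 240,
      "width": 320,
      "fps": 10,
      "file_path": "testmedia/test.yuv"
    },
    "logging": {
      "enabled": true,
      "log_output_path": "webrtc.log"
    }
  },
  "audio_source": {
    "microphone": { "enabled": false },
    "audio_file": {
      "enabled": true,
      "file_path": "testmedia/test.wav"
    }
  },
  "save_to_file": {
    "enabled": true,
    "audio": { "file_path": "outaudio.wav" },
    "video": {
      "width": 320,
      "height": 240,
      "fps": 10,
      "file_path": "outvideo.yuv"
    }
  }
}
```

A sender-side config would flip sender.enabled and receiver.enabled, and replace the listening fields with send_to_ip and send_to_port.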

Use PyInfer or ONNXInfer

PyInfer

The default bandwidth estimator is PyInfer. To use it, implement a Python class named Estimator, with the required methods report_states and get_estimated_bandwidth, in a file named BandwidthEstimator.py, and put that file in your workspace. Here is an example BandwidthEstimator.py with a fixed estimated bandwidth of 1 Mbps:

class Estimator(object):
    def report_states(self, stats: dict):
        '''
        stats is a dict with the following items
        {
            "send_time_ms": uint,
            "arrival_time_ms": uint,
            "payload_type": int,
            "sequence_number": uint,
            "ssrc": int,
            "padding_length": uint,
            "header_length": uint,
            "payload_size": uint
        }
        '''
        pass

    def get_estimated_bandwidth(self)->int:
        return int(1e6) # 1Mbps
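
Beyond a fixed value, report_states supplies enough per-packet information to track a live receive rate. The sketch below is only an illustration, not AlphaRTC's shipped estimator; the window size, safety factor, and rate clamps are arbitrary assumptions:

```python
import collections


class Estimator(object):
    """Illustrative sketch: estimate bandwidth from observed throughput
    over a sliding time window. Constants are arbitrary assumptions."""

    def __init__(self, window_ms=1000, scale=0.95,
                 min_bps=int(80e3), max_bps=int(8e6)):
        self.window_ms = window_ms  # sliding window length
        self.scale = scale          # safety factor below measured rate
        self.min_bps = min_bps
        self.max_bps = max_bps
        # (arrival_time_ms, wire_size_bytes) pairs inside the window
        self.packets = collections.deque()

    def report_states(self, stats: dict):
        # total on-wire size of this packet
        size = (stats["payload_size"] + stats["header_length"]
                + stats["padding_length"])
        now = stats["arrival_time_ms"]
        self.packets.append((now, size))
        # evict packets that fell out of the sliding window
        while self.packets and now - self.packets[0][0] > self.window_ms:
            self.packets.popleft()

    def get_estimated_bandwidth(self) -> int:
        if len(self.packets) < 2:
            return self.min_bps  # not enough data yet
        span_ms = self.packets[-1][0] - self.packets[0][0]
        if span_ms <= 0:
            return self.min_bps
        total_bytes = sum(size for _, size in self.packets)
        bps = total_bytes * 8 * 1000 / span_ms * self.scale
        return int(min(max(bps, self.min_bps), self.max_bps))
```

Because evicted packets leave the deque, memory stays bounded regardless of session length.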

ONNXInfer

If you want to use ONNXInfer as the bandwidth estimator, you should specify the path of the ONNX model in the config file, as in the example configuration receiver.json:

  • onnx
    • onnx_model_path: The path of the onnx model
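
For instance, a receiver.json for ONNXInfer might contain a block like the following (the model path is a placeholder):

```json
{
  "onnx": {
    "onnx_model_path": "/app/model.onnx"
  }
}
```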

Run peerconnection_serverless

  • Dockerized environment

    To better demonstrate the usage of peerconnection_serverless, we provide an all-inclusive corpus in examples/peerconnection/serverless/corpus. You can use the following commands to execute a tiny example. After these commands terminate, you will get outvideo.yuv and outaudio.wav.

    PyInfer:

    sudo docker run -d --rm -v `pwd`/examples/peerconnection/serverless/corpus:/app -w /app --name alphartc alphartc peerconnection_serverless receiver_pyinfer.json
    sudo docker exec alphartc peerconnection_serverless sender_pyinfer.json

    ONNXInfer:

    sudo docker run -d --rm -v `pwd`/examples/peerconnection/serverless/corpus:/app -w /app --name alphartc alphartc peerconnection_serverless receiver.json
    sudo docker exec alphartc peerconnection_serverless sender.json
  • Bare metal

    If you compiled your own binary, you can also run it on your bare-metal machine.

    • Linux users:

      1. Copy the provided corpus to a new directory

        cp -r examples/peerconnection/serverless/corpus/* /path/to/your/runtime
      2. Copy the essential dynamic libraries and add them to the library search path

        cp modules/third_party/onnxinfer/lib/*.so /path/to/your/dll
        export LD_LIBRARY_PATH=/path/to/your/dll:$LD_LIBRARY_PATH
      3. Start the receiver and the sender

        cd /path/to/your/runtime
        /path/to/alphartc/out/Default/peerconnection_serverless ./receiver.json
        /path/to/alphartc/out/Default/peerconnection_serverless ./sender.json
    • Windows users:

      1. Copy the provided corpus to a new directory

        cp -Recursive examples/peerconnection/serverless/corpus/* /path/to/your/runtime
      2. Copy the essential dynamic libraries and add them to the library search path

        cp modules/third_party/onnxinfer/bin/*.dll /path/to/your/dll
        set PATH=/path/to/your/dll;%PATH%
      3. Start the receiver and the sender

        cd /path/to/your/runtime
        /path/to/alphartc/out/Default/peerconnection_serverless ./receiver.json
        /path/to/alphartc/out/Default/peerconnection_serverless ./sender.json

Who Are We

OpenNetLab is an open-networking research community. Our members are from Microsoft Research Asia, Tsinghua University, Peking University, Nanjing University, KAIST, Seoul National University, National University of Singapore, SUSTech, and Shanghai Jiao Tong University.

WebRTC

You can find the README of the original WebRTC project here.


AlphaRTC's Issues

No log output when AlphaRTC is built on bare-metal windows11 machine

I followed the instructions to build on a Windows 11 machine, using an old version of depot_tools to have vpython.bat instead of vpython3.bat. The peerconnection_serverless_GUI.exe did work: it showed the transport procedure and finally transmitted output.yuv correctly. But there is no log output at all, although I turned on
"logging": {
"enabled": true,
"log_output_path": "C:\workspace\AlphaRTC\rundir\runtime\webrtc_receiver.log"
}
in receiver.json.

How to fix it?

Easy configuration of RL-based CC or GCC evaluation and update the README

  • In a main branch, allow users to specify whether to run GCC or RL-based CC testing on AlphaRTC.
    • Create a boolean option gcc inside receiver.json and sender.json config
    • Create a single CC class that uses the config to decide whether to use GCC's or RL-based CC's estimated bandwidth for bitrate control.

Error path use

AlphaRTC/DEPS

Line 2900 in d5f16e2

'src/tools_webrtc/get_landmines.py',

Error when running make all
Output

/home/onl/.vpython-root/472e9b/bin/python: can't open file 'src/tools_webrtc/get_landmines.py': [Errno 2] No such file or directory

Should be
'tools_webrtc/get_landmines.py',

Unexpected behavior executing receiver_pyinfer.json and sender_pyinfer.json via network namespace

I'm currently attempting to run the demo peerconnection_serverless using cmdinfer, hosted between a network namespace bridge and node. I've configured the destination IPs for receiver_pyinfer.json and sender_pyinfer.json as I had done for their onnx counterparts with success, but a different process seems to now be taking place through cmdinfer that ends in connection termination and outvideo.yuv generating as an empty 0 byte file, which goes as follows:

  • After initial processes by receiver and sender, receiver reports "(video_send_stream_impl.cc:408): SignalEncoderTimedOut, Encoder timed out." and stops to wait on sender
  • Sender slowly reports to console a series of messages such as "(block_processor.cc:179): Delay changed to 2 at block 429" and "(quality_scaler.cc:378): Checking average QP 70 (70)." among others from connection.cc, block_processor.cc, and quality_scaler.cc
  • Receiver appears to be processing stream frames; however, sender terminates and destroys the connection session
  • Receiver eventually finishes its processing, but ends with the message "(channel.cc:352): Network route was changed." and never terminates
  • The output files are generated, but outvideo.yuv is an empty file with 0 bytes

Below is a copy of the receiver-side log file. As the sender does not appear to log its console reports, I will do my best to relay back what is being reported as requested or seems relevant to help with tracing back the problem.
webrtc.log

pyinfer issue in bare metal compilation

After following the instructions and compiling the code, the ONNX version works fine, but PyInfer gives an issue: it says "no frames to decode" at the beginning of the session.

poor performance of ONNX model

hello,
I am interested in AlphaRTC and did some network tests, but the strange experiment results confused me very much.

First, I compiled AlphaRTC on Win10 from scratch and generated peerconnection_serverless.exe. Then I tested the ONNX Estimator and the PyInfer Estimator separately on the local loopback network, which has no bandwidth limitation. I set the returned bandwidth to 1 Mbps in BandwidthEstimator.py for PyInfer.

The test results are shown below:

    model         delay_95%/ms  avg_recv_rate/kbps
    onnx          1             68
    pyinfer (1M)  1             590

There are two questions that confuse me:

  • Why can the ONNX model only generate a 68 kbps data rate?
  • Why can't pyinfer (1M) reach nearly 1 Mbps? I guess the original data rate of the video source is nearly 590 kbps.

Thanks very much!

Bare metal fails over virtual network

I have built the "bare metal" from source on Ubuntu 16.04.7. For testing I am using two network namespace instances connected through a virtual bridge. When I launch the receiver on one namespace node, and the sender on the other node, I get a SignalEncoderTimeOut error. I have confirmed that traffic moves as expected on the virtual interface, and can run the docker version with no error. Logs attached.
receive_log.txt
send_log.txt

Core dumped at running the receiver

I could successfully run the code until it stopped working altogether with this error:

# Fatal error in: ../../examples/peerconnection/serverless/peer_connection_client.cc, line 65
# last system error: 98
# Check failed: false
# receiver.sh: line 6: 1890305 Aborted

I did not recompile or change the pipeline at all, I just reran the code. Do you know what could be the issue? Thanks a lot!

Update README with guides for GCC evaluation and docker-free compilations

  • Add a user guide for GCC evaluation: How to build GCC, configure to run GCC and getting the packet- and frame-level QoE metrics
  • Guide on build options: Recommend source build for quick local development, and docker-based build for deployment
  • Describe the merits of automated, reproducible testing of RL-based CC and GCC using the end-to-end peerconnection_serverless app in AlphaRTC. (convenience of not needing the signaling server, automated grading of QoE metrics, etc)

Remove Redis related code

RETURN_ON_FAIL(GetValue(top, "redis", &second));
RETURN_ON_FAIL(GetString(second, "ip", &config->redis_ip));
RETURN_ON_FAIL(GetInt(second, "port", &config->redis_port));
RETURN_ON_FAIL(GetString(second, "session_id", &config->redis_sid));
RETURN_ON_FAIL(GetInt(second, "redis_update_duration",
&config->redis_update_duration_ms));
second.clear();

Please retrieve whole project

Add guidelines on basic call quality statistics

The current use of the skeleton StatsCollect in remote_estimator_proxy.cc confuses users: by default it does not collect any statistics, and there are no specific guidelines on how to use it.

Let's remove StatsCollect and add guidelines for investigating basic call quality statistics from receiver- and sender-side logs in README.

How to compile shared libraries & docker build gets stuck

Hello, I wonder whether there are any instructions about how the shared libraries under /modules/third_party/onnxinfer/lib are compiled. Since their source codes seem not to be included in this repo, it feels a little bit hard to me to imagine how to fit an onnx model into it.
Though we can directly specify the onnx model path in the configuration file, what the input and output look like is still unclear to me. I'd appreciate it if some detailed descriptions (or the source codes of the shared libraries, directly) could be provided. Thanks a lot.

Faster azure pipeline testing

Currently it takes ~30 min for each commit to an opened PR, which is quite long.
In particular, the "sync dependencies" phase takes 15m 38s, as the test pipeline builds everything from scratch, including depot_tools and fetching Chromium (gclient sync).
We need to find a way to remove this phase with a Docker image that contains depot_tools and Chromium.


Add corpus for peerconnection_serverless

Add a corpus, including a send config, a recv config, and a video file with a usable copyright, to the folder examples/peerconnection/serverless/corpus/ so that this project can run directly as the README says.

Inconsistent original/output video size

Hi,

I used the default settings for peerconnection_serverless. However, the original ~1 s test.yuv video becomes a ~19 s output video after transmission.

Could someone help me out here?

Thanks!
