
aiortc's Introduction

aiortc


What is aiortc?

aiortc is a library for Web Real-Time Communication (WebRTC) and Object Real-Time Communication (ORTC) in Python. It is built on top of asyncio, Python's standard asynchronous I/O framework.

The API closely follows its JavaScript counterpart while using pythonic constructs:

  • promises are replaced by coroutines
  • events are emitted using pyee.EventEmitter

To learn more about aiortc please read the documentation.
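As a small illustration of the first point above: where a JavaScript API returns a promise to chain with .then(), the Python API simply awaits a coroutine. This is a generic asyncio sketch; the create_offer stand-in below is illustrative, not aiortc's actual API.

```python
import asyncio

# Stand-in for an API call that would return a promise in JavaScript;
# in aiortc the corresponding methods are coroutines instead.
async def create_offer():
    await asyncio.sleep(0)  # pretend to do asynchronous work
    return {"type": "offer", "sdp": "v=0 ..."}

async def main():
    # JavaScript: pc.createOffer().then(offer => ...)
    # Python:     offer = await pc.createOffer()
    offer = await create_offer()
    return offer["type"]

print(asyncio.run(main()))  # prints "offer"
```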

Why should I use aiortc?

The main WebRTC and ORTC implementations are either built into web browsers, or come in the form of native code. While they are extensively battle tested, their internals are complex and they do not provide Python bindings. Furthermore they are tightly coupled to a media stack, making it hard to plug in audio or video processing algorithms.

In contrast, the aiortc implementation is fairly simple and readable. As such it is a good starting point for programmers wishing to understand how WebRTC works or tinker with its internals. It is also easy to create innovative products by leveraging the extensive modules available in the Python ecosystem. For instance you can build a full server handling both signaling and data channels or apply computer vision algorithms to video frames using OpenCV.

Furthermore, a lot of effort has gone into writing an extensive test suite for the aiortc code to ensure best-in-class code quality.

Implementation status

aiortc allows you to exchange audio, video and data channels and interoperability is regularly tested against both Chrome and Firefox. Here are some of its features:

  • SDP generation / parsing
  • Interactive Connectivity Establishment, with half-trickle and mDNS support
  • DTLS key and certificate generation
  • DTLS handshake, encryption / decryption (for SCTP)
  • SRTP keying, encryption and decryption for RTP and RTCP
  • Pure Python SCTP implementation
  • Data Channels
  • Sending and receiving audio (Opus / PCMU / PCMA)
  • Sending and receiving video (VP8 / H.264)
  • Bundling audio / video / data channels
  • RTCP reports, including NACK / PLI to recover from packet loss

Installing

The easiest way to install aiortc is to run:

pip install aiortc

Building from source

If there are no wheels for your system or if you wish to build aiortc from source you will need a couple of libraries installed on your system:

  • Opus for audio encoding / decoding
  • LibVPX for video encoding / decoding

Linux

On Debian/Ubuntu run:

apt install libopus-dev libvpx-dev

OS X

On OS X run:

brew install opus libvpx

License

aiortc is released under the BSD license.

aiortc's People

Contributors

alegonz, alex-eri, benwilber, buendiya, davidskuza, dergenaue, dsvictor94, fippo, hentaibaka, jlaine, jmillan, johnboiles, juberti, kadoshita, laityned, michaelgira23, mijime, nickaknudson, nowke, ortegatron, phanirithvij, rprata, seler, shithead, sirf, takeshikishita, uriyyo, whitphx

aiortc's Issues

random video interrupts

Hi jlaine!
Sometimes, when we are video streaming, the frames stop. We tracked the problem to the rtcrtpreceiver.py file: it seems some frames are None and so they are not inserted into the queue. Do you have any idea why this happens and what we should do to solve the problem? Is it a bug or expected behavior?
Here is the related section in rtcrtpreceiver.py module for convenience:

async def _handle_rtp_packet(self, packet):
    self.__log_debug('< %s', packet)
    if packet.payload_type in self._decoders:
        decoder = self._decoders[packet.payload_type]
        loop = asyncio.get_event_loop()

        # RTCP
        if self.__remote_ssrc is None:
            self.__remote_ssrc = packet.ssrc
            self.__remote_counter = LossCounter(packet.sequence_number)
        else:
            self.__remote_counter.add(packet.sequence_number)

        if self._kind == 'audio':
            # FIXME: audio should use the jitter buffer!
            audio_frame = await loop.run_in_executor(None, decoder.decode, packet.payload)
            await self._track._queue.put(audio_frame)
        else:
            # check if we have a complete video frame
            self._jitter_buffer.add(packet.payload, packet.sequence_number, packet.timestamp)
            payloads = []
            got_frame = False
            last_timestamp = None
            for count in range(self._jitter_buffer.capacity):
                frame = self._jitter_buffer.peek(count)
                if frame is None:
                    break
                if last_timestamp is None:
                    last_timestamp = frame.timestamp
                elif frame.timestamp != last_timestamp:
                    got_frame = True
                    break
                payloads.append(frame.payload)

            if got_frame:
                self._jitter_buffer.remove(count)
                video_frames = await loop.run_in_executor(None, decoder.decode, payloads)
                for video_frame in video_frames:
                    await self._track._queue.put(video_frame)
I think the important section is the last else block, where the video frames are put on the queue: frames are only decoded when got_frame is True, and the other frames get discarded.
Our clients are android smart phones and we are testing in a local network.

Based on another issue we changed the buffer capacity from 32 to 64 and we got slightly better results. Do you think the buffer capacity is an important factor here?
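For readers unfamiliar with the receiver internals quoted above: the jitter buffer is a fixed-capacity structure indexed by sequence number, and peek(count) returns None for slots where a packet has not yet arrived, which is why incomplete frames stall. A minimal standalone sketch of that idea (not aiortc's actual JitterBuffer, which also handles sequence-number wrapping and eviction):

```python
class TinyJitterBuffer:
    """Fixed-capacity buffer indexed by RTP sequence number."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._frames = [None] * capacity
        self._origin = None  # sequence number stored at slot 0

    def add(self, payload, sequence_number, timestamp):
        if self._origin is None:
            self._origin = sequence_number
        delta = sequence_number - self._origin
        if 0 <= delta < self.capacity:
            self._frames[delta] = (payload, timestamp)

    def peek(self, index):
        # Returns (payload, timestamp), or None for a missing packet.
        return self._frames[index]

buf = TinyJitterBuffer(4)
buf.add(b"a", 100, 0)
buf.add(b"c", 102, 0)   # packet 101 was lost: slot 1 stays None
```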

error in rtp.py

Hi Jlaine,
Sometimes when I have an audio & video call with another client, this error appears in the terminal:

Task exception was never retrieved
future: <Task finished coro=<RTCRtpReceiver._run_rtcp() done, defined at /usr/local/lib/python3.6/site-packages/aiortc-0.9.1-py3.6-linux-x86_64.egg/aiortc/rtcrtpreceiver.py:241> exception=error('argument out of range',)>
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/site-packages/aiortc-0.9.1-py3.6-linux-x86_64.egg/aiortc/rtcrtpreceiver.py", line 272, in _run_rtcp
    await self._send_rtcp(packet)
  File "/usr/local/lib/python3.6/site-packages/aiortc-0.9.1-py3.6-linux-x86_64.egg/aiortc/rtcrtpreceiver.py", line 280, in _send_rtcp
    await self.transport._send_rtp(bytes(packet))
  File "/usr/local/lib/python3.6/site-packages/aiortc-0.9.1-py3.6-linux-x86_64.egg/aiortc/rtp.py", line 175, in __bytes__
    payload += bytes(report)
  File "/usr/local/lib/python3.6/site-packages/aiortc-0.9.1-py3.6-linux-x86_64.egg/aiortc/rtp.py", line 74, in __bytes__
    self.jitter, self.lsr, self.dlsr)
struct.error: argument out of range

It seems to be related to rtp.py. Could you let me know what it is and how to fix it?
Thanks in advance
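The "argument out of range" message is struct.pack complaining that a value does not fit the fixed-width field it is packed into; in this traceback it is one of the receiver-report fields (jitter, lsr, dlsr) overflowing its 32-bit slot. A minimal reproduction, plus the usual defensive clamp (whether and where aiortc itself clamps depends on the version):

```python
import struct

# RTCP receiver-report fields such as jitter are packed as 32-bit
# unsigned integers; anything larger raises struct.error.
jitter = 2 ** 32  # one past the maximum representable value

try:
    struct.pack("!L", jitter)
except struct.error as exc:
    print("pack failed:", exc)

# Clamping into range before packing avoids the crash.
clamped = max(0, min(jitter, 0xFFFFFFFF))
packed = struct.pack("!L", clamped)
```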

Jitterbuffer overflow causing up to 3000 lost frame packets

Hi

I'm testing a WebRTC-to-TensorFlow process which uses aiortc version 0.5.0 to connect to a Kurento media server. I've discovered what looks like a bug in RTCRtpReceiver and JitterBuffer during video processing: if a frame consists of more packets than the jitter buffer capacity, the process discards further packets until MAX_DROPOUT is reached, then starts processing again.

I have created a local fix by:

  1. Adding a resize method to the jitter buffer
  2. When the maximum capacity is reached, processing the frame contents anyway

In testing, the largest frame size I saw was 85 packets.

My code changes are as follows:

In rtcrtpreceiver.py:

I've changed:

line 100 in the released file (of async def _handle_rtp_packet(self, packet))

if got_frame:
    self._jitter_buffer.remove(count)
    for video_frame in decoder.decode(*payloads):
        await self._track._queue.put(video_frame)

to:

if got_frame or (count == self._jitter_buffer.capacity - 1 and self._jitter_buffer.fixed()):
    self._jitter_buffer.remove(count)
    for video_frame in decoder.decode(*payloads):
        await self._track._queue.put(video_frame)

I have also added the following methods to jitterbuffer.py:

def fixed(self):
    return self._capacity >= MAX_CAPACITY

def __resize(self, capacity):
    if capacity > self._capacity:
        frames = [None for i in range(capacity)]
        head = self._head
        for i in range(self._capacity):
            frames[i] = self._frames[head]
            head = (head + 1) % self._capacity
        self._capacity = capacity
        self._frames = frames
        self._head = 0

and changed line 36 (of def add(self, payload, sequence_number, timestamp)) from:

if delta >= self._capacity:
    if delta > MAX_DROPOUT:
        self.__reset()
        self._origin = sequence_number
        delta = 0
    else:
        return

to

if delta >= self._capacity:
    if not self.fixed() and delta < (self._capacity + self._capacity / 2):
        self.__resize(self._capacity * 2)
    elif delta > MAX_DROPOUT:
        self.__reset()
        self._origin = sequence_number
        delta = 0
    else:
        return

I currently have MAX_CAPACITY set to 256.

This seems to fix the problem.
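For readability, here is the same doubling strategy written out as a standalone, runnable sketch of the patch above, with MAX_CAPACITY set to 256 as in the description:

```python
MAX_CAPACITY = 256

class GrowableBuffer:
    """Ring buffer that doubles its capacity, up to MAX_CAPACITY,
    preserving element order on resize."""

    def __init__(self, capacity):
        self._capacity = capacity
        self._frames = [None] * capacity
        self._head = 0

    def fixed(self):
        return self._capacity >= MAX_CAPACITY

    def resize(self, capacity):
        if capacity > self._capacity:
            frames = [None] * capacity
            head = self._head
            # Copy the old ring contents starting from the head, so the
            # oldest frame ends up at index 0 of the new storage.
            for i in range(self._capacity):
                frames[i] = self._frames[head]
                head = (head + 1) % self._capacity
            self._capacity = capacity
            self._frames = frames
            self._head = 0
```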

SyntaxError: 'await' expressions in comprehensions are not supported

Hi. I am using Python 3.5.3 on Debian9 and latest aiortc source. I can run the apprtc example fine, but when I attempt to run videostream-cli I get the following error:

mike@debian9:/files/webrtc/aiortc/examples/videostream-cli$ python3 cli.py offer -v
File "cli.py", line 47
frames = [await track.recv() for track in self.tracks]
^
SyntaxError: 'await' expressions in comprehensions are not supported

Any ideas what I am doing wrong? Thanks for the excellent webrtc framework!

Mike M.
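The error message is accurate: await inside a list comprehension only became legal in Python 3.6 (PEP 530), so the example fails on 3.5. A 3.5-compatible rewrite gathers the coroutines instead, sketched here with a dummy track (on 3.5 itself you would drive it with loop.run_until_complete, since asyncio.run only arrived in 3.7):

```python
import asyncio

class DummyTrack:
    # Stand-in for a media track with an async recv() method.
    def __init__(self, value):
        self.value = value

    async def recv(self):
        await asyncio.sleep(0)
        return self.value

async def recv_all(tracks):
    # Python 3.5 compatible: no await inside a comprehension.
    # Equivalent one-liner on 3.6+: [await t.recv() for t in tracks]
    return await asyncio.gather(*[t.recv() for t in tracks])

tracks = [DummyTrack(1), DummyTrack(2)]
print(asyncio.run(recv_all(tracks)))  # [1, 2]
```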

Help on getting started: Django integration

Hi, we are developing a Django project, and are struggling to find backend libraries for our P2P video chat service. We have looked into using Django Channels, but it has no WebRTC support. Do you have any ideas on how to integrate aiortc into a Django project, as an app for example? I know this isn't the right place to ask this question; maybe you could have a Q&A page. Thank you for your time.

support multiple video/audio tracks

Excellent package.
referring to:
raise InternalError('Only a single %s track is supported for now' % track.kind) in RTCPeerConnection
I am wondering what it would take to implement multiple tracks for a peer connection, and what alternative is generally recommended for handling multiple video tracks in the interim?
Thanks a lot.

Import failing on MacOS

Hey Jeremy!

Not sure what the best way to reach you is, hence opening an issue :)
I have been exploring aiortc, and am trying to install the package from source on my Mac.

Even though the installation succeeds when I run python setup.py install, the import fails, mainly because it cannot find the opus and vpx extension modules.

Traceback when I attempt to import on Python2 and Python3.6.2

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/satprasa/personal/Dropbox/My_MS_Notes/BU_learning/Algo/aiortc-electrowizard/aiortc/__init__.py", line 9, in <module>
    from .rtcpeerconnection import RTCPeerConnection  # noqa
  File "/Users/satprasa/personal/Dropbox/My_MS_Notes/BU_learning/Algo/aiortc-electrowizard/aiortc/rtcpeerconnection.py", line 8, in <module>
    from .codecs import MEDIA_CODECS
  File "/Users/satprasa/personal/Dropbox/My_MS_Notes/BU_learning/Algo/aiortc-electrowizard/aiortc/codecs/__init__.py", line 3, in <module>
    from .opus import OpusDecoder, OpusEncoder
  File "/Users/satprasa/personal/Dropbox/My_MS_Notes/BU_learning/Algo/aiortc-electrowizard/aiortc/codecs/opus.py", line 4, in <module>
    from ._opus import ffi, lib
ModuleNotFoundError: No module named 'aiortc.codecs._opus'

I have ensured that opus and libvpx are installed using brew. What am I missing? Appreciate the help!

Cheers,
Sathvik
aiortc_import_problem_dependencies.txt

example videostream-cli does not work

I ran the example:
python3 cli.py offer
python3 cli.py answer
but hit the following error:

Sending video for 10s
Task exception was never retrieved
future: <Task finished coro=<RTCPeerConnection.__connect() done, defined at /usr/local/lib/python3.5/dist-packages/aiortc/rtcpeerconnection.py:442> exception=Exception('libvpx error: ABI version mismatch',)>
Traceback (most recent call last):
  File "/usr/lib/python3.5/asyncio/tasks.py", line 239, in _step
    result = coro.send(None)
  File "/usr/local/lib/python3.5/dist-packages/aiortc/rtcpeerconnection.py", line 450, in __connect
    await transceiver.receiver.receive(self.__remoteRtp[transceiver])
  File "/usr/local/lib/python3.5/dist-packages/aiortc/rtcrtpreceiver.py", line 140, in receive
    self.__decoders[codec.payloadType] = get_decoder(codec)
  File "/usr/local/lib/python3.5/dist-packages/aiortc/codecs/__init__.py", line 29, in get_decoder
    return VpxDecoder()
  File "/usr/local/lib/python3.5/dist-packages/aiortc/codecs/vpx.py", line 133, in __init__
    _vpx_assert(lib.vpx_codec_dec_init(self.codec, lib.vpx_codec_vp8_dx(), ffi.NULL, 0))
  File "/usr/local/lib/python3.5/dist-packages/aiortc/codecs/vpx.py", line 127, in _vpx_assert
    raise Exception('libvpx error: ' + reason.decode('utf8'))
Exception: libvpx error: ABI version mismatch
Task exception was never retrieved
future: <Task finished coro=<RTCPeerConnection.__connect() done, defined at /usr/local/lib/python3.5/dist-packages/aiortc/rtcpeerconnection.py:442> exception=Exception('libvpx error: ABI version mismatch',)>
Traceback (most recent call last):
  File "/usr/lib/python3.5/asyncio/tasks.py", line 239, in _step
    result = coro.send(None)
  File "/usr/local/lib/python3.5/dist-packages/aiortc/rtcpeerconnection.py", line 450, in __connect
    await transceiver.receiver.receive(self.__remoteRtp[transceiver])
  File "/usr/local/lib/python3.5/dist-packages/aiortc/rtcrtpreceiver.py", line 140, in receive
    self.__decoders[codec.payloadType] = get_decoder(codec)
  File "/usr/local/lib/python3.5/dist-packages/aiortc/codecs/__init__.py", line 29, in get_decoder
    return VpxDecoder()
  File "/usr/local/lib/python3.5/dist-packages/aiortc/codecs/vpx.py", line 133, in __init__
    _vpx_assert(lib.vpx_codec_dec_init(self.codec, lib.vpx_codec_vp8_dx(), ffi.NULL, 0))
  File "/usr/local/lib/python3.5/dist-packages/aiortc/codecs/vpx.py", line 127, in _vpx_assert
    raise Exception('libvpx error: ' + reason.decode('utf8'))
Exception: libvpx error: ABI version mismatch
Task exception was never retrieved
future: <Task finished coro=<RTCRtpSender._run_rtp() done, defined at /usr/local/lib/python3.5/dist-packages/aiortc/rtcrtpsender.py:94> exception=Exception('libvpx error: ABI version mismatch',)>
Traceback (most recent call last):
  File "/usr/lib/python3.5/asyncio/tasks.py", line 241, in _step
    result = coro.throw(exc)
  File "/usr/local/lib/python3.5/dist-packages/aiortc/rtcrtpsender.py", line 106, in _run_rtp
    payloads = await loop.run_in_executor(None, encoder.encode, frame)
  File "/usr/lib/python3.5/asyncio/futures.py", line 361, in __iter__
    yield self  # This tells Task to wait for completion.
  File "/usr/lib/python3.5/asyncio/tasks.py", line 296, in _wakeup
    future.result()
  File "/usr/lib/python3.5/asyncio/futures.py", line 274, in result
    raise self._exception
  File "/usr/lib/python3.5/concurrent/futures/thread.py", line 55, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/usr/local/lib/python3.5/dist-packages/aiortc/codecs/vpx.py", line 226, in encode
    _vpx_assert(lib.vpx_codec_enc_init(self.codec, self.cx, self.cfg, 0))
  File "/usr/local/lib/python3.5/dist-packages/aiortc/codecs/vpx.py", line 127, in _vpx_assert
    raise Exception('libvpx error: ' + reason.decode('utf8'))

Could you give me some advice?
I am using Ubuntu 16.04.

Pass STUN and TURN

I want to be able to pass STUN and TURN servers down to ICE Connection.

Happy to implement this, I would just like to understand your preference for the implementation. Should I pass them to the constructor of RTCPeerConnection, or as some sort of configuration on that object?

Thanks.

How to fix delay in conversations

Hi Jlaine, how are you?
First of all thanks a lot for writing this great module
we are using it for a conversation between two users, but when playing the voice of the second user for the first user and vice versa, we have a delay of a little under 2 seconds, and during the conversation this delay increases.
How can I fix this issue?
Also, could you explain why you are using the pause function in your code?

The code we changed :

import argparse
import asyncio
import json
import logging
import os
import time
import wave

from aiohttp import web
import aiortc
from aiortc import RTCPeerConnection, RTCSessionDescription
from aiortc.mediastreams import (AudioFrame, AudioStreamTrack, VideoFrame,
                                 VideoStreamTrack)

import psycopg2
import hashlib

DB_details = {"db":"db_name",
               "user": "usr_name",
               "pass":"password",
               "host":"localhost",
               "port":"5432"
            }

connect_str = "dbname={} user={} host={} port={} password={}".format(DB_details["db"],
                                                                    DB_details["user"],
                                                                    DB_details["host"],
                                                                    DB_details["port"],
                                                                    DB_details["pass"])

ROOT = os.path.dirname(__file__)

rtc_io = {
    'call-1': {
        '1': {
            'audio': [],
            'video': [],
            'connected': False
        },
        '2': {
            'audio': [],
            'video': [],
            'connected': False
        }
    }
}




def md5(input_string):
    return hashlib.md5(input_string.encode('utf-8')).hexdigest()

async def pause(last, ptime):
    if last:
        now = time.time()
        await asyncio.sleep(last + ptime - now)
    return time.time()


class AudioFileTrack(AudioStreamTrack):
    def __init__(self, path):
        self.last = None
        self.reader = wave.Wave_read(path)

    async def recv(self):
        self.last = await pause(self.last, 0.02)
        return AudioFrame(
            channels=self.reader.getnchannels(),
            data=self.reader.readframes(160),
            sample_rate=self.reader.getframerate())


class AudioRemoteTrack(AudioStreamTrack):
    def __init__(self, call_id, remote_id):
        self.last = None
        self.call_id = call_id

        print("remote id ",remote_id)
        self.remote_id = remote_id
        self.default_frame = AudioFrame(channels=1, data=b'\x00\x00' * 160, sample_rate=8000)

    async def recv(self):
        self.last = await pause(self.last, 0.02)
        if rtc_io[self.call_id][self.remote_id]['audio']:
            audio_frame = rtc_io[self.call_id][self.remote_id]['audio'].pop(0)
            return audio_frame
        else:
            return self.default_frame


class VideoRemoteTrack(VideoStreamTrack):
    def __init__(self, call_id, remote_id):
        self.last = None
        self.call_id = call_id
        self.remote_id = remote_id
        self.default_frame = VideoFrame(width=640, height=480)

    async def recv(self):
        self.last = await pause(self.last, 0.05)
        if int(self.remote_id) == 2:
            print('2:   ', rtc_io[self.call_id][self.remote_id]['video'])
        if rtc_io[self.call_id][self.remote_id]['video']:
            self.default_frame = video_frame = rtc_io[self.call_id][self.remote_id]['video'].pop(0)
            return video_frame
        else:
            return self.default_frame


class VideoDummyTrack(VideoStreamTrack):
    def __init__(self):
        width = 640
        height = 480

        self.counter = 0
        self.frame_green = VideoFrame(width=width, height=height)
        self.frame_remote = VideoFrame(width=width, height=height)
        self.last = None

    async def recv(self):
        self.last = await pause(self.last, 0.04)
        self.counter += 1
        if (self.counter % 100) < 50:
            return self.frame_green
        else:
            return self.frame_remote


async def consume_audio(track, call_id, local_id, remote_id):
    
    while True:
        await asyncio.sleep(0.02)
        frame = await track.recv()
        if rtc_io[call_id][remote_id]["connected"]:
            rtc_io[call_id][local_id]['audio'].append(frame)


async def consume_video(track, call_id, local_id, remote_id):
    if rtc_io[call_id][remote_id]["connected"]:
        while True:
            rtc_io[call_id][local_id]['video'].append(await track.recv())


async def consume_video2(track, local_video):
    """
    Drain incoming video, and echo it back.
    """
    while True:
        local_video.frame_remote = await track.recv()


async def index(request):
    html = open(os.path.join(ROOT, 'index.html'), 'r').read()
    return web.Response(content_type='text/html', text=html)


async def offer(request):
    data = await request.json()
    offer = data['offer']
    offer = RTCSessionDescription(
        sdp=offer['sdp'],
        type=offer['type'])

    pc = RTCPeerConnection()
    pc._consumers = []
    pcs.append(pc)

    local_id = data['local_id']
    call_dict = rtc_io[data['call_id']]
    remote_id = [k for k in call_dict.keys() if k != local_id][0]
    call_dict[local_id]["connected"] = True
    # there is only two keys in call_dict and we want the other key

    remote_audio = AudioRemoteTrack(data['call_id'], remote_id)
    remote_video = VideoRemoteTrack(data['call_id'], remote_id)

    @pc.on('datachannel')
    def on_datachannel(channel):
        @channel.on('message')
        def on_message(message):
            channel.send('pong')

    @pc.on('track')
    def on_track(track):
        if track.kind == 'audio':
            pc.addTrack(remote_audio)
            pc._consumers.append(asyncio.ensure_future(consume_audio(track, data['call_id'], local_id, remote_id)))
        elif track.kind == 'video':
            pc.addTrack(remote_video)
            pc._consumers.append(asyncio.ensure_future(consume_video(track, data['call_id'], local_id, remote_id)))

    await pc.setRemoteDescription(offer)
    answer = await pc.createAnswer()
    await pc.setLocalDescription(answer)

    return web.Response(
        content_type='application/json',
        text=json.dumps({
            'sdp': pc.localDescription.sdp,
            'type': pc.localDescription.type
        }))


pcs = []


async def on_shutdown(app):
    # stop audio / video consumers
    for pc in pcs:
        for c in pc._consumers:
            c.cancel()

    # close peer connections
    coros = [pc.close() for pc in pcs]
    await asyncio.gather(*coros)


async def call(request):
    conn = psycopg2.connect(connect_str)
    cursor = conn.cursor()
    # call_id = str(md5(str(time.time())))
    call_inserted_id = 'call-1'
    caller_id = request.query['caller_id']
    callee_id = request.query['callee_id']

    rtc_io[call_inserted_id] = {}
    rtc_io[call_inserted_id][caller_id] = {
                                            'audio': [], 
                                            'video': [],
                                            'connected':False
                                        }
    rtc_io[call_inserted_id][callee_id] = {
                                            'audio': [], 
                                            'video': [],
                                            'connected':False
                                        }

    sql = f"""
          INSERT INTO calls (id,caller,callee,
                            type,start_time,status)
          VALUES ('{call_inserted_id}',{caller_id},{callee_id},2,{time.time()},1)
          returning id;
            """
    cursor.execute(sql)
    conn.commit()

    return web.Response(
        content_type='application/json',
        text=json.dumps({
            'call_id': call_inserted_id
        }))


async def call_answer(request):
    call_id = request.query["call_id"]
    status = request.query["status"]

    conn = psycopg2.connect(connect_str)
    cursor = conn.cursor()
    sql = f"""
          update calls
          set status = {status}
          where id = '{call_id}';
          """
    cursor.execute(sql)
    conn.commit()

    sql2 = f"""
            select caller 
            from calls
            where id = '{call_id}';
            """
    cursor.execute(sql2)
    res = cursor.fetchone()
    
    rtc_io[call_id][res[0]]["audio"] = []

    return web.Response(
        content_type='application/json',
        text=json.dumps({
            'status': 'success'
        }))

if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='WebRTC audio / video / data-channels demo')
    parser.add_argument('--port', type=int, default=8080,
                        help='Port for HTTP server (default: 8080)')
    parser.add_argument('--verbose', '-v', action='count')
    args = parser.parse_args()

    if args.verbose:
        logging.basicConfig(level=logging.DEBUG)

    app = web.Application()
    app.on_shutdown.append(on_shutdown)
    app.router.add_get('/call', call)
    app.router.add_get('/call_answer', call_answer)
    app.router.add_get('/', index)
    app.router.add_post('/offer', offer)
    web.run_app(app, port=args.port, host="0.0.0.0")

Thank you so much for your guidance
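On the question about pause: it paces recv() so that frames are produced at roughly real-time intervals (20 ms for 160-sample frames at 8 kHz) instead of as fast as the event loop can spin. A minimal demonstration of the pacing behaviour, using the same pause helper as the code above:

```python
import asyncio
import time

async def pause(last, ptime):
    # Sleep just long enough that successive calls are ~ptime apart.
    if last:
        now = time.time()
        await asyncio.sleep(last + ptime - now)
    return time.time()

async def main():
    start = time.time()
    last = None
    for _ in range(3):  # first call returns immediately, next two wait
        last = await pause(last, 0.02)
    return time.time() - start

elapsed = asyncio.run(main())
print(round(elapsed, 3))  # roughly 0.04 s: two 20 ms gaps
```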

Latency with data channel

Hello,

We have a server that sends data every 30 ms; on the other side we have a client receiving this data.

With 0.9.1 the client receives data every 30 ms.

With 0.9.3 the client receives data in batches: nothing for 200 ms, then several messages within 1 or 2 ms.

Is this normal, or is it a bug?

Regards,

how to send offer data to remote server?

Hello,
I'm new to RTC. In your examples folder, I saw you've made the index.html and client.js files, and through these pages clients can send offer data to the server so they can connect to each other.
If I want to use these two pages as the client side and send offer data from the client to the server, how do I do this?

aiortc.contrib not found

When I run python apprtc.py with Python 3.6.6 in the apprtc example, the import of aiortc.contrib fails:
Traceback (most recent call last):
File "apprtc.py", line 14, in
from aiortc.contrib.media import frame_from_bgr
ModuleNotFoundError: No module named 'aiortc.contrib'

how to write tracks to file?

I want to write audio tracks to a file, for example a WAV file. Could you please provide some information on how to do that?
To be specific, in the examples/server package and in the consume_audio function, how can I write the result of track.recv to a WAV file?
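Assuming the received audio frames carry raw 16-bit PCM in a data attribute along with channels and sample_rate (as the AudioFrame shown elsewhere in these issues does), the standard-library wave module is enough to write them out. A hedged sketch using a stand-in frame class:

```python
import wave

class Frame:
    # Stand-in for a received audio frame with raw 16-bit PCM payload.
    def __init__(self, data, channels=1, sample_rate=8000):
        self.data = data
        self.channels = channels
        self.sample_rate = sample_rate

def write_frames(path, frames):
    with wave.open(path, "wb") as writer:
        writer.setnchannels(frames[0].channels)
        writer.setsampwidth(2)  # 16-bit samples
        writer.setframerate(frames[0].sample_rate)
        for frame in frames:
            writer.writeframes(frame.data)

# One 20 ms frame of silence: 160 samples * 2 bytes.
write_frames("out.wav", [Frame(b"\x00\x00" * 160)])
```

In the consume_audio loop, you would collect each result of track.recv() and feed it to a writer like this instead of discarding it.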

ICE Connection State stuck at "new"

Hi there,

I'm trying to connect with a remote peer in a browser, but despite both peers exchanging their offer/answer, the aiortc peer's ICE connection state is stuck at "new".

I've checked the gathering state, and it is "complete", and the signaling state is "stable".
In addition, the browser peer's ICE connection state is stuck at "checking". I've tried to find out why it does not work; it could be a STUN/TURN server problem.

I've checked the browser's STUN/TURN servers and they are OK. Any idea what the issue could be?

Thanks.

EDIT: I've been checking working STUN and TURN servers, and I've even added a TURN server in "rtcicetransport.py" to rule out any NAT problems during signaling.

However, nothing changes. The browser peer is Google Chrome 66.0.3359.181 (official build), using the JavaScript RTCPeerConnection, which connects to the aiortc peer using WebSockets.

The browser peer gets stuck on "ICE Candidate State: checking" and "ICE Gathering State: complete", and the aiortc peer gets stuck on "ICE Candidate State: new" and "ICE Gathering State: complete".

I've been dealing with this problem for several days, but I haven't found any solution. Could you help me please?

Thanks again.

Video size mismatch

I'm working on sending and receiving video streams between python clients.

I am having an issue with a mismatch between the number of bytes I am sending and the number I am receiving. I was originally sending a 240h x 960w frame, which is 345600 bytes, but on the receiving side I am getting 368640 bytes, which I am then unable to marshal into the correct frame size.

I have simplified the example by sending a 6h x 8w frame which is only 72 bytes, but on the receiving side I am getting 864 bytes.

I think this might have something to do with the compression. I would appreciate some advice as to where to look.

Here is my working branch. I started by modifying cli.py https://github.com/nickaknudson/aiortc/blob/testing/examples/datachannel-cli/cli.py
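One place to look: raw frames here are YUV 4:2:0, which occupies width * height * 3/2 bytes, and VP8 codes in 16x16 macroblocks, so decoders may hand back buffers padded beyond the nominal dimensions. The arithmetic below is only illustrative; the exact padding (such as the 24x24 buffer implied by the 864 bytes received) depends on the libvpx build and its border handling.

```python
def yuv420_size(width, height):
    # Luma plane (w*h) plus two quarter-resolution chroma planes.
    return width * height * 3 // 2

def padded(dim, block=16):
    # Round a dimension up to the next macroblock multiple.
    return (dim + block - 1) // block * block

print(yuv420_size(960, 240))              # 345600: the 240h x 960w frame sent
print(yuv420_size(8, 6))                  # 72: the tiny 6h x 8w test frame
print(yuv420_size(padded(8), padded(6)))  # 384: macroblock padding alone
print(yuv420_size(24, 24))                # 864: matches the bytes received
```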

Send Picture Loss Indication (PLI) when a keyframe is needed

Currently aiortc implements the RTCP NACK mechanism to report missing RTP packets to the sender, allowing seamless recovery under moderate packet loss. However, in the event of heavy packet loss, it does not yet implement Picture Loss Indication (PLI) to request a full keyframe from the remote party. To do so we need to better understand the conditions under which a PLI should be sent.

Note: on the sender side, aiortc responds both to NACK and to PLI, so this is only a receiver side issue.
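For context, a PLI is one of the smallest RTCP packets there is: a payload-specific feedback message (RFC 4585, packet type 206, FMT 1) carrying only the sender and media SSRCs. A from-scratch encoding sketch, independent of aiortc's own RTCP classes:

```python
import struct

def encode_pli(sender_ssrc, media_ssrc):
    # RFC 4585 payload-specific feedback: version=2, FMT=1 (PLI),
    # packet type 206, length = 2 (number of 32-bit words minus one).
    header = struct.pack("!BBH", (2 << 6) | 1, 206, 2)
    return header + struct.pack("!LL", sender_ssrc, media_ssrc)

packet = encode_pli(0x12345678, 0x9ABCDEF0)
print(len(packet))  # 12 bytes in total
```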

How can I save audio tracks sent by each client into a wave file?

Hi jlaine,
Thank you so much for writing this useful module.
Would you please let me know how to save the audio tracks sent by each client into separate WAV files, and then load those files for the other client?
I also tried to play live audio tracks back to the other client (instead of saving them to another format), but playback was delayed and unclear. Please tell me how I can fix that.
I appreciate your help.

Data Channel transfer rates are a bit low

Data Channel transfer rates are somewhat low, even when sending locally on the same machine.

Using the datachannel-filexfer example to send a compressed 64-megabyte file takes about 90 to 120 seconds, i.e. roughly 5 Mbps. A Chrome-to-Chrome local implementation with the same file achieves about 45 Mbps, e.g. https://webrtc.github.io/samples/src/content/datachannel/filetransfer/

My goal is to offer a hosted quality-of-service webRTC-focused speed test, offering insights into maximum webRTC UDP connection speeds and packet loss information with both upload and download. I do not yet understand how I might be able to access connection analytics, but I suppose perhaps aiortc is not the ideal solution for what I am trying to do anyways? Either way, I am loving what I am seeing so far regardless. This is Fantastic stuff!

Support MediaStreamTrack events

I tried to handle the MediaStreamTrack.onended event (see here) on a received track in aiortc, but it seems that no MediaStreamTrack events are supported yet. Is that right? Is there any other way to detect them?
At least for my use case that's a major problem. Supporting events like started, ended, etc. for MediaStreamTrack, as documented here, would be really useful.
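Until such events are supported, one workaround is to infer an "ended" condition from the consume loop itself: aiortc's `recv()` raises once a track stops, so a wrapper can invoke a callback at that point. A minimal, library-free sketch of the pattern — the `MediaStreamError` class and `DemoTrack` here are stand-ins, not aiortc objects:

```python
import asyncio

class MediaStreamError(Exception):
    """Stand-in for aiortc.mediastreams.MediaStreamError."""

async def consume_until_ended(track, on_ended):
    """Pull frames until the track stops, then fire the callback."""
    try:
        while True:
            await track.recv()
    except MediaStreamError:
        on_ended()

# demo track that yields two frames, then ends
class DemoTrack:
    def __init__(self):
        self.frames = 2

    async def recv(self):
        if self.frames == 0:
            raise MediaStreamError("track ended")
        self.frames -= 1
        return object()

events = []
asyncio.run(consume_until_ended(DemoTrack(), lambda: events.append("ended")))
print(events)  # → ['ended']
```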

[RTCRtpSender] Packet sequence_number overflow

Hi, I ran a long-running test and after some time the event loop stops when it tries to pack a packet to send.

I suspect this is because the sequence number is never reset, so it overflows the unsigned short ('H') format.

I will re-run with debugging enabled tonight and report back with more detail.

I can fork and fix this, if the spec allows wrapping to zero on overflow.

Task exception was never retrieved
future: <Task finished coro=<RTCRtpSender._run() done, defined at /home/victor/.pyenv/versions/pfcount/lib/python3.5/site-packages/aiortc/rtcrtpsender.py:78> exception=error("'H' format requires 0 <= number <= 65535",)>
Traceback (most recent call last):
  File "/home/victor/.pyenv/versions/3.5.3/lib/python3.5/asyncio/tasks.py", line 239, in _step
    result = coro.send(None)
  File "/home/victor/.pyenv/versions/pfcount/lib/python3.5/site-packages/aiortc/rtcrtpsender.py", line 97, in _run
    await self.transport._send_rtp(bytes(packet))
  File "/home/victor/.pyenv/versions/pfcount/lib/python3.5/site-packages/aiortc/rtp.py", line 138, in __bytes__
    self.ssrc)
struct.error: 'H' format requires 0 <= number <= 65535
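RFC 3550 defines the RTP sequence number as a 16-bit field that wraps around, so resetting to zero on overflow is indeed the expected behaviour. The fix amounts to masking the increment before packing:

```python
MAX_SEQ = 1 << 16  # RTP sequence numbers are unsigned 16-bit (RFC 3550)

def next_sequence_number(seq: int) -> int:
    """Increment an RTP sequence number, wrapping at 65536."""
    return (seq + 1) % MAX_SEQ

print(next_sequence_number(65535))  # → 0  (wraps instead of breaking 'H' packing)
```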

Memory leak

Hi,
In a long-running test I did some memory profiling and noticed a leak:

[screenshot: memory usage graph (aiortc_memory_leak)]

Any tips on where it might be?

RemoteStreamTrack got nothing

Hi jlaine,
aiortc is a nice project, but I ran into some problems when running examples/server/server.py. I modified the consume_video and consume_audio functions to test whether streams were coming in, but I got many 'audio here' outputs and only one 'video here'.

async def consume_audio(track):
    while True:
        print('audio here')
        await track.recv()

async def consume_video(track):
    while True:
        print('video here')
        local_video.frame_remote = await track.recv()

I'm using libvpx 1.7.0 (libvpx.so.5) via conda, and a compiled aiortc module. I've tried several combinations of aiortc and libvpx:

  • aiortc from pip, libvpx=1.7.0: ImportError: libvpx.so.4: cannot open shared object file: No such file or directory
  • aiortc from pip, libvpx=1.6.1: Exception: libvpx error: ABI version mismatch
  • aiortc compiled, libvpx=1.7.0: no error, but the queue in RemoteStreamTrack is always empty
  • aiortc compiled, libvpx=1.6.1: Exception: libvpx error: ABI version mismatch

Convert YUV420 VideoFrame to Numpy Array?

Hi,

I am trying to convert the YUV420 VideoFrame.data into an RGB/RGBA Numpy array in order to apply some processing to it. However, I am struggling to make this work. I found the following article, which outlines a method:

http://picamera.readthedocs.io/en/latest/recipes2.html#unencoded-image-capture-yuv-format

My implementation is as follows; at the moment it simply tries to convert a frame of the streamed video from YUV to RGB and save the resulting image.

async def consume_video(track, local_video):
    while True:
        local_video.frame_remote = await track.recv()
        stream = local_video.frame_remote.data
        fwidth = 640
        fheight = 480
        y_size = fwidth * fheight
        uv_size = (fwidth // 2) * (fheight // 2)
        # I420 layout: a full-size Y plane, then quarter-size U and V planes.
        # U and V must be read at their plane offsets, not from the start of
        # the buffer, or the output comes out green and distorted.
        Y = np.frombuffer(stream, dtype=np.uint8, count=y_size).\
            reshape((fheight, fwidth))
        U = np.frombuffer(stream, dtype=np.uint8, count=uv_size, offset=y_size).\
            reshape((fheight//2, fwidth//2)).\
            repeat(2, axis=0).repeat(2, axis=1)
        V = np.frombuffer(stream, dtype=np.uint8, count=uv_size, offset=y_size + uv_size).\
            reshape((fheight//2, fwidth//2)).\
            repeat(2, axis=0).repeat(2, axis=1)

        YUV = np.dstack((Y, U, V)).astype(float)
        YUV[:, :, 0]  = YUV[:, :, 0]  - 16   # Offset Y by 16
        YUV[:, :, 1:] = YUV[:, :, 1:] - 128  # Offset UV by 128

        # BT.601 limited-range YUV -> RGB conversion matrix
        M = np.array([[1.164,  0.000,  1.596],    # R
                      [1.164, -0.392, -0.813],    # G
                      [1.164,  2.017,  0.000]])   # B

        RGB = YUV.dot(M.T).clip(0, 255).astype(np.uint8)

        im = Image.fromarray(RGB)
        im.save("sample.jpeg")

The result I am getting is very heavy in the greens, and quite distorted:

[screenshot: green-tinted, distorted output]

Is there another way to do this?

Has anyone else run into similar issues?
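The plane offsets are the critical detail: in an I420 buffer the U plane starts at `fwidth*fheight` and the V plane a quarter-frame later, so reading all three planes from offset 0 produces exactly this kind of green cast. As a self-contained sanity check of the conversion (a tiny synthetic mid-gray frame; no aiortc involved), this is only a sketch, assuming a raw I420 byte buffer:

```python
import numpy as np

fwidth, fheight = 4, 4
y_size = fwidth * fheight
uv_size = (fwidth // 2) * (fheight // 2)
# synthetic I420 buffer: Y=128 everywhere, U=V=128 (neutral chroma)
stream = bytes([128] * (y_size + 2 * uv_size))

# read each plane at its own offset into the buffer
Y = np.frombuffer(stream, dtype=np.uint8, count=y_size).reshape(fheight, fwidth)
U = np.frombuffer(stream, dtype=np.uint8, count=uv_size, offset=y_size)\
    .reshape(fheight // 2, fwidth // 2).repeat(2, axis=0).repeat(2, axis=1)
V = np.frombuffer(stream, dtype=np.uint8, count=uv_size, offset=y_size + uv_size)\
    .reshape(fheight // 2, fwidth // 2).repeat(2, axis=0).repeat(2, axis=1)

YUV = np.dstack((Y, U, V)).astype(float)
YUV[:, :, 0] -= 16    # limited-range Y offset
YUV[:, :, 1:] -= 128  # center chroma on zero
M = np.array([[1.164,  0.000,  1.596],
              [1.164, -0.392, -0.813],
              [1.164,  2.017,  0.000]])
RGB = YUV.dot(M.T).clip(0, 255).astype(np.uint8)
print(RGB[0, 0])  # mid-gray in, mid-gray out: [130 130 130]
```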

input/output raw rtp packet api

Hi,

I am wondering whether we could have a raw RTP packet input/output API, so that other media libraries/frameworks (ffmpeg, GStreamer) could be used to handle the media.

LAN-only testing

I'm trying to do some testing to force all traffic over the LAN. Can you suggest some methods to test this?

I started by filtering out all ICE candidates that aren't on the local network. I think this will ensure a LAN connection, but is there still always a fallback to a TURN server?

To check that it couldn't possibly be using a TURN server, I disconnected the LAN from the internet to prove to myself that nothing could be going over the internet, but then the ICE gathering process doesn't complete.

Is there a way to gather ICE candidates without an internet connection? Is there a way to ensure communication is only over LAN?

Thanks.
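Filtering candidates down to the local network can be done by inspecting each candidate's address with the stdlib `ipaddress` module: keep private (RFC 1918) and link-local addresses, drop everything else (which removes srflx/relay candidates). A sketch, assuming candidates expose their address as a string as aiortc's do:

```python
import ipaddress

def is_lan_candidate(ip: str) -> bool:
    """True if the candidate address is on a private or link-local network."""
    addr = ipaddress.ip_address(ip)
    return addr.is_private or addr.is_link_local

print(is_lan_candidate("192.168.1.20"))  # → True  (RFC 1918 host candidate)
print(is_lan_candidate("8.8.8.8"))       # → False (public srflx/relay address)
```

With only host candidates left and no STUN/TURN servers configured, nothing should leave the LAN; gathering host candidates itself requires no internet connection.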

DTLS: Handshake fails when offering

This is what OpenSSL spits out in RAWRTC:

tls: 140610245187392:error:14102410:SSL routines:dtls1_read_bytes:sslv3 alert handshake failure:ssl/record/rec_layer_d1.c:772:SSL alert number 40

I can provide a pcap trace if needed.

Implement RTCP reports

The parsing and serialization code is implemented; now we need to collect data and send the source description / sender / receiver reports.

h.264 video

Great project!

I wonder what would be required to also support H.264?

SCTP: Parse destination port

It seems the destination port is not parsed from the SDP. This is how my offer looks:

{
   "type":"offer",
   "sdp":"v=0\r\no=sdpartanic-rawrtc-0.2.1 1371244329 1 IN IP4 127.0.0.1\r\ns=-\r\nt=0 0\r\na=ice-options:trickle\r\na=group:BUNDLE rawrtc-sctp-dc\r\nm=application 9 DTLS\/SCTP 6000\r\nc=IN IP4 0.0.0.0\r\na=mid:rawrtc-sctp-dc\r\na=sendrecv\r\na=ice-ufrag:KH0HgbiF8NhcfZj8\r\na=ice-pwd:baklGzPiRBxlhfUq3Dm1fvNiqM1r28fM\r\na=setup:actpass\r\na=fingerprint:sha-256 36:13:5E:68:CB:C9:D0:43:9C:C5:32:41:F4:47:04:3F:E2:84:3A:AB:0E:78:02:FD:37:51:9F:32:10:D1:14:A2\r\na=tls-id:zLqLLaRkwjyxQKlfO9Tv0dva2McaHRID\r\na=sctpmap:6000 webrtc-datachannel 65535\r\na=max-message-size:0\r\n[...]\r\na=end-of-candidates\r\n"
}

Be aware that the SCTP port may also come in the form of

a=sctp-port:6000

in the associated media section.
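Both forms can be handled with a few regular expressions over the media section. A sketch of the parsing, exercised against a trimmed version of the offer above:

```python
import re

def parse_sctp_port(sdp):
    """Extract the SCTP port from an SDP offer, handling the modern
    'a=sctp-port:N' attribute as well as the legacy 'a=sctpmap:N ...'
    and 'm=application ... DTLS/SCTP N' forms."""
    for pattern in (r"^a=sctp-port:(\d+)",
                    r"^a=sctpmap:(\d+)\s",
                    r"^m=application\s+\d+\s+DTLS/SCTP\s+(\d+)"):
        m = re.search(pattern, sdp, re.MULTILINE)
        if m:
            return int(m.group(1))
    return None

sdp = "v=0\r\nm=application 9 DTLS/SCTP 6000\r\na=sctpmap:6000 webrtc-datachannel 65535\r\n"
print(parse_sctp_port(sdp))  # → 6000
```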

Using aiortc on Windows 10

Hi, in the docs you give installation details for Linux/macOS. How can I use this Python module on Windows? What libraries need to be installed? Thanks.

One-way media stream using modified server example

I am trying to stream RTSP video to the client. In my case, I do not want the client to send its webcam video to the server; only the server sends video. I have published the code here:

https://github.com/eufat/nodeflux-aiortc/blob/master/server.py

My current idea is to asynchronously call pc.addTrack to send the RTSP video provided via VideoFileTrack. The issue is that whenever addTrack is called, this error is thrown:

Traceback (most recent call last):
  File "server.py", line 159, in <module>
    web.run_app(app, port=args.port, host='127.0.0.1')
  File "/usr/local/lib/python3.5/dist-packages/aiohttp/web.py", line 472, in run_app
    loop.run_forever()
  File "/usr/lib/python3.5/asyncio/base_events.py", line 345, in run_forever
    self._run_once()
  File "/usr/lib/python3.5/asyncio/base_events.py", line 1276, in _run_once
    event_list = self._selector.select(timeout)
  File "/usr/lib/python3.5/selectors.py", line 441, in select
    fd_event_list = self._epoll.poll(timeout, max_ev)
  File "server.py", line 102, in handler
    roll_video()
  File "server.py", line 91, in roll_video
    pc.addTrack(local_video)
  File "/usr/local/lib/python3.5/dist-packages/aiortc/rtcpeerconnection.py", line 205, in addTrack
    transceiver = self.__createTransceiver(kind=track.kind, sender_track=track)
  File "/usr/local/lib/python3.5/dist-packages/aiortc/rtcpeerconnection.py", line 561, in __createTransceiver
    dtlsTransport = self.__createDtlsTransport()
  File "/usr/local/lib/python3.5/dist-packages/aiortc/rtcpeerconnection.py", line 488, in __createDtlsTransport
    self.__iceTransports.add(iceTransport)
AttributeError: 'list' object has no attribute 'add'

To reproduce this issue, clone the repo https://github.com/eufat/nodeflux-aiortc and start the server by running python server.py.

Opus usage

I'm looking for an example of how to use the Opus codec included with the source code.

My idea is to change the server example to write sound not as a raw WAV file but as a compressed one.

How can I do that?

Regards,
Tom

Final link failure while executing setup script on Heroku

Hello,

I ran into a weird error with the examples/server project while pushing the code to a Heroku instance for testing. It runs in my local environment, but the Heroku instance can't run it.

The error occurs when setup.py runs. I tried to debug this in several ways, such as changing the Heroku stack or adding some apt packages during the build, but I couldn't figure out what was happening.

I first ran apt-get install build-essential gcc libopus-dev libvpx-dev on the instance, which uses Python 3.6.

Error summary is below:
relocation R_X86_64_PC32 against symbol 'vpx_rv' can not be used when making a shared object; recompile with -fPIC

And setup history and failure logs are below:

Running setup.py install for aiortc: started
Running setup.py install for aiortc: finished with status 'error'
Complete output from command /app/.heroku/python/bin/python -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-qbrycl28/aiortc/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmp/pip-8b9pktnw-record/install-record.txt --single-version-externally-managed --compile:
        running install
        running build
        running build_py
        creating build
        creating build/lib.linux-x86_64-3.6
        creating build/lib.linux-x86_64-3.6/aiortc
        copying aiortc/sdp.py -> build/lib.linux-x86_64-3.6/aiortc
        ... copying some .py files ...
        copying aiortc/codecs/opus.py -> build/lib.linux-x86_64-3.6/aiortc/codecs
        running build_ext
        generating cffi module 'build/temp.linux-x86_64-3.6/aiortc.codecs._vpx.c'
        creating build/temp.linux-x86_64-3.6
        generating cffi module 'build/temp.linux-x86_64-3.6/aiortc.codecs._opus.c'
        building 'aiortc.codecs._opus' extension
        ... successfully built the opus extension ...
        building 'aiortc.codecs._vpx' extension
        gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -fPIC -I/app/.heroku/python/include/python3.6m -c build/temp.linux-x86_64-3.6/aiortc.codecs._vpx.c -o build/temp.linux-x86_64-3.6/build/temp.linux-x86_64-3.6/aiortc.codecs._vpx.o
        gcc -pthread -shared build/temp.linux-x86_64-3.6/build/temp.linux-x86_64-3.6/aiortc.codecs._vpx.o -lvpx -o build/lib.linux-x86_64-3.6/aiortc/codecs/_vpx.abi3.so
        /usr/bin/x86_64-linux-gnu-ld: /tmp/build_fec03df77a0d0bb84c5472a5fe135ee7/.apt/usr/lib/x86_64-linux-gnu/libvpx.a(deblock_sse2.asm.o): relocation R_X86_64_PC32 against symbol `vpx_rv' can not be used when making a shared object; recompile with -fPIC
        /usr/bin/x86_64-linux-gnu-ld: final link failed: Bad value
        collect2: error: ld returned 1 exit status
        error: command 'gcc' failed with exit status 1

How to add and send ICE candidates

I'm writing an RTCPeerClient using aiortc to communicate with a remote peer in a browser (a situation similar to the apprtc.py example), but that example adds remote ICE candidates with a peer connection method called addIceCandidate, which does not appear in the aiortc API and does not exist when I build the code.

So, how could I manage ICE candidates (remote and local) and add them to the peer?

Thanks

Send-only peer connection does not work

On the Python side, aiortc just receives media and does not call addTrack. The code looks like this:

@app.route('/offer', methods=['POST'])
async def offer(request):

    data = request.json 

    offer = RTCSessionDescription(sdp=data['sdp'],type=data['type'])

    pc = RTCPeerConnection()
    pc._consumers = []
    pcs.append(pc)

    @pc.on('track')
    def on_track(track):
        if track.kind == 'video':
            print('on_track')
            asyncio.ensure_future(consume_video(track))
            print('on_track')

    await pc.setRemoteDescription(offer)
    answer = await pc.createAnswer()
    await pc.setLocalDescription(answer)

    print('answer', answer)

    return json({
            'sdp': pc.localDescription.sdp,
            'type': pc.localDescription.type
        })

client side code

	var pc = new RTCPeerConnection();

	pc.onaddstream = function(event) {
		console.debug("pc::onAddStream",event);
		//Play it
		addVideoForStream(event.stream);
	};

	pc.onremovestream = function(event) {
		console.debug("pc::onRemoveStream",event);
		//Play it
		removeVideoForStream(event.stream);
	};

    const stream = await navigator.mediaDevices.getUserMedia({
        audio: false,
        video: true
    });

    addVideoForStream(stream,true);

    stream.getTracks().forEach(function(track) {
        pc.addTrack(track, stream);
    });

    const offer = await pc.createOffer({
        offerToReceiveAudio: true,
        offerToReceiveVideo: true
    });

    await pc.setLocalDescription(offer);

    const rawResponse = await fetch('http://localhost:5000/offer', {
        method: 'POST',
        headers: {
            'Accept': 'application/json',
            'Content-Type': 'application/json'
        },
        body: JSON.stringify({
            type: 'offer',
            sdp : offer.sdp
        })
    });
    
    const content = await rawResponse.json();

    console.log(content)

    const answer = new RTCSessionDescription({
        type : 'answer',
        sdp	: content.sdp
    });
    
    //Set it
    await pc.setRemoteDescription(answer);

After the offer/answer exchange, there is no media coming from the browser side.

In chrome://webrtc-internals, it shows audio/video tracks with no data.

Thanks.
