versatica / libmediasoupclient
mediasoup client side C++ library
Home Page: https://mediasoup.org
License: ISC License
Producer::Producer(
  Producer::Listener* listener,
  Producer::PublicListener* publicListener,
  std::string id,
  webrtc::MediaStreamTrackInterface* track,
  json rtpParameters,
  uint8_t maxSpatialLayer,
  json appData)
  : listener(listener), publicListener(publicListener), id(std::move(id)), track(track),
    rtpParameters(std::move(rtpParameters)), appData(std::move(appData))
{
id(std::move(id)) means that such a string will be "mem-moved" into the Producer. Is this really safe? What would happen if the caller wants to print such an id (that it got somehow from the server) after the Producer is created?
Like here. This is needed to let the app set the available VP9 SVC layers (at least for now).
However, the webrtc::RtpEncodingParameters struct does not have a scalabilityMode field, so we have a problem. Perhaps we could have a derived class with it?
On the other hand, in order to have VP9 SVC we should set the corresponding flag in WebRTC. I assume it's something like the following (to be confirmed):
webrtc::field_trial::InitFieldTrialsFromString("WebRTC-SupportVP9SVC/EnabledByFlag_3SL3TL/");
Fixed in the JS client here: versatica/mediasoup-client@6b49fab
Related issue: versatica/mediasoup-client#65
NOTE that it works in Chrome (somehow, magic!) but it's a nice bug anyway.
In the JS version I've created h264-profile-level-id and this is the commit that enables it:
In libmediasoupclient there is no need to implement a separate lib since it can just use libwebrtc (similar API, although you need to use their CodecParameterMap
structs/classes):
The README has a lot of information about installation and linkage considerations. In order not to have that info split between the README and the mediasoup website, I suggest moving it to the "Installation" page on the website and adding a link to it:
## Website and documentation
* [mediasoup.org][mediasoup-website]
Same as mediasoup-client does in its README.
In case of finding an inactive transceiver, the code is not resetting its RtpSender parameters. It must do so (otherwise the parameters/encodings of the previously sent track are applied to the new track). This is done in the JS client:
https://github.com/versatica/mediasoup-client/blob/master/lib/handlers/Chrome70.js#L166
Actually there is a good reason for every line of code in the JS client to be there.
Here a good reference: https://izzys.casa/2019/02/everything-you-never-wanted-to-know-about-cmake/
Hi. I have downloaded and compiled the WebRTC sources for Android. But when I try to compile mediasoup with links to WebRTC, I get this error:
/MediaSoupClient/libmediasoupclient/include/Consumer.hpp:6:69: fatal error: api/media_stream_interface.h: No such file or directory
These are my links to the WebRTC sources:
cmake . -Bbuild
-DLIBWEBRTC_INCLUDE_PATH:PATH=$/media/0872773872772A183/Android_Dev/projects/WebRTCSources/WEBRTC/src/
-DLIBWEBRTC_BINARY_PATH:PATH=$/media/0872773872772A183/Android_Dev/projects/WebRTCSources/WEBRTC/src/build
On this screenshot you can see the contents of the WebRTC sources:
http://joxi.ru/LmGvVM6UwJE4Zr
I am using the m73 branch and have double-checked that the api folder contains media_stream_interface.h.
Could you please help me and tell me what I am doing wrong? Maybe something is wrong with the paths?
We are using both codec.name ("OPUS") and codec.mimeType ("audio/OPUS"). As per the WebRTC 1.0 spec, codec.name no longer exists.
Changes have been done in mediasoup v3 and mediasoup-client v3. Also, the H264 codec is now matched by also checking the profile-level-id value, which must be the same as in the Router capabilities.
It looks like the symbol visibility compilation flags must be the same among all the code being linked. Otherwise, linker warnings like the following are thrown:
ld: warning: direct access in function 'nlohmann::basic_json<std::__1::map, std::__1::vector, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, bool, long long, unsigned long long, double, std::__1::allocator, nlohmann::adl_serializer> const& nlohmann::basic_json<std::__1::map, std::__1::vector, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, bool, long long, unsigned long long, double, std::__1::allocator, nlohmann::adl_serializer>::operator[]<char const>(char const*) const' from file 'CMakeFiles/test_mediasoupclient.dir/Device.test.cpp.o' to global weak symbol 'typeinfo for nlohmann::detail::type_error' from file '../libsdptransform/libsdptransform.a(parser.cpp.o)' means the weak symbol cannot be overridden at runtime. This was likely caused by different translation units being compiled with different visibility settings.
These warnings are silenced if everything being linked is compiled with the same visibility as libwebrtc: -fvisibility=hidden. That means libmediasoupclient, libsdptransform, and the projects using libmediasoupclient...
The current plan is to figure out how to tell libwebrtc not to set -fvisibility=hidden, since we cannot mandate that libmediasoupclient users set such an option in their projects. That should be up to each project, not mandated.
I've tested the libwebrtc ng option rtc_enable_symbol_export = true so far, but I see that the sources are still compiled with hidden visibility.
In Device.hpp:
- inline json Device::GetRtpCapabilities() const
+ inline const json& Device::GetRtpCapabilities() const
this->recvRtpCapabilities already exists within the Device instance, so it's safe to return it by reference (const, since the method itself is const and returning a non-const reference would not compile there).
This may happen in other places.
Not useful for now, but it enables encrypted header extensions as per RFC 6904:
Same as here.
Hi,
I am trying to build libmediasoupclient for Windows on a Windows 10 machine. I have successfully built webrtc.lib in the m73 branch, but when I built libmediasoupclient I got a lot errors.
José mentioned in the other issue that libmediasoupclient has been tested on Windows. Is there any documentation about how to do it?
Thanks a lot!
À la MediaSoupErrors.
It's being added to mediasoup (WIP commit), so it must be ignored on the client side for now (as done in mediasoup-client here).
Commit in mediasoup-client: versatica/mediasoup-client@4d6391b?ts=2
gcc 4.8's regex support is completely broken and incomplete, so gcc >= 4.9 is required; otherwise it will crash at runtime due to the ScalabilityModeRegex regex.
Here is a way to detect gcc < 4.9 (just for gcc; it should not affect clang AFAIK) that we may want to include in libmediasoupclient and mediasoup:
# Check compiler
AX_COMPILER_VENDOR
AX_COMPILER_VERSION
# GCC 4.9 is the first compiler that implements <regex>: 4.8 shipped a broken
# implementation, so we have to explicitly check for a version >= 4.9
AS_VAR_IF([ax_cv_cxx_compiler_vendor], [gnu], [dnl
AX_COMPARE_VERSION([${ax_cv_cxx_compiler_version}], [lt], [4.9], [AC_MSG_ERROR([GCC v. 4.9 is required])])
])
(note, however, that such code goes into a configure.ac file...).
More:
These changes include the v3 refactor of handlers and SDP stuff and also the codec parameters feature.
In the JS version we use an Object as the argument in most of the public API. This allows easy extensibility and does not force the user to set useless parameters (such as maxSpatialLayer for an audio Producer).
The C++ version uses multiple arguments:
Producer* SendTransport::Produce(
Producer::PublicListener* producerPublicListener,
webrtc::MediaStreamTrackInterface* track,
json simulcast,
uint8_t maxSpatialLayer,
json appData);
NOTE: BTW, those should be const json& simulcast and const json& appData.
So if I want to send a simple audio track I must do this:
json simulcast = nullptr;
json appData = json::object();
// I just wanted to send a simple audio track, but...
transport->Produce(this, track, simulcast, 0, appData);
Tomorrow we may add a new foo
param into Produce
and the API would break.
Would it make sense to have a public options struct?:
struct ProducerOptions
{
  webrtc::MediaStreamTrackInterface* track{ nullptr };
  json simulcast = nullptr;
  uint8_t maxSpatialLayer{ 0 };
  json appData = json::object();
};
Producer* SendTransport::Produce(
Producer::PublicListener* producerPublicListener,
ProducerOptions& options);
Like in here
It's an artifact. JS commit here: versatica/mediasoup-client@026445f
Here it's not receiving dtlsParameters. Why? It's common to both the AnswerMediaSection and OfferMediaSection subclasses.
It's better.
Related commit in mediasoup-client: versatica/mediasoup-client@22ab43c?ts=2
NOTE: Since I cannot test this in real browsers, it would be nice to check how the resulting SDP looks in libmediasoupclient.
Hi. Are you planning to provide sources for flutter?
To be in sync.
https://github.com/versatica/mediasoup-client/blob/master/lib/videoLayers.js
Also, the optional option is no longer needed.
BTW: Shouldn't all C++ methods start with uppercase?
Chrome 74.0.3729.157 (desktop current stable) uses temporal layers in H264 and signals them via the framemarking RTP extension. So, if the current libwebrtc version in libmediasoupclient also does it, it must announce "scalabilityMode": "L1T3" in H264, as in mediasoup-client.
NOTE: In mediasoup-client Chrome75 handler has been renamed to Chrome74 since it does everything that we expect from Chrome 75.
To be more descriptive:
- auto transport = new SendTransport(
+ auto* transport = new SendTransport(
It just makes sense for Plan-B, in which all the receiving tracks must share the very same RTP parameters. However, in Unified-Plan the RTP parameters of each remote Producer are kept. I mean: getReceivingFullRtpParameters() could just be removed.
The signature json PeerConnection::GetNativeRtpCapabilities() could become:
void PeerConnection::GetNativeRtpCapabilities(json& jsonObject)
so that internally it fills jsonObject["codecs"], etc.
Also, in:
json Handler::GetNativeRtpCapabilities()
{
std::unique_ptr<PeerConnection> pc(new PeerConnection(nullptr, {}));
return pc->GetNativeRtpCapabilities();
}
the very same:
void Handler::GetNativeRtpCapabilities(json& jsonObject)
{
std::unique_ptr<PeerConnection> pc(new PeerConnection(nullptr, {}));
pc->GetNativeRtpCapabilities(jsonObject);
}
Some changes related to VP9 and encodings signaling (needed for signaling dtx:true
when doing screen sharing):
https://github.com/versatica/mediasoup-client/commits/v3/lib/handlers/Chrome70.js?ts=2
There is an experiment/flag/trial in WebRTC to use bigger buffers on reception. This would be useful for the broadcasting scenario, so if we find where to set it, it should be an option (not hardcoded). However, more buffering means potentially more delay.
Hi. I am trying to compile the mediasoup client library and then use it in my Android project. I am wondering if it is possible at all?
According to the documentation, to compile mediasoup we first need to compile WebRTC and provide the paths to the WebRTC sources and binaries to the mediasoup CMake script.
But the documentation tells us to compile WebRTC as a static library (for desktop platforms), while that static library cannot be used on Android: Android works with shared libraries, and the compilation process for Android differs a bit from the standard WebRTC compilation. I have compiled WebRTC and got a webrtc.aar library as a result. Then I provided the paths to this .aar file and to the WebRTC sources to the mediasoup build script and got a static mediasoup .a library.
But when I imported them into my Android project and tried to access the mediasoup classes from my Android C++ code, I got lots of errors about missing references between mediasoup, libsdptransform and WebRTC. I think this happens because of wrong compilation parameters or something like that. Maybe because WebRTC was compiled for Android as an .aar file and not a .a library.
Could you please tell me: is it really possible to compile and use the mediasoup v3 client library with WebRTC in an Android project?
And if it is possible, could you please describe the basic steps of how they can all be compiled and put together?
Related commit in mediasoup-client JS: versatica/mediasoup-client@c63defc?ts=2
This makes it possible to set maxBitrate
or dtx
, etc in a single encoding (for audio, for instance, or video with no simulcast).
NOTE: This is just for Chrome >= 75.
I assume that mediasoupclient.hpp
is the public header file that users should include into their apps. It looks as follows:
#ifndef MEDIASOUP_CLIENT_HPP
#define MEDIASOUP_CLIENT_HPP
#include "Consumer.hpp"
#include "Device.hpp"
#include "Producer.hpp"
#include "Transport.hpp"
using SendTransportListener = SendTransport::Listener;
using TransportListener = Transport::Listener;
using ConsumerListener = Consumer::PublicListener;
using ProducerListener = Producer::PublicListener;
#endif
AFAIS it's exporting the following symbols:
SendTransportListener
TransportListener
ConsumerListener
ProducerListener
Those public symbols should be within the mediasoupclient
namespace.
This makes it possible to enable OPUS stereo and even DTX.
Related commit in mediasoup-client JS.
Just thinking out loud:
libmediasoupclient
../build/test/test_mediasoup-client
(my eyes!)
Instead of using the super outdated react-native-webrtc (which poorly implements a WebRTC API of 2013) we should create react-native-mediasoupclient (based on libmediasoupclient, of course).
Let this issue stay here until we initiate the new project.
Just check the commit in Chrome70 handler in mediasoup-client:
versatica/mediasoup-client@bb3d032?ts=2#diff-fff5b78808aed54b3ffac35cc482de64
As here: versatica/mediasoup-client@49ecc6d
direction attribute. Commit in mediasoup-client.
Refactor: replace the ugly transportRemoteParameters, consumerRemoteParameters, transportLocalParameters, producerLocalParameters, etc. with destructuring Objects with proper and well-known keys.
Note that there is also this naming change:
- async restartIce({ remoteIceParameters } = {})
+ async restartIce({ iceParameters } = {})
As explained here: versatica/mediasoup-client#64 (not fixed yet).
It does not affect libwebrtc, but must be fixed anyway because libwebrtc will eventually support MID header extension.
We can properly provide the PeerConnection with an array of encodings (RTCRtpEncodingParameters as per the spec) rather than passing a pseudo simulcast array, which does not scale well.
For example, the app may wish to send a single audio or video encoding (no simulcast) and set its max bitrate or framerate.
Done in JS: versatica/mediasoup-client@dd203e8