
Comments (19)

paullouisageneau commented on August 11, 2024

Well, it turns out that Chrome at least allows bypassing the message size limit by overriding the AS option in the SDP, so you don't have to listen for the events to retry.

The AS option relates to bandwidth for media over RTP. It is unrelated to message size, and it doesn't affect Data Channels. datachannel-wasm doesn't support WebRTC media, only WebRTC Data Channels.

However, I can't find b=AS in the SDP emitted by this library.

This library does not generate the SDP description. As a wrapper around the JavaScript WebRTC API, it only forwards the SDP generated by the browser.

The only thing I find in this library's SDP is the max-message-size, but I don't think Chrome respects this property at all.

This is the actual property for the max message size supported over Data Channels. It should be supported by Chrome. However, you can't just modify it and expect it to work reliably: even if you increase the value, the message size limit still probably exists in the implementation.

jeffRTC commented on August 11, 2024

@paullouisageneau Are you sure that it's not related to Data Channels?

If you look at Peer5/ShareFest#10,

I can clearly see it's related to DC and allows bypassing the onbufferedamountlow event.

the message size limit still probably exists in the implementation.

This is true, but I can see that Chrome uses an internal queue for large messages, so again the user doesn't have to listen for the onbufferedamountlow event.

paullouisageneau commented on August 11, 2024

@paullouisageneau Are you sure that it's not related to Data Channels?

If you look at Peer5/ShareFest#10,

I can clearly see it's related to DC and allows bypassing the onbufferedamountlow event.

Actually, this thread is very old (2013); it discusses the early experimental implementation of WebRTC in Chrome 25. At that time, Data Channels in Chrome did not use the current standard over SCTP but another prototype over RTP. Therefore, they had no proper congestion control and depended on the RTP bandwidth to limit the throughput instead. Since that didn't allow the traffic to adapt to the available network capacity, people used such hacks to increase the throughput manually (in any case, it was not a way to bypass the onbufferedamountlow event).

With modern SCTP Data Channels, introduced in 2014, media bandwidth is unrelated to Data Channels.

the message size limit still probably exists in the implementation.

This is true, but I can see that Chrome uses an internal queue for large messages, so again the user doesn't have to listen for the onbufferedamountlow event.

The limitation is not in the message queue but at a lower level, in the SCTP layer. For instance, a whole message might need to fit in the SCTP send or receive buffer.

The message queue is not for messages that are too large; it just stores pending messages when the user sends at a higher throughput than the network can achieve. In that case, you need to watch bufferedAmount and wait for the onbufferedamountlow event to resume sending, otherwise the queue will grow until the browser kills the connection because it takes up too much memory.

jeffRTC commented on August 11, 2024

@paullouisageneau

So, what's the optimal chunk size?

I see everywhere that people suggest using 64 KB.

Here, sending 350 KB causes the DC to get stuck and emit an "unknown" error message.

I can see the browser-emitted SDP's max-message-size is set to 2 MB.

paullouisageneau commented on August 11, 2024

So, what's the optimal chunk size?

I see everywhere that people suggest using 64 KB.

Yes, 64KB seems good, and it's small enough to work everywhere, even older browsers.

Here, sending 350 KB causes the DC to get stuck and emit an "unknown" error message.

I can see the browser-emitted SDP's max-message-size is set to 2 MB.

The other side might have a lower limit. If it's libdatachannel, the maximum is 256KB by default, but you can increase it with rtc::Configuration::maxMessageSize (note increasing the maximum size also increases per-connection memory usage).

jeffRTC commented on August 11, 2024

@paullouisageneau

Yes, 64KB seems good, and it's small enough to work everywhere, even older browsers.

I have no plans to target older browsers, only modern Chrome (or Edge).

The other side might have a lower limit. If it's libdatachannel, the maximum is 256KB by default, but you can increase it with rtc::Configuration::maxMessageSize (note increasing the maximum size also increases per-connection memory usage).

You know, I upgraded libdatachannel to the new version exactly because of the maxMessageSize option.

I have already set maxMessageSize to 12500000 (12.5 MB)
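
For reference, setting this on the libdatachannel side looks roughly like the sketch below. It relies on the rtc::Configuration::maxMessageSize field mentioned above; the makePeerConnection name and the 12.5 MB value are only illustrative.

#include <rtc/rtc.hpp>
#include <memory>

// Sketch: raising the message size limit accepted by libdatachannel.
std::shared_ptr<rtc::PeerConnection> makePeerConnection()
{
    rtc::Configuration config;
    // 12.5 MB, as above; larger values also increase per-connection memory usage.
    config.maxMessageSize = 12500000;
    return std::make_shared<rtc::PeerConnection>(config);
}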

paullouisageneau commented on August 11, 2024

Are you sure that the message size is the limitation here, and that the Data Channel doesn't get closed just because the bufferedAmount is too high? Browsers will close the connection if the buffer level exceeds 16MB.

jeffRTC commented on August 11, 2024

@paullouisageneau

I thought the message size was the limitation because sending a 300-400 KB image causes the stuck effect, but sending something like a 100 KB image doesn't. The same goes for PDF files.

However, it's not the case with txt files. I was able to send a 1 MB txt file without any issue.

Sometimes it's the same for images; I was able to send a different 400 KB image without any issue.

Here is the flow of a successful file transfer:

File https://upload.wikimedia.org/wikipedia/commons/b/bc/Alter_Elbtunnel_Nord_Kuppel_1024px_400KB.jpg
File Size 400kb

Stream Started

Sent Start Signal

Sent data with size of 73315 bytes
Sent data with size of 98250 bytes

onBufferedAmountLow Event Reached

Sent data with size of 81875 bytes
Sent data with size of 144934 bytes

Sent Finish Signal

Stream has been ended

onBufferedAmountLow Event Reached 

And here is the flow of an unsuccessful file transfer:

File https://www.bing.com/th?id=OHR.Hokulea_EN-US8698576653_1920x1080.jpg&rf=LaDigue_1920x1080.jpg
File Size 333kb

Stream Started

Sent Start Signal

Sent data with size of 333424 bytes

Sent Finish Signal

WebRTC Error Event Reached and Error is unknown

The server sees only 610 B worth of data, and I suppose this is the start signal, not the data.

I don't know what exactly this means: WebRTC Error Event Reached and Error is unknown.

Here are the libdatachannel debug logs:

debug_log.txt

paullouisageneau commented on August 11, 2024

The remote offer in libdatachannel's log contains a=max-message-size:262144, which means the max message size for the browser is 256 KiB, so 333 KB is too large for one message.

For both performance and compatibility reasons, you should not send the entire file as one message, but instead open a reliable and ordered Data Channel, read the file chunk by chunk (for instance with a chunk size of 64KB) and send each chunk in a separate message. When finished, you close the Data Channel, so the receiving side knows the file is finished. This will allow sending files of any size with good performance.
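
A rough sketch of that sending pattern, assuming a reliable and ordered rtc::DataChannel as used elsewhere in this thread; the sendFile name, the std::ifstream-based reading, and the 64 KiB chunk size are only illustrative. This simple version queues everything at once; a variant with backpressure appears further down.

#include <rtc/rtc.hpp>
#include <fstream>
#include <memory>
#include <string>
#include <vector>

// Read the file in 64 KiB chunks, send each chunk as one binary message on a
// reliable, ordered Data Channel, then close the channel to signal the end.
void sendFile(std::shared_ptr<rtc::DataChannel> channel, const std::string &path)
{
    constexpr size_t chunkSize = 64 * 1024;

    std::ifstream file(path, std::ios::binary);
    std::vector<rtc::byte> chunk(chunkSize);
    while (file.read(reinterpret_cast<char *>(chunk.data()), chunkSize) || file.gcount() > 0)
        channel->send(chunk.data(), static_cast<size_t>(file.gcount()));

    channel->close(); // the receiver treats the channel closing as end-of-file
}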

jeffRTC commented on August 11, 2024

@paullouisageneau You mentioned that Chrome has a 16 MB buffer. What would happen if I just regex-replace a=max-message-size:262144 with something larger?

jeffRTC commented on August 11, 2024

Instead open a reliable and ordered Data Channel

This happens by default, no?

paullouisageneau commented on August 11, 2024

@paullouisageneau You mentioned that Chrome has a 16 MB buffer. What would happen if I just regex-replace a=max-message-size:262144 with something larger?

The 16 MB limit is different from the message size: it's the maximum total size of all pending messages buffered at the sender. If Chrome advertises 256 KiB as the maximum message size, it corresponds to a per-message buffer size in the implementation. Changing the size in the emitted SDP will do nothing to change the internal limitation.

Instead open a reliable and ordered Data Channel

This happens by default, no?

Yes it does.

jeffRTC commented on August 11, 2024

@paullouisageneau

Changing the size is the idea I got from the old RTC thread linked above (hoping it might work here too).

I tried with 64 KB chunks and everything works, but very slowly.

Transferring a 10 MB file takes 1 min, and I have fiber internet.

paullouisageneau commented on August 11, 2024

I tried with 64 KB chunks and everything works, but very slowly.

Transferring a 10 MB file takes 1 min, and I have fiber internet.

It might be linked to the way you send the data, for instance not keeping enough data buffered in the Data Channel.

jeffRTC commented on August 11, 2024

for instance not keeping enough data buffered in the Data Channel

I don't understand. Can you elaborate?

paullouisageneau commented on August 11, 2024

How do you send the chunks?

jeffRTC commented on August 11, 2024

Like this,

auto chunks = toChunks(buffer, 64000);

for (auto const &chunk : chunks) {
    message.setBinary(chunk);

    std::string serialized = hps::to_string(message);

    if (auto channel = dc.lock())
    {
        std::cout << "Sent" << "\n";

        channel->send(reinterpret_cast<rtc::byte *>(serialized.data()), serialized.size());
    }
}

The toChunks function can be seen below:

// Splits the source buffer into consecutive chunks of at most chunkSize elements.
template<typename T>
std::vector<std::vector<T>> toChunks(const std::vector<T>& source, size_t chunkSize)
{
    std::vector<std::vector<T>> result;
    result.reserve((source.size() + chunkSize - 1) / chunkSize);

    auto start = source.begin();
    auto end = source.end();

    while (start != end) {
        // Take a full chunk if enough elements remain, otherwise the tail.
        auto next = static_cast<size_t>(std::distance(start, end)) >= chunkSize
                    ? start + chunkSize
                    : end;

        result.emplace_back(start, next);
        start = next;
    }

    return result;
}

paullouisageneau commented on August 11, 2024

If you send all chunks at the same time like this and close the Data Channel immediately, it should transfer as fast as possible.

Beware, it is unsafe if the file is bigger than a few tens of MB, as you might get over the 16 MB buffer limit. Instead, you should set bufferedAmountLowThreshold to a given value, stop filling the buffer once bufferedAmount goes above a higher watermark, and wait for onbufferedamountlow to continue sending, as in the sketch below.
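
A sketch of that backpressure pattern, reusing the toChunks() helper posted above and the channel methods named in this thread (send, close, bufferedAmount, setBufferedAmountLowThreshold, onBufferedAmountLow); the function name and the threshold values are only illustrative.

#include <rtc/rtc.hpp>
#include <memory>
#include <vector>

// Send a buffer in 64 KiB chunks, but stop filling the Data Channel once
// bufferedAmount passes a high watermark and resume from onBufferedAmountLow.
void sendFileWithBackpressure(std::shared_ptr<rtc::DataChannel> channel,
                              const std::vector<rtc::byte> &buffer)
{
    constexpr size_t chunkSize     = 64 * 1024;
    constexpr size_t lowThreshold  = 1 * 1024 * 1024;  // resume sending below 1 MiB buffered
    constexpr size_t highThreshold = 4 * 1024 * 1024;  // stop filling above 4 MiB buffered

    auto chunks = std::make_shared<std::vector<std::vector<rtc::byte>>>(toChunks(buffer, chunkSize));
    auto index = std::make_shared<size_t>(0);

    std::weak_ptr<rtc::DataChannel> weak = channel;
    auto sendSome = [weak, chunks, index]() {
        auto channel = weak.lock();
        if (!channel)
            return;
        // Fill the buffer up to the high watermark, then wait for the
        // onBufferedAmountLow callback to call us again.
        while (*index < chunks->size() && channel->bufferedAmount() <= highThreshold) {
            const auto &chunk = (*chunks)[(*index)++];
            channel->send(chunk.data(), chunk.size());
        }
        if (*index == chunks->size())
            channel->close(); // closing signals end-of-file to the receiver
    };

    channel->setBufferedAmountLowThreshold(lowThreshold);
    channel->onBufferedAmountLow(sendSome);
    sendSome();
}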

paullouisageneau commented on August 11, 2024

Closing as the original issue was a misunderstanding about the option.
