interceptor's Issues

Add tools to support RTP stream saving/playback

Summary

Add libraries that make it easier to play and record RTP streams.

Motivation

It's useful to play pre-recorded streams directly to a PeerConnection to support playback of audio/video files without transcoding.
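For illustration, a minimal sketch of what a playback helper could look like on top of pion/webrtc's TrackLocalStaticRTP. The file format here (2-byte length-prefixed RTP packets) and the fixed 20ms pacing are assumptions for the sketch, not an existing library:

import (
	"encoding/binary"
	"io"
	"os"
	"time"

	"github.com/pion/rtp"
	"github.com/pion/webrtc/v3"
)

// playRTPFile reads length-prefixed RTP packets from a file and writes them
// to a local track. Returns io.EOF when the recording ends.
func playRTPFile(path string, track *webrtc.TrackLocalStaticRTP) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	var lenBuf [2]byte
	for {
		if _, err := io.ReadFull(f, lenBuf[:]); err != nil {
			return err
		}
		buf := make([]byte, binary.BigEndian.Uint16(lenBuf[:]))
		if _, err := io.ReadFull(f, buf); err != nil {
			return err
		}
		pkt := &rtp.Packet{}
		if err := pkt.Unmarshal(buf); err != nil {
			return err
		}
		if err := track.WriteRTP(pkt); err != nil {
			return err
		}
		// naive pacing; real code should derive timing from the RTP timestamps
		time.Sleep(20 * time.Millisecond)
	}
}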

Implement better internal stats merging

What did you do?

Sending compound RTCP packets beginning with a receiver report followed by any other feedback such as NACK.

What did you expect?

Receiver report and NACKs to be reflected in the generated stats.

What happened?

Receiver reports are recorded but NACK is ignored. The reason is an early return from the loop that processes packets here: https://github.com/pion/interceptor/blob/master/pkg/stats/stats_recorder.go#L249

The same issue exists for extended reports.

Instead of returning the latest stats generated from the (extended) report block, the stats should be merged with those generated from all the other packet types included in the same compound packet.

Implementing a better merge function might also help simplify some of the other recording functions.
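For illustration, a rough sketch of the merging idea; the helper names below (latestStats, applyReceiverReport, applyNACK) are hypothetical, not the actual stats_recorder.go code:

// Accumulate updates across every packet in the compound packet and merge
// them into one stats snapshot, instead of returning early from the loop.
func (r *recorder) recordIncomingRTCP(pkts []rtcp.Packet) stats {
	s := r.latestStats()
	for _, pkt := range pkts {
		switch p := pkt.(type) {
		case *rtcp.ReceiverReport:
			s = applyReceiverReport(s, p) // merge rather than return
		case *rtcp.TransportLayerNack:
			s = applyNACK(s, p)
			// ... other packet types merge into the same s
		}
	}
	return s
}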

History in cc's `FeedbackAdapter` is never cleaned up

Your environment.

  • Version: current
  • Browser: N/A
  • Other Information - N/A

What did you do?

I noticed that the history map in cc/feedback_adapter.go never has elements removed. That means the map could grow to 1<<16 - 1 entries for each track where this is used.

What did you expect?

I would have expected some form of cleanup of the history map, though I'm not sure whether time-based cleanup makes sense.
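For illustration, a sketch of a time-based eviction; the entry field name is hypothetical, and the real history is keyed by TWCC sequence number:

// evictOld drops history entries older than maxAge. Sketch only, not the
// actual cc/feedback_adapter.go code.
func (f *FeedbackAdapter) evictOld(now time.Time, maxAge time.Duration) {
	f.lock.Lock()
	defer f.lock.Unlock()
	for seqNr, entry := range f.history {
		if now.Sub(entry.departure) > maxAge {
			delete(f.history, seqNr)
		}
	}
}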

What happened?

Implement RFC 8888 feedback adapter for GCC

The GCC implementation currently only works with TWCC feedback reports. It would be nice if it could also work with the feedback format described in RFC 8888. I think this should be straightforward to implement as it only needs to be added to feedback_adapter.go.
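For illustration, a very rough sketch of the shape such an adapter method could take. All rtcp type and field names below (CCFeedbackReport, ReportBlocks, MetricBlocks, BeginSequence, Received, ArrivalTimeOffset) are assumptions about pion/rtcp's RFC 8888 support, and the helpers are hypothetical:

// OnRFC8888Feedback converts an RFC 8888 report into acknowledgments,
// mirroring what OnTransportCCFeedback does for TWCC. Sketch only.
func (f *FeedbackAdapter) OnRFC8888Feedback(ts time.Time, report *rtcp.CCFeedbackReport) []Acknowledgment {
	var acks []Acknowledgment
	for _, block := range report.ReportBlocks {
		seq := block.BeginSequence
		for _, metric := range block.MetricBlocks {
			if ack, ok := f.takeFromHistory(seq); ok { // hypothetical helper
				if metric.Received {
					// RFC 8888 reports arrival as an offset relative to the report time
					ack.Arrival = ts.Add(-arrivalOffset(metric.ArrivalTimeOffset)) // hypothetical helper
				}
				acks = append(acks, ack)
			}
			seq++
		}
	}
	return acks
}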

The Jitter buffer interceptor doesn't handle packet loss

Your environment.

What did you do?

I'm trying to use the jitter buffer interceptor.

What did you expect?

I expect to get an error on read if the current packet was lost. And I expect to get the next packet on the next read.

What happened?

If even one packet was lost, all subsequent reads will return an error.

The reason

The interceptor does not handle the absence of the next packet and does not advance the expected packet number.


It looks like we should add something like this:

if errors.Is(err, ErrNotFound) {
	// the expected packet was lost: skip it and advance the playout head
	// so subsequent reads can make progress
	i.buffer.SetPlayoutHead(i.buffer.PlayoutHead() + 1)
}

Optimise `SendSideBWE.WriteRTCP`

Summary

Right now SendSideBWE.WriteRTCP calls delayController.updateDelayEstimate with the full list of acknowledgements from each rtcp.TransportLayerCC packet. The current implementation then passes each acknowledgement one by one into ackPipe and ackRatePipe, which can become a bottleneck when a TWCC feedback report carries more than 100 acknowledgements, as is possible in a large room with multiple active audio/video tracks.

func (d *delayController) updateDelayEstimate(acks []cc.Acknowledgment) {
	for _, ack := range acks {
		d.ackPipe <- ack
		d.ackRatePipe <- ack
	}
	d.ackRTTPipe <- acks
}

Motivation

I have been trying to enable transport-cc for audio tracks in order to get transport-wide metrics (loss, jitter, RTT) used to calculate call quality for audio-only rooms. After enabling it for audio, I observed random spikes in CPU usage (around 10x compared to without transport-cc for audio) for certain calls, where most peers were Android clients using native WebRTC.

Such spikes happened intermittently; looking at the CPU profile below via pprof, about 50% of the CPU was busy processing TWCC packets.

[pprof screenshot, 2022-06-14: roughly half of CPU time spent processing TWCC packets]

The exact cause is unknown, as it only happens for roughly 1 out of 10 calls.

Describe alternatives you've considered

In search of the reason behind the CPU spikes, I tried optimising the delayController.updateDelayEstimate method by passing all acknowledgements at once, and observed a significant improvement, shown in the benchmark results below.

Benchmark results for SendSideBWE [#142]

  1. Original Implementation: Send 1 ack at a time to each channel

    ❯ go test -run=BenchmarkSendSideBWE -bench=BenchmarkSendSideBWE_WriteRTCP -v -benchtime 5s -count 3
    goos: darwin
    goarch: arm64
    pkg: github.com/pion/interceptor/pkg/gcc
    BenchmarkSendSideBWE_WriteRTCP
    BenchmarkSendSideBWE_WriteRTCP/num_sequences=10
    BenchmarkSendSideBWE_WriteRTCP/num_sequences=10-10                824898              7663 ns/op
    BenchmarkSendSideBWE_WriteRTCP/num_sequences=10-10                868285              7222 ns/op
    BenchmarkSendSideBWE_WriteRTCP/num_sequences=10-10                833091              7747 ns/op
    BenchmarkSendSideBWE_WriteRTCP/num_sequences=100
    BenchmarkSendSideBWE_WriteRTCP/num_sequences=100-10               113960             52894 ns/op
    BenchmarkSendSideBWE_WriteRTCP/num_sequences=100-10                97976             52030 ns/op
    BenchmarkSendSideBWE_WriteRTCP/num_sequences=100-10               115982             52788 ns/op
    BenchmarkSendSideBWE_WriteRTCP/num_sequences=500
    BenchmarkSendSideBWE_WriteRTCP/num_sequences=500-10                24283            257929 ns/op
    BenchmarkSendSideBWE_WriteRTCP/num_sequences=500-10                24034            248203 ns/op
    BenchmarkSendSideBWE_WriteRTCP/num_sequences=500-10                23204            246539 ns/op
    BenchmarkSendSideBWE_WriteRTCP/num_sequences=1000
    BenchmarkSendSideBWE_WriteRTCP/num_sequences=1000-10               10000            511518 ns/op
    BenchmarkSendSideBWE_WriteRTCP/num_sequences=1000-10               10000            512859 ns/op
    BenchmarkSendSideBWE_WriteRTCP/num_sequences=1000-10               10000            510880 ns/op
    PASS
    ok      github.com/pion/interceptor/pkg/gcc     79.384s
    
    
  2. Optimised: Send all acks in each rtcp.TransportLayerCC pkt at once

    ❯ go test -run=BenchmarkSendSideBWE -bench=BenchmarkSendSideBWE_WriteRTCP -v -benchtime 5s -count 3
    goos: darwin
    goarch: arm64
    pkg: github.com/pion/interceptor/pkg/gcc
    BenchmarkSendSideBWE_WriteRTCP
    BenchmarkSendSideBWE_WriteRTCP/num_sequences=10
    BenchmarkSendSideBWE_WriteRTCP/num_sequences=10-10               1753162              3631 ns/op
    BenchmarkSendSideBWE_WriteRTCP/num_sequences=10-10               1494188              3587 ns/op
    BenchmarkSendSideBWE_WriteRTCP/num_sequences=10-10               1739785              3548 ns/op
    BenchmarkSendSideBWE_WriteRTCP/num_sequences=100
    BenchmarkSendSideBWE_WriteRTCP/num_sequences=100-10               399759             17435 ns/op
    BenchmarkSendSideBWE_WriteRTCP/num_sequences=100-10               383304             16141 ns/op
    BenchmarkSendSideBWE_WriteRTCP/num_sequences=100-10               399852             15329 ns/op
    BenchmarkSendSideBWE_WriteRTCP/num_sequences=500
    BenchmarkSendSideBWE_WriteRTCP/num_sequences=500-10               101514             58903 ns/op
    BenchmarkSendSideBWE_WriteRTCP/num_sequences=500-10                87471             59383 ns/op
    BenchmarkSendSideBWE_WriteRTCP/num_sequences=500-10               100100             62173 ns/op
    BenchmarkSendSideBWE_WriteRTCP/num_sequences=1000
    BenchmarkSendSideBWE_WriteRTCP/num_sequences=1000-10               53884            111935 ns/op
    BenchmarkSendSideBWE_WriteRTCP/num_sequences=1000-10               48418            123194 ns/op
    BenchmarkSendSideBWE_WriteRTCP/num_sequences=1000-10               47625            128780 ns/op
    PASS
    ok      github.com/pion/interceptor/pkg/gcc     90.325s
    

Comparison:

Count delta chunks per RTCP pkt   1 at a time      All at once     Improvement
10 to 15                          7544 ns/op       3588 ns/op      2.1x
100 to 150                        52570 ns/op      16301 ns/op     3.2x
500 to 750                        250890 ns/op     60153 ns/op     4.1x
1000 to 1500                      511752 ns/op     121303 ns/op    4.2x
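For illustration, a sketch of the batched variant, assuming the pipes are changed to carry slices:

// Each feedback report now costs a constant number of channel operations
// instead of one per acknowledgment.
func (d *delayController) updateDelayEstimate(acks []cc.Acknowledgment) {
	d.ackPipe <- acks     // now a chan []cc.Acknowledgment
	d.ackRatePipe <- acks // now a chan []cc.Acknowledgment
	d.ackRTTPipe <- acks
}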

Interceptors not useful by default/Require user to call ReadRTCP

If users want NACKs they need to call ReadRTCP, otherwise the func returned by BindRTCPReader isn't executed. For new users I worry this will be easy to accidentally miss. Do others think this will be an issue?

  • Can we notify users that certain features aren't available unless they call Read or ReadRTCP?
  • Should we revisit the design?

For now I am updating my pion/webrtc PR to add Read calls.

Race Condition on a NACK request

Your environment.

  • Version: interceptor v0.0.10 and webrtc v3.0.12
  • Browser: Chrome version 88.0.4324.192 (Official Build) (x86_64)

What did you do?

A race condition occurs when the interceptor tries to resend a packet on a NACK request.

==================
WARNING: DATA RACE
Read at 0x00c00047ba3a by goroutine 61:
  github.com/pion/interceptor/pkg/nack.(*sendBuffer).get()
      /home/ec2-user/go/pkg/mod/github.com/pion/interceptor@v0.0.10/pkg/nack/send_buffer.go:64 +0x8b
  github.com/pion/interceptor/pkg/nack.(*ResponderInterceptor).resendPackets.func1()
      /home/ec2-user/go/pkg/mod/github.com/pion/interceptor@v0.0.10/pkg/nack/responder_interceptor.go:110 +0x59
  github.com/pion/rtcp.(*NackPair).Range()
      /home/ec2-user/go/pkg/mod/github.com/pion/[email protected]/transport_layer_nack.go:64 +0x54
  github.com/pion/interceptor/pkg/nack.(*ResponderInterceptor).resendPackets()
      /home/ec2-user/go/pkg/mod/github.com/pion/interceptor@v0.0.10/pkg/nack/responder_interceptor.go:109 +0x158

Previous write at 0x00c00047ba3a by goroutine 65:
  github.com/pion/interceptor/pkg/nack.(*sendBuffer).add()
      /home/ec2-user/go/pkg/mod/github.com/pion/interceptor@v0.0.10/pkg/nack/send_buffer.go:60 +0x238
  github.com/pion/interceptor/pkg/nack.(*ResponderInterceptor).BindLocalStream.func1()
      /home/ec2-user/go/pkg/mod/github.com/pion/interceptor@v0.0.10/pkg/nack/responder_interceptor.go:88 +0x1b0
  github.com/pion/interceptor.RTPWriterFunc.Write()
      /home/ec2-user/go/pkg/mod/github.com/pion/interceptor@v0.0.10/interceptor.go:82 +0x72
  github.com/pion/webrtc/v3.(*interceptorToTrackLocalWriter).WriteRTP()
      /home/ec2-user/go/pkg/mod/github.com/pion/webrtc/v3@v3.0.12/interceptor.go:69 +0xf5
  github.com/pion/webrtc/v3.(*TrackLocalStaticRTP).writeRTP()
      /home/ec2-user/go/pkg/mod/github.com/pion/webrtc/v3@v3.0.12/track_local_static.go:139 +0x262
  github.com/pion/webrtc/v3.(*TrackLocalStaticRTP).WriteRTP()
      /home/ec2-user/go/pkg/mod/github.com/pion/webrtc/v3@v3.0.12/track_local_static.go:126 +0x168
  main.(*publisher).addListeners.func3()
      /home/ec2-user/publisher.go:129 +0x3fe

Goroutine 61 (running) created at:
  github.com/pion/interceptor/pkg/nack.(*ResponderInterceptor).BindRTCPReader.func1()
      /home/ec2-user/go/pkg/mod/github.com/pion/interceptor@v0.0.10/pkg/nack/responder_interceptor.go:67 +0x195
  github.com/pion/interceptor.RTCPReaderFunc.Read()
      /home/ec2-user/go/pkg/mod/github.com/pion/interceptor@v0.0.10/interceptor.go:97 +0x68
  github.com/pion/webrtc/v3.(*RTPSender).Read()
      /home/ec2-user/go/pkg/mod/github.com/pion/webrtc/v3@v3.0.12/rtpsender.go:215 +0x21c
  github.com/pion/webrtc/v3.(*RTPSender).ReadRTCP()
      /home/ec2-user/go/pkg/mod/github.com/pion/webrtc/v3@v3.0.12/rtpsender.go:224 +0x85
  main.(*subscriber).listenForRtcpPackets()
      /home/ec2-user/subscriber.go:88 +0xdd

Goroutine 65 (running) created at:
  github.com/pion/webrtc/v3.(*PeerConnection).onTrack()
      /home/ec2-user/go/pkg/mod/github.com/pion/webrtc/v3@v3.0.12/peerconnection.go:451 +0x164
  github.com/pion/webrtc/v3.(*PeerConnection).startReceiver.func1()
      /home/ec2-user/go/pkg/mod/github.com/pion/webrtc/v3@v3.0.12/peerconnection.go:1196 +0x5e4
==================
==================
WARNING: DATA RACE
Read at 0x00c0000d5390 by goroutine 60:
  github.com/pion/interceptor/pkg/nack.(*ResponderInterceptor).resendPackets.func1()
      /home/ec2-user/go/pkg/mod/github.com/pion/interceptor@v0.0.10/pkg/nack/responder_interceptor.go:111 +0x126
  github.com/pion/rtcp.(*NackPair).Range()
      /home/ec2-user/go/pkg/mod/github.com/pion/[email protected]/transport_layer_nack.go:64 +0x54
  github.com/pion/interceptor/pkg/nack.(*ResponderInterceptor).resendPackets()
      /home/ec2-user/go/pkg/mod/github.com/pion/interceptor@v0.0.10/pkg/nack/responder_interceptor.go:109 +0x158

Previous write at 0x00c0000d5390 by goroutine 65:
  github.com/pion/interceptor/pkg/nack.(*ResponderInterceptor).BindLocalStream.func1()
      /home/ec2-user/go/pkg/mod/github.com/pion/interceptor@v0.0.10/pkg/nack/responder_interceptor.go:88 +0x96
  github.com/pion/interceptor.RTPWriterFunc.Write()
      /home/ec2-user/go/pkg/mod/github.com/pion/interceptor@v0.0.10/interceptor.go:82 +0x72
  github.com/pion/webrtc/v3.(*interceptorToTrackLocalWriter).WriteRTP()
      /home/ec2-user/go/pkg/mod/github.com/pion/webrtc/v3@v3.0.12/interceptor.go:69 +0xf5
  github.com/pion/webrtc/v3.(*TrackLocalStaticRTP).writeRTP()
      /home/ec2-user/go/pkg/mod/github.com/pion/webrtc/v3@v3.0.12/track_local_static.go:139 +0x262
  github.com/pion/webrtc/v3.(*TrackLocalStaticRTP).WriteRTP()
      /home/ec2-user/go/pkg/mod/github.com/pion/webrtc/v3@v3.0.12/track_local_static.go:126 +0x168
  main.(*publisher).addListeners.func3()
      /home/ec2-user/publisher.go:129 +0x3fe

Goroutine 60 (running) created at:
  github.com/pion/interceptor/pkg/nack.(*ResponderInterceptor).BindRTCPReader.func1()
      /home/ec2-user/go/pkg/mod/github.com/pion/interceptor@v0.0.10/pkg/nack/responder_interceptor.go:67 +0x195
  github.com/pion/interceptor.RTCPReaderFunc.Read()
      /home/ec2-user/go/pkg/mod/github.com/pion/interceptor@v0.0.10/interceptor.go:97 +0x68
  github.com/pion/webrtc/v3.(*RTPSender).Read()
      /home/ec2-user/go/pkg/mod/github.com/pion/webrtc/v3@v3.0.12/rtpsender.go:215 +0x21c
  github.com/pion/webrtc/v3.(*RTPSender).ReadRTCP()
      /home/ec2-user/go/pkg/mod/github.com/pion/webrtc/v3@v3.0.12/rtpsender.go:224 +0x85
  main.(*subscriber).listenForRtcpPackets()
      /home/ec2-user/subscriber.go:88 +0xdd

Goroutine 65 (running) created at:
  github.com/pion/webrtc/v3.(*PeerConnection).onTrack()
      /home/ec2-user/go/pkg/mod/github.com/pion/webrtc/v3@v3.0.12/peerconnection.go:451 +0x164
  github.com/pion/webrtc/v3.(*PeerConnection).startReceiver.func1()
      /home/ec2-user/go/pkg/mod/github.com/pion/webrtc/v3@v3.0.12/peerconnection.go:1196 +0x5e4
==================

Bandwidth Estimation

Downstream projects implement this today. It would be really great to bring them into this project! What we need:

Message Generation (available in Ion today)

  • Sender and Receiver Reports
  • Transport Wide Congestion Control

Processing of those reports (Not available in Go yet)


Change Attributes from map interface key to use a typed key

Summary

This issue is to discuss the possibility of changing the attribute map from an interface key to a typed key. Maybe I'm missing a use case for an interface key, so we should discuss whether it's necessary.

Motivation

The attribute list is usually not big enough to take advantage of a map; from a performance perspective a slice should perform far better. If we decide to stick with a map, we should consider using a typed key.

An interface key may perform roughly 10x worse, because each lookup needs to check:

  • whether both "sides" of the comparison are of the same type;
  • whether that type has '==' defined (slices do not, for example);
  • whether the values are equal, but only using a dynamically dispatched "comparator"
    source

cc: @Sean-Der @masterada @tarrencev @jech
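For illustration, a sketch of what a typed key could look like (names hypothetical):

// AttributeKey is a small integer type: comparing it avoids the
// dynamically dispatched comparison an interface{} key requires.
type AttributeKey int

const (
	TwccExtensionKey AttributeKey = iota
	RTPHeaderKey
)

type Attributes map[AttributeKey]interface{}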

Interceptors only provide an rtp.Packet-based API; some users want to deal with []byte

@OrlandoCo just discovered this issue in pion/webrtc: currently, if a user calls Read (not ReadRTP), it skips the interceptors entirely.

Currently interceptors deal entirely in rtp.Packet, so pion/webrtc just doesn't use them in this case. We have three options:

Provide the ability to write rtp.Packet, rtcp.Packet and []byte

Each Interceptor would provide the ability to do both.

Move to a []byte based API

Instead of dealing with rtp.Packet in the public API everything will be []byte based.

Keep the current API

No changes


These are my random thoughts as of today.

  • If we switch to []byte, interceptors will have to parse packets multiple times
    • Is this actually a problem? How expensive is parsing?
  • If we switch to []byte, we can drop the dependency on pion/rtcp and pion/rtp
    • Do people care about this?
  • Having multiple paths means more bugs/code to maintain

In pion/webrtc in the short term I propose switching Read to be fed via Unmarshal from ReadRTP. The extra operations are really unfortunate, but it unblocks the issue today.

NACK responder always responding with most recent packet

The NACK responder caches sent packets in order to respond to NACKs. However, it stores a pointer to the rtp.Packet struct, which may be reused by the sending process without the knowledge of the NACK responder.

Adam from the community has observed retransmitted packets whose payload did not match the requested sequence number (it belonged to a later sequence number that had overwritten the earlier one).

We need to be storing a copy of the packet to ensure that nack responder has full ownership over its data.
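For illustration, a sketch of storing an owned copy (the real sendBuffer internals may differ):

// add stores a copy of the packet so later reuse of the sender's
// rtp.Packet and payload buffer cannot corrupt the cache.
func (s *sendBuffer) add(packet *rtp.Packet) {
	copyPkt := *packet
	copyPkt.Payload = append([]byte(nil), packet.Payload...)
	// note: header extensions also hold byte slices; a Marshal/Unmarshal
	// round trip gives a full deep copy if that matters here
	s.packets[packet.SequenceNumber%s.size] = &copyPkt
}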

Logic when generating TWCC feedback packets seems strange

The logic when generating feedback packets for TWCC seems a bit odd, specifically when there are few received packets

if len(r.receivedPackets) < 2 {
	r.receivedPackets = []pktInfo{}
	return []rtcp.Packet{feedback.getRTCP()}
}

In webrtc-rs/webrtc we've seen this logic cause the generated packets to have no chunks which in turn causes libWebRTC to ignore them with a message like:

[64267:81923:1013/111056.055484:WARNING:transport_feedback.cc(420)] Buffer too small (16 bytes) to fit a FeedbackPacket. Minimum size = 18

It would seem that no packet should be generated when len(r.receivedPackets) < 2, and r.receivedPackets shouldn't be cleared.
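For illustration, the suggested behaviour as a sketch:

if len(r.receivedPackets) < 2 {
	// too few packets to build a valid feedback packet: keep the
	// received-packet state for the next interval and emit nothing
	return nil
}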

expose the underlying primitives

Summary

Expose the underlying primitives in pion/interceptor, such as SenderStream and friends.

Motivation

A lot of these primitives are quite useful; it would be nice to use them without the interceptor API, which I find a bit cumbersome to integrate with. Some of the primitives are nice to use as-is.

Describe alternatives you've considered

Right now I'm sorta copying the code into my own codebase. It's not great. :)

when will google congestion control be released?

Summary

I am hungry for Google Congestion Control


Priority queue error

Your environment.

  • Version: v0.1.28

What did you do?

jitterBuffer := jitterbuffer.New(jitterbuffer.WithMinimumPacketCount(16))
jbEvent := make(chan struct{})
eventFunc := func(event jitterbuffer.Event, jb *jitterbuffer.JitterBuffer) {
	jbEvent <- struct{}{}
}
jitterBuffer.Listen(jitterbuffer.BeginPlayback, eventFunc)

go func() {
	for {
		select {
		case <-jbEvent:
			packet, _ := jitterBuffer.Pop()
			// ...
		}
		// ...
	}
}()

go func() {
	for {
		packet, _, err := remoteTrack.ReadRTP()
		// ...
		jitterBuffer.Push(packet)
	}
}()

What did you expect?

I expected to receive the jitterbuffer.BeginPlayback event once the JitterBuffer packet count reached 16.

What happened?

func (q *PriorityQueue) PopAt(sqNum uint16) and func (q *PriorityQueue) PopAtTimestamp(timestamp uint32) don't reduce the PriorityQueue length.

It would also be great to have an example of using JitterBuffer in practice.

Get RTT from `SendSideBWE`

Summary

Get RoundTripTime from SendSideBWE

Motivation

SendSideBWE.GetStats used to return the round-trip time. Along with the other metrics in the same stats map, this was pretty helpful for calculating overall downlink connection quality. It was removed in a82b843.

[SendSideBWE] `delayController` raises panic when a packet arrives after the interceptor is closed

Your environment.

  • Version: github.com/pion/interceptor v0.1.11-0.20220401022307-09051cdde82b
  • Other Information -

Panic stack trace:

runtime.gopanic
        /usr/local/go/src/runtime/panic.go:965
runtime.chansend
        /usr/local/go/src/runtime/chan.go:281
runtime.chansend1
        /usr/local/go/src/runtime/chan.go:143
github.com/pion/interceptor/pkg/gcc.(*delayController).updateDelayEstimate
        /go/pkg/mod/github.com/pion/interceptor@v0.1.11-0.20220401022307-09051cdde82b/pkg/gcc/delay_based_bwe.go:96
github.com/pion/interceptor/pkg/gcc.(*SendSideBWE).WriteRTCP
        /go/pkg/mod/github.com/pion/interceptor@v0.1.11-0.20220401022307-09051cdde82b/pkg/gcc/send_side_bwe.go:141
github.com/100mslive/ion-sfu/pkg/sfu.(*TrackAllocator).OnTransportCCFeedback
        /go/pkg/mod/github.com/100mslive/[email protected]/pkg/sfu/track_allocator.go:459
github.com/100mslive/ion-sfu/pkg/sfu.(*DownTrack).handleRTCP
        /go/pkg/mod/github.com/100mslive/[email protected]/pkg/sfu/downtrack.go:985
github.com/100mslive/ion-sfu/pkg/sfu.(*DownTrack).Bind.func1
        /go/pkg/mod/github.com/100mslive/[email protected]/pkg/sfu/downtrack.go:273
github.com/100mslive/ion-sfu/pkg/buffer.(*RTCPReader).Write
        /go/pkg/mod/github.com/100mslive/[email protected]/pkg/buffer/rtcpreader.go:25
github.com/pion/srtp/v2.(*ReadStreamSRTCP).write
        /go/pkg/mod/github.com/pion/srtp/[email protected]/stream_srtcp.go:30
github.com/pion/srtp/v2.(*SessionSRTCP).decrypt
        /go/pkg/mod/github.com/pion/srtp/[email protected]/session_srtcp.go:173
github.com/pion/srtp/v2.(*session).start.func1
        /go/pkg/mod/github.com/pion/srtp/[email protected]/session.go:141

What did you do?

What did you expect?

No panics

What happened?

There's a race between RTCP packets being written to SendSideBWE and the interceptor being closed, so any packet written after the interceptor is closed raises the panic "send on closed channel". The panic happens very rarely (about 1 out of 200 peers), which makes it pretty hard to reproduce in a local environment. It happens at the same time the ICE state goes to closed and the respective peer connection is closed.

Would ignoring any new packets after the interceptor is closed be a good fix? @Sean-Der
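For illustration, one way that could look, sketched with a hypothetical done channel that is closed when the interceptor closes (this is not the actual gcc code):

// Select on the done channel so a late feedback packet is dropped instead
// of panicking; the data channels themselves are never closed in this pattern.
func (d *delayController) updateDelayEstimate(acks []cc.Acknowledgment) {
	for _, ack := range acks {
		select {
		case d.ackPipe <- ack:
		case <-d.done:
			return // interceptor closed; ignore late feedback
		}
	}
}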

What's the unit of ReceivedRTPStreamStats's Jitter?

Hey,

First off a big bravo for the stat's interceptor, it's really awesome! ❤️

However, I'm wondering what the unit of ReceivedRTPStreamStats's Jitter is 🤔 If it's not a time.Duration, does it make sense to convert it to one? If so, how can I do it?

Cheers
Quentin
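For reference, the interarrival jitter carried in RTCP receiver reports is defined by RFC 3550 §6.4.1 in RTP timestamp units, not seconds. Assuming the stats interceptor reports that value unconverted (worth confirming in the source), a conversion would divide by the codec clock rate:

// Sketch, assuming Jitter is in RTP timestamp units (RFC 3550 §6.4.1).
// s is a stats snapshot from the stats interceptor.
clockRate := 90000.0 // e.g. video; 48000 for Opus audio
jitterSeconds := s.InboundRTPStreamStats.Jitter / clockRate
d := time.Duration(jitterSeconds * float64(time.Second))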

Stats recorder SSRC handling for PLI/NACK/FIR/...

The stats recorder handles incoming receiver report blocks by filtering them by SSRC. The same should be done for PLI/NACK/FIR (and maybe SRs?). See also these code lines:

Correct RR handling:
https://github.com/pion/interceptor/blob/master/pkg/stats/stats_recorder.go#L190

Incorrect handling for PLI/NACK/FIR (includes RTCP packets from other SSRCs in the same stats):
https://github.com/pion/interceptor/blob/master/pkg/stats/stats_recorder.go#L241
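For illustration, a sketch of the missing filter; the recorder and stats field names are approximations, not the actual code:

// Inside the incoming-RTCP type switch: only count feedback that is
// addressed to this recorder's SSRC, as the RR path already does.
switch pkt := p.(type) {
case *rtcp.PictureLossIndication:
	if pkt.MediaSSRC == r.ssrc { // r.ssrc is hypothetical
		s.PLICount++
	}
	// ... same MediaSSRC check for *rtcp.TransportLayerNack and *rtcp.FullIntraRequest
}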

The Jitter buffer interceptor doesn't respect the actual packet length

Your environment.

What did you do?

I'm trying to use the jitter buffer interceptor.

What did you expect?

I expect to receive RTP packets as usual, but in correct order.

What happened?

I receive invalid RTP packets. The payload length is larger than expected, and the extra bytes are filled with zeroes.

The reason

The interceptor tries to unmarshal the whole buffer and doesn't respect the actual packet length.

if err := packet.Unmarshal(buf); err != nil {

It looks like we should replace packet.Unmarshal(buf) with packet.Unmarshal(buf[:n]).
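For illustration, the fix in context as a sketch, following the interceptor RTPReader callback shape:

n, attr, err := reader.Read(buf, a)
if err != nil {
	return n, attr, err
}
packet := &rtp.Packet{}
// only the first n bytes of buf belong to this packet; the rest is
// stale data from previous reads
if err := packet.Unmarshal(buf[:n]); err != nil {
	return 0, nil, err
}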

Dependency Dashboard

This issue provides visibility into Renovate updates and their statuses.

This repository currently has no open or pending branches.



Add faster probing to GCC startup

To match the behavior of Chromium's implementation as reported by @kcaffrey

I did spend some time digging through the chromium source code, and it looks like they are doing a multiplicative increase with an 8%/sec increase as well, so I think it must converge faster due to the probing it does at the start.

It looked like they might start their estimate at 30Kb/s instead of 10Kb/s, but that doesn't make a big difference. Even starting at 30Kb/s, with 1.08 as the multiplicative factor, the rate won't reach 1Mb/s for at least 45sec: https://www.wolframalpha.com/input/?i=30000+*+1.08%5Et+%3D+1000000

But Chrome reaches it almost instantly, and the only thing I saw in their source that could explain that was the probing in the first few seconds. There is also a lot of code in chromium, so I haven't read it all yet.

Bandwidth Estimator

This issue covers everything we want to do around congestion estimation and Pion: a Bandwidth Estimator in Pion, and generating feedback for remote peers.

@mengelbart is the owner of this work. If you have feedback/suggestions/ideas, he knows this area best!


  • bc30165 Implement Sender/Receiver Reports
  • c8a26a2 Implement Transport Wide Congestion Control Feedback
  • Design an interface/API that all Bandwidth Estimators will satisfy
  • Merge the NetworkConditioner so we can test our implementations
    • Start with basic tests (hard limits on total bandwidth, start queueing at a certain bitrate)
    • Implement the full test suite from RFC 8867
  • Implement Google Congestion Control
    • Have it be sender-side estimates only, using TWCC. No REMB or Receiver Reports.
  • Implement FlexFEC
    • This will be needed by servers attempting to deliver Simulcast/SVC
  • Make WebRTC Bandwidth Estimation accessible to others
    • Write blog posts about what you are implementing/challenges
    • Write a dedicated chapter in WebRTC for the Curious about Bandwidth Estimation
  • Make work accessible for other projects.

In the future we will also need


Support RTX

  • Enable RTX in SettingEngine
  • Support RTX in SDP layer
  • Support RTX in Track/PC layer

interDepartureThreshold should be 0 or a very small value, as in the Google WebRTC C++ code

In pkg/gcc/arrival_group_accumulator.go, interDepartureThreshold is 5 * time.Millisecond, whereas the Google C++ implementation uses 0 here:
https://webrtc.googlesource.com/src//+/d090952628606228afa47c842812927f15227290/modules/congestion_controller/goog_cc/inter_arrival_delta.cc?autodive=#125

If we believe performance is an issue, we should use a very small value, like 100 microseconds. Otherwise, this introduces a large variation in delay, which occasionally causes the delay estimator to signal overuse when there is no actual overuse.

Add PCAP logger

Summary

Implement a pcap logger that can be enabled in debug mode to generate a packet dump of a PeerConnection.

Motivation

It's difficult to debug issues in the DTLS or RTP stacks because an external packet logger like tcpdump does not have access to the DTLS encryption keys.

To make it easier to track down issues, it should be possible to save a packet dump for analysis.

Design

To aid debugging, the pcap should include all the types of packets Pion sends and receives over its ICE connections:

  • STUN requests
  • DTLS handshakes
  • Decrypted DTLS packets
  • RTP packets
  • RTCP packets

These should be mapped to IP addresses and port numbers where possible so that issues with STUN and firewall hole punching can be diagnosed.

It should be possible to:

  • Enable or disable the writer on a per-PeerConnection level
  • Attach a pcap writer before or after a PeerConnection has been created
  • Write packets from multiple PeerConnections to a single pcap

Here's a sketch of a design:

// Logger can be attached via SettingEngine
f, _ := os.Create("out.pcap")
s := &SettingEngine{
	PacketLogger: NewPCAPLogger(f),
}
api := NewAPI(WithSettingEngine(s))
pc, _ := api.NewPeerConnection(Configuration{})

// ...

// Later, the logger can be changed or removed
s.PacketLogger = NewPacketLogger(otherfile)
s.PacketLogger = nil

type PacketLogger interface {
	WriteUDP(b []byte, dst, src *net.UDPAddr)
	WriteTCP(b []byte, dst, src *net.TCPAddr)
}

Notes

Related to pion/website#29
Partially implemented PCAP format: pion/webrtc#521

Jitter Buffer error

Your environment.

  • Version: v0.1.29 and main branch

What did you do?

I push and pop packets to the Jitter Buffer.

What did you expect?

If I add N packets, then I can pop N packets.

What happened?

If I push a packet with a SequenceNumber equal to MaxUint16, then I won't be able to pop it. Packets with this SequenceNumber are valid and do occur.

I think the problem is in the following code and similar places.

@thatsnotright Have a look, please.

Next Steps

@masterada @at-wat Just starting a tracking issue; feel free to add anything you think is important to this list! I will also be helping after v3.0.0 is tagged :)

  • Create examples
  • Move NACK to a dedicated pkg. Should the top level just be structure, with each specific topic in its own folder?
  • Tests
  • Finish README
    • Elevator Pitch
    • Roadmap/Interceptor Types
    • Motivations for Interceptors/why we did it this way
    • Architecture/Design Document (in own .md file?)

GCC: aggressive estimated bitrate drop on seemingly healthy connection

Your environment.

  • Version: a82b843
  • Browser: Firefox 103.0 (can reproduce this issue with some consistency)

What did you do?

I am having Firefox subscribe to a simulcasted video stream originally coming from Chrome through a pion SFU.

What did you expect?

The estimated bitrate not to change so dramatically.

What happened?

The estimated bitrate sharply decreased.

Logs

latest: 10000000, loss bitrate: 10000000, delay bitrate: 10000000
delaystats: {13.127066ms 14.36628ms 36.5ms 33ms normal increase 10000000}
latest: 10000000, loss bitrate: 10000000, delay bitrate: 10000000
loss controller increasing; averageLoss: 0, decreaseLoss: 0, increaseLoss: 0
rateController.onReceivedRate: 285030
delaystats: {-13.015515ms -10.3869ms 36.5ms 27ms normal increase 10000000}
latest: 10000000, loss bitrate: 10000000, delay bitrate: 10000000
rateController.onReceivedRate: 221592
delaystats: {532.713µs -9.07026ms 36.5ms 35ms normal increase 10000000}
latest: 10000000, loss bitrate: 10000000, delay bitrate: 10000000
rateController.onReceivedRate: 182895
rateController.onReceivedRate: 194354
rateController.onReceivedRate: 209145
delaystats: {1.158966ms -6.6237ms 36.5ms 51ms normal increase 10000000}
latest: 10000000, loss bitrate: 10000000, delay bitrate: 10000000
loss controller increasing; averageLoss: 0, decreaseLoss: 0, increaseLoss: 0
rateController.onReceivedRate: 177910
rateController.onReceivedRate: 185813
rateController.onReceivedRate: 196080
rateController.onReceivedRate: 203805
delaystats: {43.89668ms 75.56388ms 36.5ms 124ms normal increase 10000000}
latest: 10000000, loss bitrate: 10000000, delay bitrate: 10000000
rateController.onReceivedRate: 129338
rateController.onReceivedRate: 137593
rateController.onReceivedRate: 145966
rateController.onReceivedRate: 154677
delaystats: {-45.174547ms -11.15454ms 36.5ms 110ms normal increase 10000000}
latest: 10000000, loss bitrate: 10000000, delay bitrate: 10000000
rateController.onReceivedRate: 140632
rateController.onReceivedRate: 151864
rateController.onReceivedRate: 166900
delaystats: {15.046823ms 17.29278ms 36.5ms 125ms normal increase 10000000}
latest: 10000000, loss bitrate: 10000000, delay bitrate: 10000000
rateController.onReceivedRate: 175799
rateController.onReceivedRate: 218550
delaystats: {-15.879735ms -12.90072ms 36.5ms 119ms normal increase 10000000}
latest: 10000000, loss bitrate: 10000000, delay bitrate: 10000000
loss controller increasing; averageLoss: 0, decreaseLoss: 0, increaseLoss: 0
rateController.onReceivedRate: 214988
delaystats: {199.162µs -12.12756ms 36.5ms 35ms normal increase 10000000}
latest: 10000000, loss bitrate: 10000000, delay bitrate: 10000000
rateController.onReceivedRate: 208580
delaystats: {6.213µs -11.73912ms 36.5ms 30ms normal increase 10000000}
latest: 10000000, loss bitrate: 10000000, delay bitrate: 10000000
rateController.onReceivedRate: 174357
rateController.onReceivedRate: 187548
rateController.onReceivedRate: 199310
rateController.onReceivedRate: 211071
delaystats: {660.609µs -10.14048ms 36.5ms 36ms normal increase 10000000}
latest: 10000000, loss bitrate: 10000000, delay bitrate: 10000000
loss controller increasing; averageLoss: 0, decreaseLoss: 0, increaseLoss: 0
rateController.onReceivedRate: 189839
rateController.onReceivedRate: 205789
rateController.onReceivedRate: 221738
rateController.onReceivedRate: 240489
rateController.onReceivedRate: 257610
rateController.onReceivedRate: 276259
rateController.onReceivedRate: 293941
delaystats: {27.328607ms 41.21298ms 36.5ms 142ms normal increase 10000000}
latest: 10000000, loss bitrate: 10000000, delay bitrate: 10000000
rateController.onReceivedRate: 266448
rateController.onReceivedRate: 284261
rateController.onReceivedRate: 302073
rateController.onReceivedRate: 319902
rateController.onReceivedRate: 336946
rateController.onReceivedRate: 353991
rateController.onReceivedRate: 371053
rateController.onReceivedRate: 388114
rateController.onReceivedRate: 405289
rateController.onReceivedRate: 422465
rateController.onReceivedRate: 439657
rateController.onReceivedRate: 456848
rateController.onReceivedRate: 468228
rateController.onReceivedRate: 479608
rateController.onReceivedRate: 496359
rateController.onReceivedRate: 513126
rateController.onReceivedRate: 529893
rateController.onReceivedRate: 546661
rateController.onReceivedRate: 563428
loss controller increasing; averageLoss: 0, decreaseLoss: 0, increaseLoss: 0
rateController.onReceivedRate: 510531
rateController.onReceivedRate: 526779
rateController.onReceivedRate: 537485
delaystats: {6.415829ms 51.91056ms 40.5ms 237ms overuse decrease 500000}
latest: 10000000, loss bitrate: 500000, delay bitrate: 500000

I added some log statements here and there to get a better picture. Of course I am also happy to provide further information if necessary.

I am not suggesting there's a bug or anything but I'd like to understand a bit better what's going on to potentially mitigate this behaviour if possible.

Stats recorder handling of SRs in simulcast

In simulcast, a sender can send multiple streams with different SSRCs. The stats recorder depends on outgoing SRs to calculate RemoteInboundRTPStreamStats. Since the recorder only looks at SRs sent for a specific SSRC, the stats will be empty for streams whose SSRCs are not included in SRs and thus don't appear to be sending SRs. This is slightly different from RRs, where we have multiple media SSRCs for different report blocks. I think the solution must be to somehow let the stats recorder of one SSRC handle the SRs sent by another SSRC when they are related by simulcast.

Shifted arrival times when some packets in TransportLayerCC have expired

Your environment.

  • Version: master
  • Browser: N/A

What did you do?

When TransportLayerCC packets are parsed in feedback_adapter.go, refTime and deltaIndex are only updated for packets found in history. If only some packets in the report are missing from history, this leads to subsequent packets getting the wrong arrival time. See the test below.

What did you expect?

The arrival time for packets still in history should be correct even if some earlier packets in the report expired.

func TestRecvDeltaOutOfSync(t *testing.T) {
	adapter := NewFeedbackAdapter()

	t0 := time.Time{}
	headers := []rtp.Header{}
	// passes when packet count is <= 250
	for i := uint16(0); i < 251; i++ {
		pkt := getPacketWithTransportCCExt(t, i)
		headers = append(headers, pkt.Header)
		assert.NoError(t, adapter.OnSent(t0, &pkt.Header, 1200, interceptor.Attributes{TwccExtensionAttributesKey: hdrExtID}))
	}

	results, err := adapter.OnTransportCCFeedback(t0, &rtcp.TransportLayerCC{
		Header:             rtcp.Header{},
		SenderSSRC:         0,
		MediaSSRC:          0,
		BaseSequenceNumber: 0,
		PacketStatusCount:  22,
		ReferenceTime:      0,
		FbPktCount:         0,
		PacketChunks: []rtcp.PacketStatusChunk{
			&rtcp.StatusVectorChunk{
				PacketStatusChunk: nil,
				Type:              rtcp.TypeTCCStatusVectorChunk,
				SymbolSize:        rtcp.TypeTCCSymbolSizeTwoBit,
				SymbolList: []uint16{
					rtcp.TypeTCCPacketReceivedSmallDelta,
					rtcp.TypeTCCPacketReceivedSmallDelta,
				},
			},
		},
		RecvDeltas: []*rtcp.RecvDelta{
			{
				Type:  rtcp.TypeTCCPacketReceivedSmallDelta,
				Delta: 3,
			},
			{
				Type:  rtcp.TypeTCCPacketReceivedSmallDelta,
				Delta: 6,
			},
		},
	})

	assert.NoError(t, err)

	assert.NotEmpty(t, results)
	assert.Len(t, results, 2)

	assert.Contains(t, results, Acknowledgment{
		SequenceNumber: 1,
		Size:           headers[1].MarshalSize() + 1200,
		Departure:      t0,
		Arrival:        t0.Add(9 * time.Microsecond),
	})
}
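For illustration, a sketch of the idea behind a fix; this is not the actual feedback_adapter.go code, and the history/sent names are hypothetical:

// Consume a recv delta for every *received* packet in the report, whether
// or not it is still in the send history; only emitting the acknowledgment
// is conditional on the history lookup.
if symbol != rtcp.TypeTCCPacketNotReceived {
	delta := recvDeltas[deltaIdx]
	refTime = refTime.Add(time.Duration(delta.Delta) * time.Microsecond)
	deltaIdx++
	if sent, ok := history[sequenceNr]; ok {
		acks = append(acks, Acknowledgment{
			SequenceNumber: sequenceNr,
			Size:           sent.Size,
			Departure:      sent.Departure,
			Arrival:        refTime,
		})
	}
}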

totalLost count error in receiverStream?

In interceptor/receiver_stream.go:

func (stream *receiverStream) setReceived(seq uint16) {
	pos := seq % stream.size
	stream.packets[pos/64] |= 1 << (pos % 64)
}

func (stream *receiverStream) delReceived(seq uint16) {
	pos := seq % stream.size
	stream.packets[pos/64] &^= 1 << (pos % 64)
}

func (stream *receiverStream) getReceived(seq uint16) bool {
	pos := seq % stream.size
	return (stream.packets[pos/64] & (1 << (pos % 64))) != 0
}

In these three functions, pos is always in the range [0, 128), so pos/64 is always 0 or 1. So the stream only records the most recent 128 packets, which may produce incorrect totalLost data.

If it is not a bug, just ignore this.

Rare panic when setting TWCC extension

There is a panic that originates from calling SetExtension on a header that is nil.

Unfortunately, it happened only once; I don't know what led to it, and I did not have a debugger attached to see more of it.

panic: runtime error: invalid memory address or nil pointer dereference
    [signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0xd46aa5]
    goroutine 209875 [running]:
    github.com/pion/rtp.(*Header).SetExtension(0xc001666f00, 0x1, {0xc002db0a4c, 0x2, 0x2})
    	/go/pkg/mod/github.com/pion/rtp@v1.7.13/packet.go:397 +0x4a5
    github.com/pion/interceptor/pkg/twcc.(*HeaderExtensionInterceptor).BindLocalStream.func1(0xc003054401?, {0xc001679200, 0x4a4, 0x5b4}, 0xc000bade88?)
    	/go/pkg/mod/github.com/pion/interceptor@v0.1.12/pkg/twcc/header_extension_interceptor.go:51 +0x9f
    github.com/pion/interceptor.RTPWriterFunc.Write(0xc000badf7c?, 0xc002d77300?, {0xc001679200?, 0xc000badef8?, 0x89d0cf?}, 0xc000901740?)
    	/go/pkg/mod/github.com/pion/interceptor@v0.1.12/interceptor.go:83 +0x31
    github.com/pion/interceptor/pkg/nack.(*ResponderInterceptor).resendPackets.func1(0xb8ef?)

Zombie ticker running

Your environment.

What did you do?

Go app consuming 3 cores with no traffic. Using pprof, runtime.siftdownTimer was consuming approximately 20% of the CPU. I found a possible leak in the LeakyBucketPacer.Run method, which starts a ticker but never stops it.

What did you expect?

The ticker to be stopped when Run returns.

What happened?

The ticker was never stopped.
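For illustration, the usual pattern for the fix (field names like p.interval and p.done are hypothetical):

func (p *LeakyBucketPacer) Run() {
	ticker := time.NewTicker(p.interval)
	defer ticker.Stop() // release the timer when Run returns

	for {
		select {
		case <-p.done:
			return
		case <-ticker.C:
			// drain and send queued packets
		}
	}
}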

The interceptor registry is too rigid

Instead of having an interceptor registry and binding it to the media engine, perhaps it would be enough to have a method that adds an interceptor to a given PC? This, in particular, would make it easy to share state between interceptors:

state := make([]stateful, 42)
i1 := &Interceptor1{state: state}
i2 := &Interceptor2{state: state}
AddInterceptor(pc, i1)
AddInterceptor(pc, i2)

Let me give another example. Suppose you want to write an interceptor to generate sender/receiver reports (which is something we definitely want to see in Pion, since this functionality is essential both for lipsync and getting decent datarates). The interceptor will need a data structure to keep information about time offsets and jitter statistics; this data structure is likely to be per-PC, and therefore a different instance will be used for each PC. Thus, you'll most likely want to stick a separate interceptor on each PC, rather than sticking it in the MediaEngine.

concurrent map read & write in `leaky_bucket_pacer`

Your environment.

  • Version: v0.1.8
  • Other Information -

fatal error: concurrent map read and map write

goroutine 828975 [running]:
runtime.throw(0xf8a21d, 0x21)
  /usr/local/go/src/runtime/panic.go:1117 +0x72 fp=0xc019715e88 sp=0xc019715e58 pc=0x4385b2
runtime.mapaccess2_fast32(0xe212a0, 0xc0008d63c0, 0xf0993bd3, 0xc0155eb890, 0x3)
  /usr/local/go/src/runtime/map_fast32.go:61 +0x1ac fp=0xc019715eb0 sp=0xc019715e88 pc=0x411c8c
github.com/pion/interceptor/pkg/gcc.(*LeakyBucketPacer).Run(0xc000b48a80)
  /go/pkg/mod/github.com/pion/interceptor@v0.1.8/pkg/gcc/leaky_bucket_pacer.go:122 +0x2b9 fp=0xc019715fd8 sp=0xc019715eb0 pc=0xaf7bd9
runtime.goexit()
  /usr/local/go/src/runtime/asm_amd64.s:1371 +0x1 fp=0xc019715fe0 sp=0xc019715fd8 pc=0x472121
created by github.com/pion/interceptor/pkg/gcc.NewLeakyBucketPacer
  /go/pkg/mod/github.com/pion/interceptor@v0.1.8/pkg/gcc/leaky_bucket_pacer.go:60 +0x1ce

What did you do?

Ran a load test with 300 Peers to stress test the interceptor

What did you expect?

No crash

What happened?

ion-sfu node crashed

I think the issue can be fixed by using the p.writeLock for https://github.com/pion/interceptor/blob/master/pkg/gcc/leaky_bucket_pacer.go#L122
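For illustration, a sketch of the suggested fix; p.writeLock is the existing mutex, while the map name here is hypothetical:

// Inside Run's packet loop: guard the map read with the same lock
// that protects writes to it.
p.writeLock.Lock()
writer, ok := p.ssrcToWriter[ssrc] // map name hypothetical
p.writeLock.Unlock()
if !ok {
	continue // unknown SSRC, skip this packet
}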

Incorrect PacketsReceived value from RemoteInboundRTPStreamStats

Your environment.

  • Version: v0.1.17
  • Browser: Chrome v114.0.5735.133

What did you do?

Captured OutboundRTPStreamStats and RemoteInboundRTPStreamStats using the stats interceptor, from a previously defined track.

go func() {
	var stats *stats.Stats

	ssrc := uint32(rtpSender.GetParameters().Encodings[0].SSRC)
	for !element.stop {
		stats = statsGetter.Get(ssrc)
		if stats != nil {
			log.Println(stats.OutboundRTPStreamStats, stats.RemoteInboundRTPStreamStats)
		}
		time.Sleep(time.Second * 5)
	}
}()

What did you expect?

Values of PacketsSent from OutboundRTPStreamStats should be aligned or slightly higher than PacketsReceived from RemoteInboundRTPStreamStats.

What happened?

Once in a while, the PacketsReceived value explodes and is no longer aligned:

2023/06/26 14:31:19 OutboundRTPStreamStats
        PacketsSent: 2991
        BytesSent: 3501804
        HeaderBytesSent: 35892
        NACKCount: 0
        FIRCount: 0
        PLICount: 0
 RemoteInboundRTPStreamStats:
        PacketsReceived: 2991
        PacketsLost: 0
        Jitter: 0.0192
        RoundTripTime: 0s
        TotalRoundTripTime: 0s
        FractionLost: 0
        RoundTripTimeMeasurements: 0

2023/06/26 14:31:24 OutboundRTPStreamStats
        PacketsSent: 3588
        BytesSent: 4196307
        HeaderBytesSent: 43056
        NACKCount: 0
        FIRCount: 0
        PLICount: 0
 RemoteInboundRTPStreamStats:
        PacketsReceived: 3583
        PacketsLost: 0
        Jitter: 0.017822222222222222
        RoundTripTime: 0s
        TotalRoundTripTime: 0s
        FractionLost: 0
        RoundTripTimeMeasurements: 0

2023/06/26 14:31:29 OutboundRTPStreamStats
        PacketsSent: 4185
        BytesSent: 4893407
        HeaderBytesSent: 50220
        NACKCount: 0
        FIRCount: 0
        PLICount: 0
 RemoteInboundRTPStreamStats:
        PacketsReceived: 4185
        PacketsLost: 0
        Jitter: 0.021566666666666668
        RoundTripTime: 0s
        TotalRoundTripTime: 0s
        FractionLost: 0
        RoundTripTimeMeasurements: 0

2023/06/26 14:31:34 OutboundRTPStreamStats
        PacketsSent: 4785
        BytesSent: 5595347
        HeaderBytesSent: 57420
        NACKCount: 0
        FIRCount: 0
        PLICount: 0
 RemoteInboundRTPStreamStats:
        PacketsReceived: 4294841009
        PacketsLost: 0
        Jitter: 0.022344444444444445
        RoundTripTime: 0s
        TotalRoundTripTime: 0s
        FractionLost: 0
        RoundTripTimeMeasurements: 0

2023/06/26 14:31:39 OutboundRTPStreamStats
        PacketsSent: 5384
        BytesSent: 6300016
        HeaderBytesSent: 64608
        NACKCount: 0
        FIRCount: 0
        PLICount: 0
 RemoteInboundRTPStreamStats:
        PacketsReceived: 4294841608
        PacketsLost: 0
        Jitter: 0.0145
        RoundTripTime: 0s
        TotalRoundTripTime: 0s
        FractionLost: 0
        RoundTripTimeMeasurements: 0

2023/06/26 14:31:44 OutboundRTPStreamStats
        PacketsSent: 6115
        BytesSent: 7160160
        HeaderBytesSent: 73380
        NACKCount: 0
        FIRCount: 0
        PLICount: 0
 RemoteInboundRTPStreamStats:
        PacketsReceived: 4294842339
        PacketsLost: 0
        Jitter: 0.011177777777777778
        RoundTripTime: 0s
        TotalRoundTripTime: 0s
        FractionLost: 0
        RoundTripTimeMeasurements: 0
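For what it's worth, 4294841009 is just below MaxUint32 (it equals 2^32 - 126287), which suggests an unsigned integer wraparound: a small negative intermediate value (for example, a packets-sent minus packets-lost style difference) ending up in an unsigned counter.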

Continued NACK Improvements

  • Currently NACKs are sent at a fixed interval.
  • There is no limit on how many times a NACK will be sent for a single packet (the current implementation sends NACKs until either the packet arrives, or the packet is older than max sequence number - buffer size).
  • NACKed packets/NACK requests are sent using the same PayloadType as normal RTP packets. It is possible to send them in a dedicated feedback stream. This is much more complicated, and the current implementation works, so it should definitely be an improvement in a separate issue.

Missing `remote-inbound-rtp` stat in Firefox when sending audio track

Your environment.

  • Version: v0.1.11
  • Browser: Firefox 108

What did you do?

Sending a single audio track from Firefox to pion webrtc, acting as SFU. It's also reproducible with the basic broadcast example.

What did you expect?

I'd expect a remote-inbound-rtp stat to be available on the client.

What happened?

Only the outbound-rtp stat is available.

Other facts

  • This works nicely on Chrome where the remote-inbound-rtp stat is correctly returned.
  • Publishing a video track works as expected and correctly generates the stat report, even on Firefox.
  • Doing a simple peer-to-peer test on Firefox and sharing a single audio track correctly generates the missing stat.

I agree that at first look it almost seems like a Firefox issue, but I just wanted to double-check with some experts here in case there's anything that could be done.

Packetdumper logging duplicates

What did you do?

Similar to #84, the packetdumper interceptors pass pointers to rtp.Header around, which can contain byte slices if the header has extensions. These slices can be overwritten by the next incoming packet before the first one is dumped; in that case, the same (later) header extension is dumped twice. I think we can either make a deep copy of the header, or format it before passing the pointer on to the writer loop. I am not sure which one is more efficient.
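For illustration, a sketch of the deep-copy option using a marshal round trip, which copies extension payloads without touching the header's unexported fields:

// cloneHeader returns an owned copy of h, including extension payload bytes.
func cloneHeader(h *rtp.Header) (*rtp.Header, error) {
	buf, err := h.Marshal()
	if err != nil {
		return nil, err
	}
	clone := &rtp.Header{}
	if _, err := clone.Unmarshal(buf); err != nil {
		return nil, err
	}
	return clone, nil
}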

Evaluate and maybe add optimizations to GCC implementation

There are a couple of optimizations that could improve the current GCC implementation which I'd like to test and add if they prove useful:

  • [ ] Implement linear regression as an alternative to the Kalman filter
  • [ ] Improve efficiency of pkg/gcc/rtt_estimator.go:
    • [ ] Use a better ring buffer implementation
    • [ ] Calculate the rate once and afterwards only update it with new measurements
  • [ ] pkg/gcc/rate_controller.go uses bitsPerFrame calculated by assuming 30 frames per second; can/should we make this configurable?
