rfc8312bis's People

Contributors

bbriscoe, goelvidhi, gorryfair, injongrhee, junhochoi, larseggert, lisongxu, maolson-msft, martinthomson, nealcardwell, sangtaeha

rfc8312bis's Issues

Analysis in [FHP00] doesn't include delayed ACK factor

Yoshi said:

7: Section 4.3. I think the analysis in [FHP00] doesn't include the delayed ACK factor, so the AIMD TCP model here
can be a bit aggressive compared to a TCP that doesn't enable ABC and uses delayed ACKs.
This is fine, but I think it might be good to clarify it.

RFC 8511 handling of ECE signal (slow start vs congestion avoidance)

Markku Kojo said,

ABE (RFC 8511) is currently the only experimental RFC to modify
the TCP-sender response to ECE. ABE allows modifying multiplicative
decrease factor only for AIMD TCP and only when ECE arrives in
congestion avoidance, that is, not when the sender is in slow-start.

Applying a decrease factor of 0.7 (or higher) when a congestion
signal arrives and ends the initial slow start would be
inconsiderate, because it extends the convergence time from
the slow-start overshoot. ABE has found that using a larger decrease
factor yields a performance improvement when applied in congestion
avoidance, but not otherwise. Do we have data that would support
different findings with CUBIC?

Why a_aimd can be set to 1

Yoshi said:

9: Section 4.3 "Note that once W_est reaches W_max, that is, W_est >= W_max, ..."

I might be missing something, but I'm not sure why a_aimd can be set to 1 to be compatible with AIMD TCP.
Does this mean b_cubic is also updated? If not, why is this compatible?

Results are based on the algorithms

Yoshi said:

12: Section 5.2 and Section 5.3. Are these results based on the algorithms and the parameter values
described in the draft? If there are differences, I think they should be described.

Initialization

This is tiny and editorial:

Section 4.7 of RFC 8312 talks about how values are initialized after an RTO; it seems obvious that these values would also be used at the very beginning, but this is never explicitly said. It likewise seems obvious that, at that point, W_last_max should be set to W_max, but how to initialize W_last_max is also never explicitly said (unless I missed it).

Update K definition (Eq. 2) to account for Fast Convergence

As discussed in this tcpm thread, after fast convergence (section 4.6), the candidate target value of the congestion window may be less than the current congestion window. For example, say cwnd = 100, beta_cubic = 0.7, and a congestion event occurs:

W_max = cwnd = 100;
W_max = W_max*(1.0+beta_cubic)/2.0 = 85   // further reduce W_max for fast convergence

cwnd = cwnd * beta_cubic = 70	          // window reduction
W_cubic(0) = W_max * beta_cubic = 59.5

If we were to enter congestion avoidance at this point, with a small enough RTT, the candidate target congestion window as calculated by W_cubic(t+RTT) may be less than the current congestion window (~59.5 < 70).

The suggestion in the thread from Lisong Xu and Vidhi Goel is to change Equation 2 to:

K = cubic_root((W_max - cwnd)/C) (Eq. 2)

where cwnd is the congestion window size at the beginning of the current congestion avoidance stage. This ensures that the target is greater than the current congestion window, and in my opinion is a clearer representation of what K represents.
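To make the difference concrete, here is a small self-contained sketch (my own illustration, reusing the numbers above and the recommended C = 0.4) comparing W_cubic(0) under the current and proposed definitions of K:

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        const double C = 0.4, beta_cubic = 0.7;
        double cwnd = 100.0;

        double W_max = cwnd;                      /* save window before reduction */
        W_max = W_max * (1.0 + beta_cubic) / 2.0; /* fast convergence: W_max = 85 */
        cwnd = cwnd * beta_cubic;                 /* window reduction: cwnd = 70 */

        /* current Eq. 2: the cubic starts from W_max * beta_cubic = 59.5 < cwnd */
        double K_cur = cbrt(W_max * (1.0 - beta_cubic) / C);
        /* proposed Eq. 2: the cubic starts from the current cwnd = 70 */
        double K_new = cbrt((W_max - cwnd) / C);

        printf("W_cubic(0), current Eq. 2:  %.1f\n", C * pow(-K_cur, 3.0) + W_max);
        printf("W_cubic(0), proposed Eq. 2: %.1f\n", C * pow(-K_new, 3.0) + W_max);
        return 0;
    }

This prints 59.5 for the current definition and 70.0 for the proposed one, reproducing the gap described above.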

Add handling of spurious retransmissions

I wrote up a blog post last year describing my experience implementing CUBIC for QUIC. There were three points:

  1. Issues with the formula-based "Reno compatibility", which is documented in issue #20
  2. Sensitivity of Hystart to delay jitter, which is more an issue with Hystart than with CUBIC
  3. Sensitivity to spurious retransmissions

I think that third issue should be addressed in the revised RFC. The basic problem is that any detected packet loss causes the host to reduce the window and start a new epoch. In some environments we can see spurious loss detection, caused for example by delayed ACKs or out-of-order delivery. These spurious losses can be easily identified in QUIC, and with some extra work in TCP. It would be useful if the revised RFC had a section on handling spurious losses.

The handling that I implemented was simple: reset the epoch parameters to their values from the previous epoch, before the loss. Documenting at least that would be nice.
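For illustration, a minimal sketch of that rollback (the struct and names are mine, not from any draft or stack):

    /* Save the congestion-control state a loss event is about to clobber,
     * and restore it if the loss later proves spurious. */
    struct cc_epoch {
        double w_max, k, cwnd, ssthresh;
        double epoch_start;              /* t(0) of the cubic epoch */
    };

    static struct cc_epoch cc, cc_before_loss;

    void on_loss_detected(void) {
        cc_before_loss = cc;   /* remember the pre-loss epoch parameters */
        /* ... usual reaction: reduce cwnd/ssthresh, reset W_max, K, t(0) ... */
    }

    void on_loss_declared_spurious(void) {
        cc = cc_before_loss;   /* undo the reduction; continue the old epoch */
    }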

RACK (and QUIC)

Markku Kojo said,

The draft states that RACK (and QUIC loss detection) can be used
with CUBIC to detect losses. However, it seems to have gone
unnoticed that RACK may also detect the loss of a retransmission,
in which case the congestion control response is required to be
taken twice, i.e., ssthresh and cwnd must be lowered again (a MUST
in RFC 5681 Sec. 4.3). Once RACK was published, all new congestion
controls and updates to existing RFCs must include this essential
congestion control response if the congestion control mechanism
intends to use RACK for loss detection.

This draft neither includes any such requirement nor specifies how
this is done.

Units are missing or unclear

@martinthomson raised this on the QUIC slack:

What are the units of C? What are the units of t?
I'm assuming that the units of W_cubic(t) and W_max are bytes (or multiples of MSS, I guess). But what about W_est(t)?
W_est(t) = W_max*0.7 + 1.1 * t / RTT
(Numbers approximated.) That's a component in bytes, and a unit-less component.
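For what it's worth, here is one way to annotate the units (my reading, assuming windows are measured in segments of MSS as in RFC 8312's examples):

    W_est(t) = W_max * beta_cubic                   // segments
             + 3 * (1-beta_cubic)/(1+beta_cubic)    // dimensionless constant
               * (t / RTT)                          // dimensionless ratio

The additive term is a pure number, so it only reads as "segments" if the constant is implicitly segments gained per RTT of elapsed time; if windows were in bytes instead, the term would also need an MSS factor.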

Sec 5.1

Markku Kojo said:

Sec 5.1

In this subsection, one should show the impact of CUBIC when competing with AIMD TCP. The numbers in the tables are derived from
analytical models that give the average window size under fixed random loss probabilities and unlimited bandwidth. That is not the same as flows competing at the same congested bottleneck that builds a queue.
Loss probabilities for different flows are likely to differ, especially at lower levels of statistical multiplexing.

The first paragraph of sec 5.1 does not ring true. Simply looking at the original CUBIC paper [HRX08] reveals that CUBIC dominates AIMD TCP (SACK TCP) in the regions where SACK TCP alone is able to fully utilize the available bandwidth (Figure 10 c up until 200 Mbps, and to some extent in Fig 10 a with 40 ms delay). And in all cases where SACK TCP alone is not able to utilize all available bandwidth, CUBIC steals multiple times more bandwidth from SACK TCP than what SACK TCP is unable to utilize. Figures 5 and 6 tell the same story. Has something changed, and/or is there data that provides alternative evidence?

In addition, the recommended value for the constant C and the two alternative values presented in the draft are the same as in the original paper. It would be interesting to see whether there has been any experimentation with different values and what the outcome might be.

Update the definition of W_max to include fast convergence scenario

Copying some suggestions from Lisong here,

change "wmax: size of cwnd just before cwnd was reduced in the last congestion event".
to "wmax: size of cwnd just before cwnd was reduced in the last congestion event when fast convergence is disabled"
or "wmax: cwnd (without fast convergence) or reduced cwnd (with fast convergence) just before cwnd was reduced in the last congestion event" ?

"TCP-Friendly" is a bit misleading

Given that a lot of TCP deployments use Cubic, I think that the intent here is to be friendly to Reno instead. As the text establishes, it is AIMD(1, 0.5) that this wants to be sensitive to.

It might have made sense to talk about TCP when Cubic wasn't widely deployed, but now it is just confusing.

Citing Experimental RFCs as if being a part of CUBIC

Markku Kojo said,

The draft says that CUBIC MAY implement DSACK [RFC3708], limited slow
start [RFC3742], [RFC7661], and hybrid slow start [cites a paper].
Aren't the first three downrefs? I'm not sure it is appropriate for a
Standards Track document to cite experimental work or a paper like
this, even though it's a MAY.

Michael Scharf's review

Michael Scharf wrote:

In the current version, the abstract, introduction and some later non-normative sections are by and large copied from RFC 8312. While that is perfectly reasonable for a -bis document, I believe the document could be a bit more explicit regarding the new status as PS, even if this would imply some editorial changes as compared to RFC 8312.

As of today, 8312bis is probably one of the most important and most widely deployed TCP standards. But the text is not necessarily written that way, given its origins.

Here are some examples of what comes into my mind:

1/ Abstract:

"CUBIC is an extension to the traditional TCP standards. It differs from the traditional TCP standards only in the congestion control algorithm on the sender side."

IMHO one could also start here with something much more explicit along the lines of "CUBIC is a standard TCP congestion control algorithm [...]".

Personally, I don't like the term "traditional standards" in this context. In fact, after 8312bis is published as PS, CUBIC may actually become part of what one could consider the "traditional standards". Maybe it would be better to avoid that term altogether? At least, I believe it could be avoided by rewording the abstract.

2/ Introduction:

The key sentence "It is therefore to be regarded as the current standard for TCP congestion control" comes at the end after a lot of text on the historical background.

An alternative would be to start the introduction with what CUBIC is according to this document and why CUBIC is relevant. Obviously, the historical context is important and must be explained in the document. But I am less sure whether the history needs to be at the beginning of the introduction. At least, newcomers to congestion control may care more about the content of this Proposed Standard and less about the research that resulted in RFC 8312.

3/ Section 5.4:

"CUBIC has been extensively studied by using both NS-2 simulation and testbed experiments, covering a wide range of network environments. More information can be found in [HKLRX06]. Additionally, there is decade-long deployment experience with CUBIC on the Internet."

This is another example where the most important message as of 2021 comes somewhere at the end. Given the experience with CUBIC, one could just start with the last sentence "There is decade-long deployment experience..." and then state something along the lines of "The original CUBIC design has been studied extensively by using both NS-2 simulation and testbed experiments...". Just as a thought.

Probably the existing text in -01 also works in all these cases and this is just about editorial style. But I think we could at least discuss whether some alternative wordings would make sense given the new PS status, in particular for newcomers who may not have read RFC 8312 and don't know its history (and probably never have to once this PS is published).

Slow-Start Overshoot w/ loss-based congestion control

Markku Kojo said,

The larger decrease factor of 0.7 seems inadvisable also if
used in the initial slow start with loss-based congestion
control (w/ Not-ECT traffic); packets start getting dropped
when a TCP sender has increased cwnd in slow start such that
the available network bandwidth and buffering capacity at the
bottleneck are filled, but the TCP sender continues sending
more packets for one RTT, doubling cwnd and hence also the number
of packets in flight before the congestion signal reaches the sender.
Now, even if the sender uses the standard decrease factor of 0.5,
cwnd gets reduced only to a value that equals the cwnd just
before (or around) the congestion point. That is, the network is
still full when the sender enters fast recovery, but we do not
expect more drops during fast recovery in a deterministic model.
Only in congestion avoidance after the recovery does the sender
increase cwnd again and get a packet drop that takes it into
a normal sawtooth cycle in the ideal case. So, the
convergence time from slow start is expected to be fast, though
in reality loss recovery does not always work ideally with
so many drops in a window of data.

However, if the sender applies a decrease factor of 0.7, it
continues in fast recovery with a 40% higher cwnd than the
available network capacity. This is very likely to result in a
significant number of packet losses during fast recovery, and
very likely to result in loss of retransmissions. So it is no
wonder that so many people have been very concerned about the
slow-start overshoot and the problems it creates.
It is very obvious that applying a decrease factor of 0.7 in
the initial slow start is likely to extend the convergence
time from the slow-start overshoot significantly. Or do we
have data that shows that such concern is unnecessary?
Also, a number of new loss-recovery mechanisms have been
introduced, maybe mainly because of this?
I would hesitate to recommend a decrease factor of 0.7 when
a congestion event occurs during the initial slow start.
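The 40% figure follows from the slow-start doubling (my arithmetic, using the deterministic model described above): if the path's bandwidth plus bottleneck buffering holds B packets, the sender has roughly 2B in flight by the time the congestion signal arrives, so

    cwnd after reduction = 0.5 * 2B = 1.0 * B   // beta = 0.5: matches capacity
    cwnd after reduction = 0.7 * 2B = 1.4 * B   // beta = 0.7: 40% over capacity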

Discussion in Sec 5. brings up surprisingly little data

Markku Kojo said:

I regret to say that the discussion in Sec 5 brings up surprisingly little data to back up the claims that are made. Given the long deployment experience that is emphasised in the draft, there is, however, little evidence (measurement data) summarised and cited to back up the claims. "There is long deployment experience" does not provide any evidence as such. There should be many studies with measurement data accumulated over the years that would support the assertions in the doc. Or are there?

cwnd can now be less than 2

@martinthomson raised this on the QUIC slack:

      W_max = cwnd;                 // save window size before reduction
      ssthresh = cwnd * beta_cubic; // new slow-start threshold
      ssthresh = max(ssthresh, 2);  // threshold is at least 2 MSS
      cwnd = cwnd * beta_cubic;     // window reduction

cwnd can now be less than 2.
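One possible fix (my sketch, not from the draft) would be to apply the same floor to cwnd:

      W_max = cwnd;                 // save window size before reduction
      ssthresh = cwnd * beta_cubic; // new slow-start threshold
      ssthresh = max(ssthresh, 2);  // threshold is at least 2 MSS
      cwnd = cwnd * beta_cubic;     // window reduction
      cwnd = max(cwnd, 2);          // added: floor cwnd at 2 MSS as well

Whether any fixed lower bound on cwnd is appropriate at all is a separate question; see the "Lower bound for congestion window" issue below.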

Adaptive adjustment

Is this "adaptive adjustment" that is mentioned in the text something that is still planned? Or should we remove this?

Events detected by RACK

Yoshi said:

2: Section 3.1 "After a window reduction in response to a congestion event is
detected by duplicate ACKs or Explicit Congestion Notification-Echo
(ECN-Echo, ECE) ACKs [RFC3168], CUBIC remembers the congestion window..."

I think the events detected by RACK (or PTO for QUIC) can also be included here.

Clear definition of "the beginning of the current congestion avoidance stage"

While improving the FreeBSD CUBIC implementation, we found that there is ambiguity as to how to define "the beginning of the current congestion avoidance stage". However, under the classical design assumption of TCP with infinite data available to send, the minutiae don't matter much.

But in a corner case, let us assume the send buffer allows the transmission of exactly ssthresh bytes. After that, the application stalls for some time (in multiples of the RTT) before making more data available to send...

In the original code, t(0) was set once cwnd > ssthresh (before the application stall above).

However, that can lead to excessive jumps when CUBIC recalculates cwnd the next time data is available for sending, as an excessive amount of time may have passed.

What FreeBSD does now (https://reviews.freebsd.org/D25746) is to properly track when cwnd is first recalculated after leaving slow start (or after having been application-limited, which is similar).

We have not looked closely at how app-limited and request/response type flows are handled with CUBIC in Linux or other OSes.

A clear definition of when to set the base t(0) for calculating t in the CUBIC formula would be good to have.
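A sketch of that bookkeeping (types and names are mine, not FreeBSD's):

    struct cubic_state {
        double t0;        /* base time for t in W_cubic(t) */
        int    t0_valid;  /* cleared while app-limited (or still in slow start) */
    };

    void cubic_cwnd_update(struct cubic_state *cs, double now, int app_limited) {
        if (app_limited) {
            cs->t0_valid = 0;     /* stall: the old t(0) is now stale */
            return;
        }
        if (!cs->t0_valid) {
            cs->t0 = now;         /* first recalculation after slow start or
                                     after an app-limited period */
            cs->t0_valid = 1;
        }
        double t = now - cs->t0;  /* elapsed time fed into W_cubic(t) */
        /* ... compute W_cubic(t + RTT) from t and adjust cwnd ... */
    }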

Replace modelled TCP Reno window approach with AIMD emulation

Yuchung Cheng wrote:

I'd recommend replacing the modelled TCP Reno window approach in
section 4.2 with an AIMD emulation (Linux's approach).

In our experience, the TCP-friendly region is the predominant mode of
(Linux) CUBIC for any regular Internet connection. IOW, CUBIC is often
"Reno" unless the loss rate is abysmal. The modelled approach is based
on a simple bulk transfer, whereas modern network applications mostly
send structured traffic (burst, idle, repeat). Under such traffic
structures the model has two issues:

The model assumes cwnd overshoot causes losses that are repaired in
one round of fast recovery. In reality, the losses are often due to
bursts of short messages, requiring more rounds and even timeouts to
repair. So the overall loss rate "p" tends to be higher than in the
ideal model, causing the model to underestimate the window (and hence
run in a more conservative Reno mode). Instead, Linux's approach is to
simply emulate Reno AIMD based on the number of packets per ACK. This
also avoids the square-root operation.
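As I read the suggestion, the per-ACK emulation would look something like this (my sketch, not the actual Linux code):

    /* Emulated Reno window, advanced per ACK from the count of newly
     * ACKed segments; no loss-rate model and no square root needed. */
    void w_est_on_ack(double *w_est, double segments_acked) {
        *w_est += segments_acked / *w_est;  /* Reno: +1 MSS per window of ACKs */
    }

    void w_est_on_congestion(double *w_est) {
        *w_est *= 0.5;                      /* Reno: multiplicative decrease */
    }

CUBIC would then keep using the emulated window as the floor for cwnd in the AIMD-friendly region, as it does with the modelled W_est today.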

ssthresh and cwnd_start should not exceed Beta_cubic * congestion window at loss

Lisong mentioned this in an offline discussion:

Below I will use X to refer to (beta * cwnd right before the congestion event).

An implementation of CUBIC can choose different ways to adjust cwnd during fast recovery and after a timeout.

But after fast recovery/ECN, it should set cwnd (i.e., cwnd_start) and ssthresh to X, because all the parameters of CUBIC (such as C, alpha, and beta) are chosen based on this assumption. If an implementation chooses its own ssthresh and its own cwnd_start, then the performance of such an implementation will be very different from what we expect.

I understand and agree with your motivation to be more flexible (as CUBIC may be implemented for various purposes: TCP, UDP, media streaming as in RFC 7661, QUIC, ...). Therefore, I would suggest that an implementation SHOULD set ssthresh and cwnd_start to X, and it MAY set them to a value lower than X at the cost of lower performance. But it MUST NOT set them to a value higher than X.
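A sketch of the suggested rule (variable names are mine):

    void on_recovery_exit(double cwnd_at_loss, double beta_cubic,
                          double *ssthresh, double *cwnd_start) {
        double X = beta_cubic * cwnd_at_loss;  /* beta * cwnd right before loss */
        if (*ssthresh > X)   *ssthresh = X;    /* MUST NOT exceed X */
        if (*cwnd_start > X) *cwnd_start = X;  /* SHOULD be X; lower is allowed */
    }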

RFC 8311 updates sender's response to CE / ECE

Markku Kojo said,

RFC 8311 (Sec 4.1) allows modifying the TCP-sender response to
ECE for experimental purposes only. Has there been any discussion
with tsvwg about modifying the TCP response to ECE in CUBIC being
in conflict with RFC 8311, given that CUBIC is currently intended
to become a Standards Track RFC?

Highlight difference to paper

It would be nice to highlight what has changed from the original paper, as that paper is still a good starting point for understanding CUBIC.

Fairness to AIMD congestion control

Markku Kojo said,

The equation on page 12 to derive the increase factor α_cubic that
intends to achieve the same average window as AIMD TCP seems to
have its origins in a preliminary paper, which states that the
authors do not have an explanation for the discrepancy between
their AIMD model and experimental results, which clearly deviate.
It seems to have gone unnoticed that the equation assumes equal
drop probability for the different values of the increase factor
and multiplicative decrease factor, but the drop probability
changes when these factors change. The equations for the drop
probability / the number of packets in one congestion epoch
are available in the original paper, and one can easily verify
this. Therefore, the equations used in CUBIC are not correct
and seem to underestimate W_est for AIMD TCP, resulting in
moving away from the AIMD-friendly region too early. This gives
CUBIC an unjustified advantage over AIMD TCP, particularly in
environments with a low level of statistical multiplexing. With
a high level of multiplexing, the drop probability goes higher and
the differences in the drop probabilities tend to get small. On the
other hand, with such a high level of competition, the theoretical
equations may not be that valid anymore.

CUBIC for QUIC

Since most new implementations of CUBIC will be in the context of QUIC stacks (as opposed to TCP stacks), it would be useful to add a section or an appendix on how one would do that. For example, it might make sense to describe how CUBIC would be integrated into an implementation that follows the QUIC recovery draft.
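For instance, such an appendix could map CUBIC onto the congestion-control hooks that the QUIC recovery draft's pseudocode already defines; a rough sketch (the cubic_* functions and variable names are placeholders of mine, not from either document):

    double cubic_window(double now, double acked_bytes, double cwnd); /* Sec. 4 growth */
    double cubic_reduce(double cwnd);                                 /* Sec. 4.6 reduction */

    static double cwnd;
    static double recovery_start_time;

    /* called from the recovery draft's OnPacketsAcked hook */
    void on_packets_acked(double now, double acked_bytes) {
        cwnd = cubic_window(now, acked_bytes, cwnd);
    }

    /* called from the recovery draft's OnCongestionEvent hook */
    void on_congestion_event(double sent_time, double now) {
        if (sent_time <= recovery_start_time)
            return;                    /* react at most once per round trip */
        recovery_start_time = now;
        cwnd = cubic_reduce(cwnd);     /* save W_max, compute K, apply beta */
    }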

Contribution to buffer bloat and slower convergence due to larger decrease factor

Markku Kojo said,

This draft uses a larger cwnd decrease factor, resulting in a larger
average cwnd and buffer occupancy. This means that it is
likely to contribute significantly to buffer bloat, particularly
when also considering the concave increase function at the
beginning of congestion avoidance that keeps cwnd close
to the maximum most of the time, as carefully explained in the draft.
This means that CUBIC also keeps buffer-bloated router queues
very efficiently full at all times.

Currently the draft mentions the slower convergence speed
as the only side effect of the larger decrease factor and does
not discuss the contribution to buffer bloat. It would be
important to assess this, together with measurement data to
back up any observations.

Do we have data from different environments, including buffer-bloated
environments, that shows how much effect CUBIC has compared to
AIMD TCP?
And how does the larger decrease factor impact convergence speed,
particularly in buffer-bloated environments?
Many people have complained that window-based (TCP) congestion
control drives buffer bloat. Of course, the current standard
AIMD TCP also tends to fill buffer-bloated queues, but it is
unlikely to do so as effectively as CUBIC. This would be good to
understand better.

Keeping w_max and reducing only cwnd

Yoshi said:

11: Section 4.7: "we update w_max as follows, before the window reduction as described in section 4.6"

I am wondering if reducing w_max is the right approach here, because if we
reduce w_max, CUBIC can exit the convex region earlier than in the case where
fast convergence is not used. It seems to me that keeping w_max and reducing
only cwnd (using a smaller value than b_cubic) looks more conservative.

Question on AIMD-Friendly Region

Hi, while I am implementing this change in quiche, I want to make sure my understanding of the "AIMD-Friendly Region" section is correct.

It says to use

    W_est = W_est + alpha_aimd * (segments_acked / cwnd)

to calculate the W_est value, and initially

    alpha_aimd = 3 * (1 - beta_cubic) / (1 + beta_cubic)

Since beta_cubic is 0.7 (Section 4.6), this comes down to

    alpha_aimd = 3 * (1 - 0.7) / (1 + 0.7) = 0.529

And alpha_aimd will become 1 when W_est >= W_max.

This means that on each ACK, W_est can be calculated as follows:

    W_est = W_est + 0.529 * (segments_acked / cwnd)    (W_est < W_max)
    W_est = W_est + 1 * (segments_acked / cwnd)        (W_est >= W_max)

Is my understanding correct? My concern is that when W_est < W_max, it's slower than Reno.

Also, I think the definition of segments_acked is missing from the draft.

Two meanings for a_aimd

Yoshi said:

5: Section 4.3. It seems to me that there are two meanings for a_aimd in this section:
one is the additive factor for CUBIC, and the other is a generic parameter for the AIMD() function.
This looks a bit confusing to me.

Lower bound for congestion window (drops or classic ECN)

Markku Kojo said,

The draft modifies RFC 3168 when an arriving ECE would result
in cwnd < 2 MSS, by setting a lower bound of 2 MSS for cwnd (only
ssthresh is supposed to have a lower bound of 2 MSS).
This is in conflict with RFC 3168, RFC 5033, and RFC 2914, which
require "full backoff"; that is, a sender must continue decreasing
its sending rate as long as congestion persists. This is a
fundamental property of any congestion control mechanism. For ECN,
RFC 3168 (Sec 6.1.2) requires that cwnd is halved until the minimum
cwnd of one MSS is reached, and that the sender then continues
reducing its sending rate by using a timer with exponential backoff
if ECE-echo packets keep arriving.

This implementation bug has long been present in Linux and other
stacks as well, and should get corrected ASAP with appropriate
advice in all published RFCs, instead of replicating the bug in
the RFC series.
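For reference, the RFC 3168 behavior Markku describes amounts to something like this (simplified sketch; a real stack would drive the backoff from its retransmission timer):

    #define MSS 1460.0
    static double cwnd = 10 * MSS;
    static double backoff_timer = 1.0;   /* seconds, illustrative */

    void on_ece_received(void) {
        if (cwnd > MSS)
            cwnd = (cwnd / 2 >= MSS) ? cwnd / 2 : MSS;  /* halve, floor 1 MSS */
        else
            backoff_timer *= 2;  /* then keep reducing the rate exponentially
                                    while ECE-marked ACKs keep arriving */
    }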

Congestion window TCP friendly region after W_max

The idea of using alpha_aimd as defined below is to ensure that the congestion window growth for CUBIC is similar to that of standard TCP, as CUBIC's reduction (0.3) is smaller than standard TCP's (0.5).

Currently, for the entire TCP-friendly region:

    alpha_aimd = 3 * (1 - beta_cubic) / (1 + beta_cubic)

But we think that once cwnd in the TCP-friendly region reaches W_max, we should set alpha_aimd to 1 to get behavior similar to a standard TCP congestion control algorithm (e.g., NewReno):

    if (W_est < W_max)
        alpha_aimd = 3 * (1 - beta_cubic) / (1 + beta_cubic)
    else
        alpha_aimd = 1

I'd be happy to work on a PR for this if folks think that this would be a good addition.

In PDF, alpha/beta looks broken

In a converted PDF, the Greek letters alpha (α) and beta (β) appear to render incorrectly.

For example,

α_{aimd} = 3 * \frac{1 - β_{cubic}}{1 + β_{cubic}}

looks like the following:

(screenshot of the broken PDF rendering, dated 2021-02-23, omitted)

Change introductory text to reflect deployment experience

The abstract, introduction, and other text throughout the document need to be updated to reflect the significantly broader deployment experience CUBIC has seen since RFC 8312 was published. At that time, it had been the default for Linux for years; since then, it has also become the default for the Windows and Apple stacks.

Fast convergence

It seems that there is a mistake in Section 4.7.

Current draft:

    W_max = W_max * (1 + beta_cubic)/2,   if cwnd < W_max
            cwnd,                         otherwise

but it should be:

    W_max = cwnd * (1 + beta_cubic)/2,    if cwnd < W_max
            cwnd,                         otherwise

Also, I'd like to make the following change to clearly specify the behavior when fast convergence is disabled:

    W_max = cwnd * (1 + beta_cubic)/2,    if (cwnd < W_max) and (fast convergence is enabled)
            cwnd,                         otherwise

Thanks!
Lisong

Overly aggressive window increase

Since we are revising this RFC, I guess it is a good time to fix some CUBIC bugs reported in our NSDI 2019 paper.

This RFC sets W_cubic(t + RTT) as the target window size for the next RTT. However, this target size may be too high, even higher than 2 * cwnd (i.e., more aggressive than slow start), in the following special cases:

  • case 1: RTT is extremely long. An extremely long RTT is very likely an indication of network congestion, and in such an environment it is dangerous to set a very high target.

  • case 2: after a long idle period (i.e., a big increase of t). This is a bug reported and fixed by Google.

  • case 3: after a long application rate-limited period (i.e., a big increase of t). Similar to case 2.

To be safer, we may change Equation (1) as follows to fix all of the above bugs:

    W_cubic(t) = C*(t-K)^3 + origin_point   (Eq. 1)
    if (W_cubic(t) > 2 * cwnd)
        W_cubic(t) = 2 * cwnd

Note that Linux CUBIC already does something similar (line 328) by limiting the target to be no more than 1.5 * cwnd.

Thanks

Overridden by linear growth by AIMD

Yoshi said:

10: Section 4.5: "The convex profile ensures that the window increases very slowly at the beginning..."

I am wondering how accurate this part is. Because of Principle 2, even though cwnd is increased
through the convex profile, I think it will be overridden by the linear growth of AIMD.
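For context, the per-ACK logic in question (per Sections 4.2-4.4 of RFC 8312) is effectively the following, which is why the linear AIMD growth can mask the slow convex start whenever W_est is larger:

    if (W_cubic(t) < W_est)                        // AIMD-friendly region
        cwnd = W_est;                              // linear growth wins
    else
        cwnd += (W_cubic(t + RTT) - cwnd) / cwnd;  // concave/convex cubic growth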
