Oh well, at least it's different
2015-03-29

Well, this looks fun.

http://simula.stanford.edu/~alizade/Site/DCTCP.html

Apparently the idea is to slow down in proportion to how much congestion you're actually seeing (the fraction of packets that get marked), as opposed to dropping your rate off a cliff the moment a single packet (or maybe two) is lost.  This sort of thing would help a lot for GFiber customers, I suspect, whose connections (when nearby at least) look a lot more like connections inside a datacenter than connections across the rest of the Internet.  On the other hand, I suspect its effects on the Internet itself are somewhere between "untested" and "really really bad."
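
(For the curious, here's my loose paraphrase of the proportional-reaction idea as a sketch, not anything from their code.  The gain value and the once-per-window bookkeeping are simplifications I made up.)

    # Sketch of a DCTCP-style sender reaction, paraphrased from the paper;
    # not a real TCP stack.  'g' is the smoothing gain (value made up here).
    g = 1.0 / 16

    def end_of_window(cwnd, alpha, marked, acked):
        """Called once per window: 'marked' of 'acked' packets carried congestion marks."""
        frac = marked / acked if acked else 0.0
        alpha = (1 - g) * alpha + g * frac     # running estimate of congestion level
        if marked:
            cwnd *= (1 - alpha / 2)            # shrink in proportion to congestion...
        else:
            cwnd += 1                          # ...otherwise grow like ordinary TCP
        return max(cwnd, 1.0), alpha

The point being that a connection seeing only a little congestion backs off only a little, instead of halving its rate every time anything goes wrong.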

According to what I've been able to measure, GFiber customers' performance at the moment (on a wired link) is limited mainly by the fact that the link is fast enough to move almost any reasonable amount of data within a single RTT.  So any kind of packet loss, conservative window sizing, etc. automatically makes a typical transfer several times slower than the link itself would allow.
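
(Back-of-envelope version, with numbers I'm assuming rather than quoting from measurements: a gigabit link and a couple of milliseconds to a nearby server.)

    # Rough arithmetic for why round trips, not bandwidth, dominate here.
    # Link speed and RTT below are assumptions, not measured values.
    link_bps = 1_000_000_000        # 1 Gbit/s
    rtt_s = 0.002                   # 2 ms to a nearby server

    bdp_bytes = link_bps / 8 * rtt_s
    print(f"bandwidth-delay product: {bdp_bytes / 1000:.0f} kB")   # ~250 kB

    transfer_bytes = 200_000        # a typical-ish web object
    print("fits in one round trip:", transfer_bytes <= bdp_bytes)

    # So each extra round trip (slow start ramping up, a lost packet,
    # a too-small window) adds another 2 ms, which can easily double
    # or triple the total transfer time.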
