Well, this looks fun.
Apparently the idea is to slow down in proportion to how many packets are lost, as opposed to backing off sharply as soon as a single packet (or maybe two) is lost. This sort of thing would help a lot for GFiber customers, I suspect, whose connections (at least to nearby servers) look a lot more like connections within a datacenter than like typical paths across the rest of the Internet. On the other hand, I suspect this has effects on the Internet itself that are somewhere between "untested" and "really really bad."
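To make the difference concrete, here's a toy sketch (my own model, not any real kernel's code) contrasting the classic response, which halves the window on any loss event, with a loss-proportional one that scales the cut by the observed loss fraction:

```python
def classic_response(cwnd, packets_lost):
    """Standard multiplicative decrease: any loss halves the window."""
    return cwnd // 2 if packets_lost > 0 else cwnd

def proportional_response(cwnd, packets_lost, packets_sent):
    """Cut cwnd in proportion to the fraction of packets lost,
    capped at the classic 50% cut."""
    if packets_lost == 0:
        return cwnd
    loss_frac = packets_lost / packets_sent
    cut = min(0.5, loss_frac)
    return max(1, int(cwnd * (1 - cut)))

# One stray loss out of 1000 packets barely dents the window
# under the proportional scheme, but halves it under the classic one:
print(classic_response(1000, 1))              # 500
print(proportional_response(1000, 1, 1000))   # 999
```

On a clean short-RTT path where a single lost packet says almost nothing about congestion, the proportional scheme keeps nearly all your throughput; the open question is what it does when everyone on a congested shared link behaves this way.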
According to what I've been able to measure, GFiber customers' performance at the moment (when on a wired link) is limited mainly by the fact that almost any reasonable amount of data could have been transferred within a single RTT. So any kind of packet loss, conservative window sizing, etc. automatically makes a typical transfer several times slower than it needs to be.
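Some back-of-the-envelope arithmetic shows why (the link speed, RTT, and transfer size below are illustrative values I picked, not measurements): the bandwidth-delay product dwarfs a typical transfer, so the data "fits" in one RTT, yet slow start still burns several RTTs ramping up to it.

```python
link_bps = 1_000_000_000     # 1 Gbit/s link, assumed
rtt_s = 0.005                # 5 ms RTT to a nearby server, assumed
transfer_bytes = 100_000     # a ~100 kB page, assumed

# Bandwidth-delay product: how much data the pipe holds in one RTT.
bdp_bytes = link_bps / 8 * rtt_s
print(f"BDP: {bdp_bytes / 1e6:.2f} MB")   # far more than 100 kB

# But slow start doubles cwnd each RTT from a small initial window
# (roughly 10 segments of ~1460 bytes), so delivering the transfer
# takes multiple RTTs anyway:
mss, cwnd = 1460, 10
rtts, sent = 0, 0
while sent < transfer_bytes:
    sent += cwnd * mss
    cwnd *= 2
    rtts += 1
print(f"RTTs needed: {rtts}")
```

So a transfer that fits in the pipe many times over still pays a multi-RTT tax just to ramp up, and any loss-triggered window cut multiplies that tax again.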