2013-02-23

The history of every TCP server in Unix goes something like the history of inetd:

1. inetd would just accept() immediately and fork a copy of your service (say, ftpd) every time a connection came in.

2. That went bad when you got too many connections, so they changed it to enforce a maximum number of forks per second or per minute (aka a "QPS limit").

3. That went bad if you had long-lived connections, so people wrote things like xinetd (or djb's tcpserver) that limit the total number of live connections and refuse to accept() until one of them finishes.  (In non-fork-based threaded services, this limit is usually implemented with a thread pool; there's a sketch of this approach below.)
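
To make step 3 concrete, here's a minimal sketch, in Go, of a listener that caps the number of live connections: Accept() blocks until a slot frees up, and a closed connection returns its slot.  The limitListener and limitConn types and the maxConns value are invented for this illustration, not part of any standard API.

    package main

    import (
        "net"
        "net/http"
        "sync"
    )

    // limitListener wraps a net.Listener so that Accept blocks once maxConns
    // connections are live, and unblocks when one of them closes.
    type limitListener struct {
        net.Listener
        sem chan struct{} // counting semaphore, one slot per live connection
    }

    func (l *limitListener) Accept() (net.Conn, error) {
        l.sem <- struct{}{} // take a slot; blocks while the limit is reached
        c, err := l.Listener.Accept()
        if err != nil {
            <-l.sem // accept failed, return the slot
            return nil, err
        }
        return &limitConn{Conn: c, release: func() { <-l.sem }}, nil
    }

    // limitConn returns its slot exactly once, when the connection is closed.
    type limitConn struct {
        net.Conn
        once    sync.Once
        release func()
    }

    func (c *limitConn) Close() error {
        err := c.Conn.Close()
        c.once.Do(c.release)
        return err
    }

    func main() {
        const maxConns = 100 // illustrative limit, not a recommendation
        ln, err := net.Listen("tcp", ":8080")
        if err != nil {
            panic(err)
        }
        lim := &limitListener{Listener: ln, sem: make(chan struct{}, maxConns)}
        http.Serve(lim, http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            w.Write([]byte("hello\n"))
        }))
    }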

Seems Go's HTTP server is still on step 1.  I would make a snarky comment about what year that's from, but I actually don't know, because inetd already did step 2 when I started using Unix in 1993.

Step 3 (xinetd) is way more recent, I think maybe 1996 or so.  So Go's way of doing it isn't that obsolete really.  Only 17 years.
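
For reference, this is what step 1 looks like out of the box (the handler here is just a placeholder): at least in the Go releases around the time of writing, the server accepts connections in a loop and hands each one to its own goroutine, the cheap analogue of inetd's fork, with no built-in cap on how many are live at once.

    package main

    import (
        "fmt"
        "net/http"
    )

    func main() {
        http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            fmt.Fprintln(w, "hello") // placeholder handler
        })
        // Internally this is roughly: for { c := accept(); go serve(c) }
        http.ListenAndServe(":8080", nil)
    }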
