There's no timing variability problem that can't be solved with bigger buffers. Except the problem of too many buffers.
Bah. And they say we're out of IPv4 addresses.
Hmm. I wonder if PF_RING could reduce CPU usage for our multicast stuff.
Visiting Montreal this week, and my Canadian accent ("aboot") has finally been explained to me as "Canadian raising" (contrast with "Canadian raisin"):
Apparently the difference is most easily demonstrated with the pronunciation of "loud" vs "lout". Unfortunately I don't have any Americans around right now to demonstrate to me how they do it differently. This calls for some experimentation when I get back to NYC.
Another one is "writer" vs. "rider." Apparently the 't' in the first word is pronounced like the 'd' in the second, but you can still tell them apart (in Canada) because the 'i' is pronounced differently. At least by Canadians. And possibly others because as Wikipedia claims, "it appears to be spreading."
None of this explains why Americans pronounce it "mawn treal," however.
Okay, this article is pretty stupid, don't bother reading it.
But I just have to comment on the title. Keeping turbo-charged Internet a secret? The Canadian customs agent asked me what I work on at Google, and I said, "Google Fiber... it's the Internet thi–" and he interrupted me and said, "In Kansas City, yeah! That's great! When is it coming to Canada?" "Secret" is not the word I would use.
I have a feeling that by the time this is over, I'm going to know a lot more about TCP than I ever wanted to know.
...nah, scratch that. I wanted to know.
Today I learned that (perhaps obviously in retrospect) the wifi signal strength meter on your laptop/phone/tablet indicates how strongly your device is receiving the signal from the access point. There is no guarantee that the signal in the reverse direction is being received so well, or at all.
If you boost the "transmit power" setting on the router (as you can with some firmware, including dd-wrt and friends), it will make your signal strength meter go up, all right, but you might actually get worse results. In particular, you could get that suspicious behaviour I'm sure we've all seen, where you have lots of bars but still can't get an IP address. Hmmm...
Also, from my desk, I can hear 15 guest access points, and 17 corporate ones. (Presumably some of these overlap with each other.) That's a lot. But another thing I learned today is that too many access points being visible from a particular spot can actually make things worse, because they interfere with each other. I'm sure whoever laid out the office wifi network did a fancypants job, but I wish I knew what tools they use and what their algorithm is. I guess probably I could track someone down and find out.
A tricky, but normal, part of the transition from pre-launch to post-launch is changing from an attitude of "every bug is super important because if it happened in the lab, it'll probably happen to thousands of customers" to an attitude of "there will definitely be too many bugs to fix them all, so we have to pick our battles carefully and not get overwhelmed."
I've been stuck in the former mode. I guess it's time to start shifting a bit.
On the other hand, that's not an excuse for not fixing bugs.
""" "Governments have policies that can make it easy or hard, so
I say, 'if you make it hard for me, enjoy your Comcast,'" Medin quipped.
Got a cheap 2.4 GHz wifi spectrum analyzer on Amazon (this one: http://www.amazon.com/Wi-spy-2-4I-Spectrum-Analyzer-USB-based/dp/B002FB611E - beware, it's officially obsolete and only has software for Windows).
It's kind of fun to play with. Most useful observation so far: it looks like my Macbook produces lovely-looking clean signals within the 20 MHz range of its channel... but my Nexus S either tries to use a 40 MHz 802.11n signal (wishful thinking, guy: you're really not that fast!) or has some seriously ugly signal generation that babbles all over the place. Perhaps that explains its terrible wifi performance and the fact that it kept crashing my friend's router last week.
Also, I took three photos with the camera and emailed them to myself. It took a super long time for them to be sent, but the spectrum was almost entirely idle during that time. Meanwhile, I navigated the phone over to youtube and played a video, and the spectrum was busy. Conclusion: you weren't even trying to send that email, scumbag.
This is a fun toy. I think I'll take it home next and see if it can diagnose my suspicious wifi dropouts at home (not just on my phone). Rumour has it you can also see interference from microwave ovens and home portable phones.
I feel that if your headline is "X is not that far fetched," then you probably know it's far-fetched.
Hypothesis: the main times theory agrees with real life are the times when theory predicts everything will come crashing down in a series of cascading failures.
Somehow theory always gets that part right.
The history of every TCP server on Unix goes something like the history of inetd:
1. inetd would just accept() immediately and fork a copy of your service (say, ftpd) every time a connection came in.
2. That went bad when you got too many connections, so they changed it to do a maximum number of forks per second or per minute (aka a "QPS limit").
3. That went bad if you had long-lived connections, so people wrote things like xinetd (or djb's tcpserver) that limit the total number of live connections and refuse to accept() until one of them finishes. (In non-fork-based threaded services, this limit is usually implemented with a thread pool.)
Seems Go's HTTP server is still on step 1. I would make a snarky comment about what year that's from, but I actually don't know, because inetd already did step 2 when I started using Unix in 1993.
Step 3 (xinetd) is way more recent, I think maybe 1996 or so. So Go's way of doing it isn't that obsolete really. Only 17 years.
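For the record, the step 3 trick (cap live connections, and stop accepting until a slot frees up) is only a few lines. Here's a toy sketch in Python, with a made-up echo "service" standing in for ftpd; real implementations like xinetd do the equivalent in C:

```python
import socket
import threading

MAX_CONNS = 100  # step 3: a cap on live connections, not on accepts per second

slots = threading.BoundedSemaphore(MAX_CONNS)

def handle(conn):
    try:
        data = conn.recv(1024)          # the toy "service": echo one message
        conn.sendall(data)
    finally:
        conn.close()
        slots.release()                 # free the slot when the connection ends

def serve(listener):
    while True:
        slots.acquire()                 # blocks instead of accepting conn #101
        try:
            conn, _ = listener.accept()
        except OSError:                 # listener was closed; shut down
            slots.release()
            return
        threading.Thread(target=handle, args=(conn,), daemon=True).start()

listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(5)
threading.Thread(target=serve, args=(listener,), daemon=True).start()

# usage: one round trip through the echo service
client = socket.create_connection(listener.getsockname())
client.sendall(b"hello")
print(client.recv(1024).decode())       # echoes back "hello"
client.close()
listener.close()
```

The key point is that the semaphore is acquired *before* accept(), so once MAX_CONNS connections are live, new clients queue in the kernel's listen backlog instead of each getting a fresh thread (or fork).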
The failure mode
which you have guarded against
is not the true failure mode.
– from my upcoming book of zen philosophy, "Reflections on post-mortem reflection."
And a bonus entry:
A wise man once said,
A wiser man once said,
"You can guarantee your transaction is executed at most once, or at least once. You can guarantee your transaction is executed at most once, or at least once. "
I showed up at the Yale University career center today for some intern interviews. I said to the receptionist, "I'm from my company, here for some interviews." She looked me up and down, frowned a bit, and said, "You're here to interview students, or to be interviewed?" ....in a voice that implied I look pretty much unqualified for either.
Yup, I've still got it.
The guy who invented the Raspberry Pi came to visit and I got to have lunch with him. Some surprising stuff I learned:
He's a technical director at a company where his project started out being actively discouraged and has now become his full-time job.
About a million units a year (their current volume) is approximately the lowest volume they would want to support. But it is in the range they would want to support.
The Raspberry Pi is not just a custom board: it's a custom chip. This guy is the chip designer. The way it happened was he had the idea to take their existing 3D graphics coprocessor chip and squeeze a tiny ARMv6 (ARM11) core into a space on the die that was otherwise unused. It sounds like a 20% project, basically. The result is a rather unusual SoC that has comparatively crappy main CPU performance (700 MHz single issue), but very very good graphics/video performance. (The full board costs $25-$35, where the $25 model has no ethernet port.)
He believes this tradeoff (great graphics, wimpy processor) is a good balance for a super-low-cost educational device, since students care about making cool graphics. I think he has a point.
There are in fact a few other manufacturers who could make devices in the same price range as Raspberry Pi, but you'd get slightly higher CPU performance with much lower graphics performance. Nobody else bundles a high-end graphics processor into a chip in that price range.
Someone on the Internet managed to reverse engineer their DSP and document how it works, writing a complete architecture manual for it.
Our TV box uses the same graphics chipset: "Oh, that's the exact same graphics processor then. We only have one graphics processor design that we use across our entire line. I designed it."
Um. Okay then. :)
My sister got me a Raspberry Pi for Christmas (yes, we are a geeky family) and I played with it a bit, and it definitely does play back full HD video while using super low CPU, as long as you use the coprocessor.