Google is unique among places I've worked for two reasons: the security advice is stricter, and I actually believe there's a need for it.
I thought this was interesting:
It doesn't seem to do anything that seccomp() (http://man7.org/linux/man-pages/man2/seccomp.2.html) can't do. But it has the advantage that it's comprehensible by normal mortals, and you can just throw a couple of lines into your program to make it "less exploitable."
What I'm not sure about is whether "less exploitable" is realistically better than "not exploitable on this scale." It reminds me of the note in djb's doc about qmail security history (http://cr.yp.to/qmail/qmailsec-20071101.pdf) where he said that separating programs into multiple user accounts had precisely zero impact on qmail security in the end.
In the real world, people could use this tame() system call to make things better in small tools, but the small tools won't be running in a sandbox as tight as, say, Chrome's sandbox, and programs still manage to escape from Chrome's sandbox frighteningly often. On the other hand, non-sandboxed programs do much, much, much worse.
Not sure how to feel about it.
I've now been poking at data in R for long enough that I went into python and tried to run median() on a list. What do you mean there's no global median() function? What year is this? Huh.
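(In fairness, Python did eventually grow one: since 3.4, the statistics module in the standard library has a median(), though it's still not a builtin the way R spoils you into expecting.)

```python
from statistics import mean, median

# no global median() in Python, but the stdlib statistics
# module (Python 3.4+) covers the basics
values = [3, 1, 4, 1, 5, 9, 2, 6]
print(median(values))  # 3.5 (average of the two middle values)
print(mean(values))    # 3.875
```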
A simple analogy
Wifi should be like air. You don't configure it, it's just always there, you don't want to think about it.
Internet should be like electricity. Everyone has it. You're gonna have to pay a monthly fee, and it's technically metered somehow, but mostly the power company doesn't screw you so you don't care. (via a friend)
Internet + Wifi is like... a tesla coil.
Shelly's Wifi principles:
- Like the air we breathe, it just works.
- Ubiquitous: it's everywhere.
- Reliable: it always works, and it's fast enough.
- Intuitive: no learning curve.
I've been sending the same code review comments to several different people lately, so I thought I might share them.
1) Do not redirect stderr to /dev/null. Trust me, you're gonna want the error message someday. If your program is producing something stupid on stderr every time it runs, find a safer way to make the stupid message not appear: for example, add a --quiet option or something to ask it to not print useless information.
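For concreteness, here's a minimal sketch of the --quiet idea in Python (the flag and helper names are just made up for illustration): chatty status messages go through a helper that honors the flag, while real errors still reach stderr unconditionally.

```python
import argparse
import sys

def make_parser():
    parser = argparse.ArgumentParser()
    parser.add_argument('--quiet', action='store_true',
                        help='suppress chatty status messages')
    return parser

def info(msg, quiet):
    # status chatter respects --quiet; actual errors should be
    # printed to stderr unconditionally, never redirected away
    if not quiet:
        print(msg, file=sys.stderr)

args = make_parser().parse_args(['--quiet'])  # as if run as: myprog --quiet
info('starting up...', args.quiet)  # prints nothing in quiet mode
```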
2) When catching exceptions, make the 'try' section of your try/except or try/catch block as short as possible. In python:

    # wrong:
    try:
        foo()
        ... do a bunch more stuff ...
    except FooError:
        recover_from_foo_error()

    # right:
    try:
        foo()
    except FooError:
        recover_from_foo_error()
    else:
        ... do a bunch more stuff ...
(Also acceptable: don't use an 'else' clause at all and just put the rest of the code below the try/except block. Which is preferable depends on the flow you want, especially whether you do 'pass' inside the except block or not.)
The idea with #2 is that if "do a bunch more stuff" accidentally throws a FooError that you weren't expecting, you don't want to accidentally catch it with your original FooError handler, because a) you're handling a FooError you weren't expecting so you might handle it wrong, and b) you're masking an error condition you didn't know existed. Better to let the exception leak out and get a useful stack trace.
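To make the failure mode concrete, here's a contrived demo (function names invented for illustration): parse() has a hidden bug that raises ValueError. With a long try block the bug is silently swallowed by the handler meant for int(); with a short try block it propagates and you get a stack trace.

```python
def parse(s):
    # stand-in for "a bunch more stuff" that has a hidden bug
    raise ValueError("unexpected bug in the extra stuff")

def long_try(s):
    # BAD: the try block covers more than the call we meant to guard
    try:
        n = int(s)       # the call we actually wanted to protect
        return parse(s)  # accidentally protected too
    except ValueError:
        return None      # parse()'s bug is silently masked here

def short_try(s):
    # GOOD: only the intended call is inside the try block
    try:
        n = int(s)
    except ValueError:
        return None
    else:
        return parse(s)  # its bug now escapes with a stack trace
```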
These things seem obvious to me after years of doing it, but looking back, I guess I had to be taught these things at some point too. Spread the word :)