At work we have a lot of servers, and sometimes they have problems. Even though I mostly handle development servers rather than production ones, it's still a pain to keep track of them all, and it's always embarrassing to find out from a co-worker that one of them has stopped working.
So I use a pile of cron jobs to handle daily tasks and make sure things are flowing smoothly, but that setup is far from ideal. The problem is that you don't want cron emailing you when tasks succeed (some of them run every ten minutes!), but you do want email when they fail. On the other hand, you don't need an email every ten minutes while something stays broken. And most of all, you really want to be notified when the server the task was supposed to run on has crashed and the job doesn't run at all.
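The first two requirements are what people usually hack around by hand: run the job, swallow its output on success, and let cron mail the output only on failure. A minimal sketch of that kind of wrapper (this is a hypothetical helper, not part of cron2rss):

```shell
#!/bin/sh
# quiet_run: run a command, capture stdout+stderr, and print the captured
# output only if the command fails. Under cron, printing nothing means no
# email, so successful runs stay silent and failures get mailed.
quiet_run() {
    tmp=$(mktemp)
    "$@" >"$tmp" 2>&1
    status=$?
    if [ "$status" -ne 0 ]; then
        echo "FAILED (exit $status): $*"
        cat "$tmp"
    fi
    rm -f "$tmp"
    return "$status"
}
```

Note that this still mails you on every failed run and says nothing at all if the machine dies, which is exactly the gap described below.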
Sadly, cron itself doesn't do a very good job of this, particularly that last part, where it's completely useless.
So I spent a few hours today whipping up cron2rss. It reads, saves, and eventually expires the stdout/stderr output from multiple cron jobs on multiple computers, then turns the result into two RSS feeds: one with everything, and one with only the failures. And it auto-inserts entries into the feed whenever it's been too long since one of the tasks has produced a log message. The RSS service can also be run on more than one computer at a time, so that if one of your RSS feeds dies, the others can still tell you about a failure.
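The "too long since a task produced output" check is the interesting bit. One way to sketch the idea (the directory layout and variable names here are my own assumptions, not necessarily how cron2rss actually stores things): each job drops its captured output into a per-job spool directory, and a checker flags any job whose newest file is older than some window.

```shell
#!/bin/sh
# Hypothetical staleness check: one directory per job under $SPOOL; if a
# job's directory contains no file modified in the last MAX_AGE_MIN
# minutes, emit a "no news is bad news" line for the failures feed.
SPOOL=${SPOOL:-$HOME/cron2rss-data}
MAX_AGE_MIN=${MAX_AGE_MIN:-30}

check_stale() {
    for dir in "$SPOOL"/*/; do
        [ -d "$dir" ] || continue
        # Any file in this job's directory modified within the window?
        recent=$(find "$dir" -type f -mmin -"$MAX_AGE_MIN" | head -n 1)
        if [ -z "$recent" ]; then
            echo "STALE: $(basename "$dir") has produced no output in ${MAX_AGE_MIN}m"
        fi
    done
}
```

Because the check runs on the RSS-serving machine rather than on the machine running the job, a dead server shows up as a stale job instead of as silence.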
I leave you with this food for thought:
0,10,20,30,40,50 * * * * ~/cron2rss/add test-website wget -O/dev/null http://versabanq.com
How do you set up something like that if you don't have my tool? What would you do if your test was more complicated? What if you had 50 servers instead of 1?