
2015-08-13

So, I finally found a good reason to hate python refcounting (other than the usual spurious ones): if you pre-allocate a bunch of objects, then fork(), then try to run two instances, you don't save any memory, because the refcounts rapidly end up dirtying almost every single page. fork() shares pages copy-on-write, but CPython stores each object's refcount inline in the object itself, so merely *reading* an object writes to its page and forces the kernel to copy it.
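
To see the mechanism concretely, here's a small demo of mine (not from the original post). It relies on CPython implementation details that hold in standard builds: id() returns the object's address, and ob_refcnt is the first field of every object.

    import ctypes

    x = object()
    addr = id(x)  # CPython: id() is the object's memory address
    # ob_refcnt is the first field of a PyObject, so we can watch it in place.
    refcnt = lambda: ctypes.c_ssize_t.from_address(addr).value

    print(refcnt())   # 1: only 'x' refers to the object
    y = x             # taking a second reference...
    print(refcnt())   # 2: ...which wrote to the object's memory page

After fork(), that write is exactly what turns a shared page into a private copy.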

But I'm not so easily defeated!  I decided to modify my copy of the python interpreter as follows: there's a new global flag that says "all objects you touch from now on should be marked as permanent."  Permanent means that the next time you go to increment or decrement the refcount on that object, you skip it, thus avoiding dirtying the page. So the process is something like this (sketched in code below):
- set global perm flag
- allocate a bunch of heavy objects
- clear global perm flag
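
In code, assuming the patched interpreter exposes the flag through a hypothetical sys._set_perm_flag() call (the post doesn't say how it's surfaced), the protocol looks something like:

    import sys

    sys._set_perm_flag(True)     # hypothetical: start marking touched objects permanent
    objs = [str(i) * 50 for i in range(200000)]   # heavy, long-lived allocations
    sys._set_perm_flag(False)    # hypothetical: back to normal refcounting

    # Incref/decref on anything allocated above is now a no-op, so those
    # pages should stay clean -- and therefore shared -- across fork().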

I was able to confirm with a synthetic test that this at least basically worked: I allocated a bunch of objects in a big list, forked, and looked at all the objects. With my patch, physical memory contained only one copy of each object. Without my patch, memory usage doubled.
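
Here's roughly what such a test looks like on Linux (my sketch, not the original): measure the child's private pages before and after touching every object. Without the patch, the delta is roughly the size of the object heap; with the patch, it should stay near zero.

    import os

    def private_kb():
        # Sum the private resident pages of this process, per /proc/self/smaps.
        total = 0
        with open('/proc/self/smaps') as f:
            for line in f:
                if line.startswith(('Private_Clean:', 'Private_Dirty:')):
                    total += int(line.split()[1])
        return total

    # With the patch, this allocation would be wrapped in the perm-flag
    # calls from the sketch above.
    objs = [str(i) * 50 for i in range(200000)]

    pid = os.fork()
    if pid == 0:
        before = private_kb()
        for o in objs:       # merely iterating increfs/decrefs every object
            pass
        print('child private memory grew by %d kB' % (private_kb() - before))
        os._exit(0)
    os.waitpid(pid, 0)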

Unfortunately, in the actual case I care about (imported python modules, etc.), most of the pages still seem to get dirtied shortly after the fork.
