Keren was interested in Monte Carlo optimization, so I thought I would submit the one I wrote. It is structured as an optimizer coupled with a set of Movers, which propose moves. The optimizer holds a collection of Movers and a local optimizer (which can be used to mix local optimization with Monte Carlo). The current movers take a set of particles and a set of attributes and perturb all of those attributes on each of the particles, either uniformly within a ball or with a normal distribution.
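To make the structure concrete, here is a minimal sketch of what a Mover might look like. All of the names here (Particle, Mover, NormalMover, propose_move) are hypothetical stand-ins, not the actual interface; the real movers also support the uniform-in-a-ball variant, which is omitted for brevity.

```cpp
#include <map>
#include <random>
#include <string>
#include <vector>

// Hypothetical particle: a bag of named floating-point attributes.
struct Particle {
  std::map<std::string, double> attributes;
};

// Hypothetical Mover interface: each Mover proposes a perturbation
// of some attributes on some particles.
class Mover {
 public:
  virtual ~Mover() {}
  virtual void propose_move(std::vector<Particle*>& particles) = 0;
};

// Perturb each listed attribute of each particle by a normally
// distributed step of width sigma.
class NormalMover : public Mover {
  std::vector<std::string> keys_;
  double sigma_;
  std::mt19937 rng_;

 public:
  NormalMover(const std::vector<std::string>& keys, double sigma)
      : keys_(keys), sigma_(sigma), rng_(17) {}

  void propose_move(std::vector<Particle*>& particles) override {
    std::normal_distribution<double> step(0.0, sigma_);
    for (Particle* p : particles)
      for (const std::string& k : keys_)
        p->attributes[k] += step(rng_);
  }
};
```

The optimizer would then hold a container of Mover pointers and call propose_move on each before evaluating the scoring function.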
A class called MoveGuard keeps track of the information needed to revert the state if the Monte Carlo step is rejected. Note that, as it stands, writing a Mover in Python would take noticeably more work, since the macro does a good bit of the work.
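The revert-on-reject idea is naturally expressed as a scope guard. This is just an illustrative sketch, not the actual MoveGuard interface: it snapshots the watched attribute values and restores them on destruction unless the step was accepted.

```cpp
#include <utility>
#include <vector>

// Illustrative scope guard: records the old value of each watched
// attribute and rolls all of them back unless accept() was called.
class MoveGuard {
  std::vector<std::pair<double*, double>> saved_;
  bool accepted_;

 public:
  MoveGuard() : accepted_(false) {}

  // Remember the current value of an attribute before it is perturbed.
  void watch(double* attr) { saved_.push_back({attr, *attr}); }

  // Mark the Monte Carlo step as accepted; nothing is reverted.
  void accept() { accepted_ = true; }

  ~MoveGuard() {
    if (!accepted_)
      for (auto& s : saved_) *s.first = s.second;  // revert rejected move
  }
};
```

Usage: construct a MoveGuard before the Mover runs, watch() each attribute the Mover will touch, and call accept() only if the Metropolis test passes.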
As a note, my current implementation uses the boost random number library. I think we should probably make boost a required dependency, since:
- it is really useful (I could easily write better iterators for various things);
- all Linux systems have builds these days (mostly installed by default), and we distribute Mac and Windows binaries anyway (and the important boost bits are header-only affairs).
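For reference, the two kinds of perturbation the movers need are one-liners with this style of library. The sketch below uses std::random, whose interface was adopted almost directly from boost::random (boost::mt19937, boost::uniform_real, boost::normal_distribution are the corresponding names), so the boost version differs mainly in namespace; the function name and parameters here are illustrative only.

```cpp
#include <random>
#include <utility>

// Draw one uniform step (a 1-D stand-in for uniform-in-a-ball) and one
// normally distributed step, as a mover would when perturbing an attribute.
std::pair<double, double> sample_steps(unsigned seed) {
  std::mt19937 rng(seed);  // boost::mt19937 in the boost version
  std::uniform_real_distribution<double> uniform_step(-1.0, 1.0);
  std::normal_distribution<double> normal_step(0.0, 0.25);
  return {uniform_step(rng), normal_step(rng)};
}
```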