> I think that's a little generous! It's just a standard cell-based
> algorithm. The only optimization I see in the code is on the hash
> function, and that's still a very simple function.

By optimized, I meant tweaked grid sizes, the amount particles can move, and things like that. He thinks he did some work messing with it; I haven't checked :-)
> The Modeller code probably gets most of its performance from having
> optimizers that are cooperative. In order to tell whether a nonbond
> update is required, a simple heuristic is used: has any atom moved more
> than some cutoff distance since the last update? This is easy to test
> for because all of the optimizers update this atom shift value whenever
> they move an atom. In IMP there is no such guarantee - the optimizers
> write back new values for the optimizable attributes whenever they feel
> like it (plus, of course, we have attributes other than xyz).

That helps, but I don't think it is the main source of the speedup. I have the MaxChangeScoreStates, which play the same role (less efficiently), but they don't look like a major time sink. I can easily put in an L2 version (the current one is L1), which would tighten the movement bounds a bit.
One problem I was running into when running some of my code was that the nonbonded lists would become quite huge and take up very large amounts of memory. I haven't yet found a good scheme for keeping the slack large enough that we don't have to rebuild too often, without making the nonbonded lists too big. The problem is that some things move quite far under my optimization scheme, but most things do not. I have some ideas in terms of only updating some of the pairs, but I haven't implemented them yet.
Anyway, I haven't heard back from Frido about whether he was building with NDEBUG defined or not. If not, that would be the primary problem.