There has been some interest in supporting more efficient evaluation
of the score after only a small number of particles have changed.
There are several ways this could be supported.
Any such new scoring scheme has to meet a number of challenges:
- we don't want to have to change existing restraints to make them
correct
- the derivatives of particles which did not change themselves might
change
- we don't want to make regular evaluation less efficient
- the scheme must somehow handle combinatorial changes (such as the
  set of nearby particles changing)
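The second challenge is worth a concrete illustration. Below is a minimal sketch (the function name and signature are illustrative, not any existing API) of a one-dimensional harmonic distance restraint between two particles: moving only particle A changes the derivative on particle B, even though B itself did not move.

```python
def harmonic_score_and_derivs(xa, xb, x0=1.0, k=1.0):
    """Return (score, dscore/dxa, dscore/dxb) for the 1-D harmonic
    restraint 0.5 * k * (|xa - xb| - x0)**2."""
    d = xa - xb
    diff = abs(d) - x0
    score = 0.5 * k * diff * diff
    sign = 1.0 if d >= 0 else -1.0  # derivative of |d| w.r.t. xa
    da = k * diff * sign
    db = -da
    return score, da, db

# Move only particle A; the derivative acting on B still changes.
s0, da0, db0 = harmonic_score_and_derivs(2.0, 0.0)
s1, da1, db1 = harmonic_score_and_derivs(3.0, 0.0)
assert db0 != db1
```

This is why an incremental scheme cannot simply skip unchanged particles when accumulating derivatives; their derivative contributions from restraints touching changed particles must be recomputed.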
One solution would be:
- each particle stores copies of all its attributes when it is
  modified for the first time in each optimization step
- a restraint can ask a particle to swap in its old values, evaluate
  with those, then restore the new values and evaluate with those to
  get the difference. This swapping of values could be made very efficient
- the nonbonded list would need to hand back all pairs that were just
  removed from it, so that their previous contributions can be
  properly subtracted off
- all the dirty bits are cleared after evaluation finishes
- existing restraints all remain correct; they are just not
  necessarily as efficient as possible.
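The snapshot-and-swap scheme above can be sketched as follows. All class and function names here are hypothetical, chosen for illustration only; the real implementation would live inside the particle and restraint classes.

```python
class Particle:
    """Particle with attribute snapshotting (illustrative sketch)."""

    def __init__(self, **attrs):
        self.attrs = dict(attrs)
        self.old_attrs = None  # snapshot taken on first change each step

    def set(self, key, value):
        if self.old_attrs is None:
            # first modification this optimization step: copy everything
            self.old_attrs = dict(self.attrs)
        self.attrs[key] = value

    def is_dirty(self):
        return self.old_attrs is not None

    def swap(self):
        # O(1) exchange of current and saved attribute tables
        self.attrs, self.old_attrs = self.old_attrs, self.attrs

    def clear_dirty(self):
        self.old_attrs = None


def incremental_update(score, restraint_terms, dirty_particles):
    """Update a total score given only the changed particles.

    restraint_terms is a list of (particles, evaluate_fn) pairs.
    """
    for particles, evaluate in restraint_terms:
        if not any(p.is_dirty() for p in particles):
            continue
        # swap in the old values and subtract the old contribution
        for p in particles:
            if p.is_dirty():
                p.swap()
        score -= evaluate(*particles)
        # restore the new values and add the new contribution
        for p in particles:
            if p.is_dirty():
                p.swap()
        score += evaluate(*particles)
    # all dirty bits are cleared after evaluation finishes
    for p in dirty_particles:
        p.clear_dirty()
    return score
```

As a usage sketch: with two particles and a squared-distance term, moving one particle and calling `incremental_update` on the cached total yields the same score as a full re-evaluation.

```python
a, b = Particle(x=0.0), Particle(x=2.0)
term = ([a, b], lambda p, q: (p.attrs["x"] - q.attrs["x"]) ** 2)
total = term[1](a, b)          # full evaluation
a.set("x", 1.0)                # one particle moves
total = incremental_update(total, [term], [a])
assert abs(total - term[1](a, b)) < 1e-12
```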
Who would find fast incremental evaluation useful? What are the
expected scenarios? Important aspects of any scenario are how many
particles would change each step and what sorts of restraints would
be used.