I can't find the imp-commit email for some reason. Anyway, there was a recent patch (581) changing a model::remove_particle to deactivating the particle. This change is wrong as the particles are currently reference counted in C++ and removing the particle from the model should do the right thing assuming there are no python references hanging around. If it does not, it is a bug in kernel code which should be fixed.
In fact, the "Particle::set_is_active" method should probably go away.
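For what it's worth, the intended ownership semantics can be sketched in a few lines of Python. This is a toy model: `Particle.ref`/`unref` and this `Model` are illustrative stand-ins for the C++ reference counting and the SWIG proxy, not the actual IMP API.

```python
class Particle:
    """Toy stand-in for IMP::Particle; `freed` marks the C++ delete."""
    def __init__(self, index):
        self.index = index
        self.refcount = 0
        self.freed = False

    def ref(self):
        self.refcount += 1

    def unref(self):
        self.refcount -= 1
        if self.refcount == 0:
            self.freed = True  # stands in for `operator delete`


class Model:
    """Toy stand-in for IMP::Model, holding one reference per particle."""
    def __init__(self):
        self._particles = {}

    def add_particle(self, p):
        p.ref()                        # the model owns one reference
        self._particles[p.index] = p

    def remove_particle(self, index):
        # Drop only the model's reference; the particle is freed only
        # when nobody else (e.g. a Python proxy) still holds one.
        self._particles.pop(index).unref()


# A live "Python reference" should keep the particle valid after removal:
m = Model()
p = Particle(0)
m.add_particle(p)
p.ref()                # simulate the Python-side proxy taking a reference
m.remove_particle(0)
assert not p.freed     # still alive while Python holds it
p.unref()
assert p.freed         # last reference dropped -> actually deleted
```

If removal frees the object while a Python proxy still points at it (as in the r580 testcase), that is a bug in the counting, not in the caller.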
Daniel Russel wrote:
> I can't find the imp-commit email for some reason. Anyway, there was a
> recent patch (581) changing a
> model::remove_particle to deactivating the particle. This change is
> wrong as the particles are currently reference counted in C++ and
> removing the particle from the model should do the right thing assuming
> there are no python references hanging around. If it does not, it is a
> bug in kernel code which should be fixed.
The testcase tried to use a Python reference to a particle which had already been deleted. This clearly won't work, which is why I fixed it in the patch. Running that testcase through valgrind clearly showed access to freed memory, as I wrote in the svn log. By all means, submit a patch which fixes the bug if you like. But the reference counting clearly does not work with Python right now, and I don't have time to fix this the "right" way.
> In fact, the "Particle::set_is_active" method should probably go away.
Sure - if we're going to allow particle deletion, there's no reason to have active/inactive.
Ben
>
>> In fact, the "Particle::set_is_active" method should probably go
>> away.
>
> Sure - if we're going to allow particle deletion, there's no reason to
> have active/inactive.
>
We might need it for discrete optimization via MC or inference. I would prefer if you'll leave it till we sort out the discrete optimization framework.
Keren Lasker wrote:
>> Sure - if we're going to allow particle deletion, there's no reason to
>> have active/inactive.
>>
>
> We might need it for discrete optimization via MC or inference.
> I would prefer if you'll leave it till we sort out the discrete
> optimization framework.

Currently, once a particle is deactivated, various things will skip over
it/remove it so trying to reactivate it afterwards would be a bit of a
mess. The problem is that either we have to put checks everywhere in the
code or we have to remove inactive particles from various lists (like
the set of particles to be checked for nonbonded interactions).
As a result, we should probably remove the methods now. If we want them later, we can put them back and change the other code.
What would you want to do with inactivating particles as opposed to deleting them?
On May 28, 2008, at 12:53 PM, Daniel Russel wrote:
> Keren Lasker wrote:
>>> Sure - if we're going to allow particle deletion, there's no
>>> reason to
>>> have active/inactive.
>>>
>>
>> We might need it for discrete optimization via MC or inference.
>> I would prefer if you'll leave it till we sort out the discrete
>> optimization framework.
> Currently, once a particle is deactivated, various things will skip
> over
> it/remove it so trying to reactivate it afterwards would be a bit of a
> mess. The problem is that either we have to put checks everywhere in
> the
> code or we have to remove inactive particles from various lists (like
> the set of particles to be checked for nonbonded interactions).
>
> As a result, we should probably remove the methods now. If we want
> them
> later, we can put them back and change the other code.

ok - I guess that this is one of the reasons we use svn ....

> What would you want to do with inactivating particles as opposed to
> deleting them?

a particle might not take part in a specific stage of optimization -
but it is still relevant for the next steps.

> _______________________________________________
> IMP-dev mailing list
> IMP-dev@salilab.org
> https://salilab.org/mailman/listinfo/imp-dev
>> What would you want to do with inactivating particles as opposed to
>> deleting them?
>
> a particle might not take part in a specific stage of optimization -
> but it is still relevant for the next steps.

You can always just leave it out of the restraints (or remove it from
the restraints). What I do is
- first create all the restraints, but leave their list of particles
  blank
- then create some data structure which encodes all of my active
  particles and the relationships between them (in my case, it is just a
  bunch of lists of particles)
- set the particles used by each restraint as appropriate for what I am
  optimizing now
If I change what I want to use, I just modify my lists and rerun the last function.
Now that I think about it, I think this sort of architecture is a better alternative to trying to stick metadata into the restraints.
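A minimal sketch of that three-step workflow, with a made-up `Restraint` class and dict-based particles standing in for the real IMP objects:

```python
class Restraint:
    """Illustrative restraint with a swappable particle list (not the IMP API)."""
    def __init__(self, score_func):
        self.score_func = score_func
        self.particles = []          # step 1: created with an empty list

    def set_particles(self, particles):
        self.particles = list(particles)

    def evaluate(self):
        return sum(self.score_func(p) for p in self.particles)


# step 2: a separate structure encoding which particles are currently used
all_particles = [{"x": 1.0}, {"x": 2.0}, {"x": 3.0}]
active = all_particles[:2]           # e.g. only the first two this stage

# step 3: push the active set into the restraint before optimizing
r = Restraint(lambda p: p["x"] ** 2)
r.set_particles(active)
assert r.evaluate() == 5.0           # 1.0 + 4.0; the third is left out

# changing stages = just update the lists and call set_particles again
r.set_particles(all_particles)
assert r.evaluate() == 14.0
```

The point is that "active" lives in the user's own data structure, so no per-particle flag (and no scattered `get_is_active` checks) is needed in the kernel.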
Ben Webb wrote:
> Daniel Russel wrote:
>> I can't find the imp-commit email for some reason. Anyway, there was
>> a recent patch (581) changing a
>> model::remove_particle to deactivating the particle. This change is
>> wrong as the particles are currently reference counted in C++ and
>> removing the particle from the model should do the right thing
>> assuming there are no python references hanging around. If it does
>> not, it is a bug in kernel code which should be fixed.
>
> The testcase tried to use a Python reference to a particle which had
> already been deleted. This clearly won't work, which is why I fixed it
> in the patch. Running that testcase through valgrind clearly showed
> access to freed memory, as I wrote in the svn log. By all means,
> submit a patch which fixes the bug if you like. But the reference
> counting clearly does not work with Python right now, and I don't have
> time to fix this the "right" way.

No. The nbl should have been keeping the particle alive until the
evaluate call (and the evaluate call will have memory issues if the
memory is no longer valid since it needs to check if the particle is
active or not). So there are probably deeper bugs.
Daniel Russel wrote:
> Ben Webb wrote:
>> The testcase tried to use a Python reference to a particle which had
>> already been deleted. This clearly won't work, which is why I fixed it
>> in the patch. Running that testcase through valgrind clearly showed
>> access to freed memory, as I wrote in the svn log. By all means,
>> submit a patch which fixes the bug if you like. But the reference
>> counting clearly does not work with Python right now, and I don't have
>> time to fix this the "right" way.
> No. The nbl should have been keeping the particle alive until the
> evaluate call (and the evaluate call will have memory issues if the
> memory is no longer valid since it needs to check if the particle is
> active or not). So there are probably deeper bugs.
Indeed. You can reproduce the problem with the following:

% cd imp/kernel/test/states
% svn up -r580 test_nonbonded_list.py
% ../../../bin/imppy.sh valgrind --suppressions=../../../tools/valgrind-python.supp python test_nonbonded_list.py -v > /dev/null
With latest SVN, this will run through and report 0 errors. With r580, you'll get stuff like the following:
Test bbox NBL ...
==10105== Invalid read of size 1
==10105==    at 0x46C509B: _wrap_Particle_get_is_active (IMP_wrap.cc:3544)
==10105==    by 0x27D758C: PyCFunction_Call (in /usr/lib/libpython2.5.so.1.0)
==10105==    by 0x27A30B6: PyObject_Call (in /usr/lib/libpython2.5.so.1.0)
==10105==    by 0x2825E7E: PyEval_EvalFrameEx (in /usr/lib/libpython2.5.so.1.0)
==10105==    by 0x2828D74: PyEval_EvalCodeEx (in /usr/lib/libpython2.5.so.1.0)
==10105==    by 0x2827448: PyEval_EvalFrameEx (in /usr/lib/libpython2.5.so.1.0)
==10105==    by 0x282862E: PyEval_EvalFrameEx (in /usr/lib/libpython2.5.so.1.0)
==10105==    by 0x282862E: PyEval_EvalFrameEx (in /usr/lib/libpython2.5.so.1.0)
==10105==    by 0x282862E: PyEval_EvalFrameEx (in /usr/lib/libpython2.5.so.1.0)
==10105==    by 0x2828D74: PyEval_EvalCodeEx (in /usr/lib/libpython2.5.so.1.0)
==10105==    by 0x27C2EB1: (within /usr/lib/libpython2.5.so.1.0)
==10105==    by 0x27A30B6: PyObject_Call (in /usr/lib/libpython2.5.so.1.0)
==10105==  Address 0x4a162ac is 20 bytes inside a block of size 76 free'd
==10105==    at 0x40054AA: operator delete(void*) (vg_replace_malloc.c:342)
==10105==    by 0x48AC2B8: IMP::Particle::~Particle() (Particle.cpp:23)
==10105==    by 0x48A6573: IMP::internal::ObjectContainer<IMP::Particle, IMP::Index<IMP::ParticleTag> >::unref(IMP::Particle*) (ObjectContainer.h:55)
==10105==    by 0x48AAB6E: IMP::internal::ObjectContainer<IMP::Particle, IMP::Index<IMP::ParticleTag> >::remove(IMP::Index<IMP::ParticleTag>) (ObjectContainer.h:95)
==10105==    by 0x48A3666: IMP::Model::remove_particle(IMP::Index<IMP::ParticleTag>) (Model.cpp:34)
==10105==    by 0x46B8168: _wrap_Model_remove_particle (IMP_wrap.cc:14118)
==10105==    by 0x27D758C: PyCFunction_Call (in /usr/lib/libpython2.5.so.1.0)
==10105==    by 0x27A30B6: PyObject_Call (in /usr/lib/libpython2.5.so.1.0)
==10105==    by 0x2825E7E: PyEval_EvalFrameEx (in /usr/lib/libpython2.5.so.1.0)
==10105==    by 0x2828D74: PyEval_EvalCodeEx (in /usr/lib/libpython2.5.so.1.0)
==10105==    by 0x2827448: PyEval_EvalFrameEx (in /usr/lib/libpython2.5.so.1.0)
==10105==    by 0x282862E: PyEval_EvalFrameEx (in /usr/lib/libpython2.5.so.1.0)
==10105==
Ben
participants (3)

- Ben Webb
- Daniel Russel
- Keren Lasker