I recently encountered a similar problem. There is also another
issue: when running 8 jobs concurrently in the same directory, each
thread needs some input files (in my understanding) which are deleted
or modified by other threads, and that leads to deadlock.
I don't think that's the question David was asking. But what you say
is not correct: 8 jobs will run concurrently in the same directory (in
fact the system is designed to work that way, since they all need the
same initial conformation input file). The parallel loop modeling
should not delete or modify any input files, so I don't know why
you're seeing a deadlock - we have not seen such behavior here.
Perhaps it is a problem with your network storage? The parallel
framework also does not use threads - it uses independent processes -
and that, together with the master-slave design, makes a deadlock
(where A is waiting for B while B is also waiting for A) rather
unlikely.
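
For reference, a minimal sketch of how such a parallel job is set up
(using the Modeller 9-style names job and local_slave; the count of 8
slaves is just an example). Each local_slave() is launched as a
separate Modeller process on the local machine, with the calling
script acting as the master that hands out tasks:

from modeller.parallel import job, local_slave

# The job object is the master; each local_slave() it contains is an
# independent worker process (not a thread) started on this machine.
j = job()
for _ in range(8):
    j.append(local_slave())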
That said, the scripts you attach will get rather confused if you run
them in the same directory, and they are also rather less efficient
than the parallel loop modeling included with Modeller. You are asking
each slave to build the set of restraints and the initial model before
building the loop models. Obviously, if one slave overwrites this
initial model while another slave is trying to read it, things will
get confused (but a deadlock should still not occur - the slave will
simply raise an exception). Since the initial model and restraints are
always the same, this duplication is not necessary. The extra
computational effort (and the potential for confusion) can be avoided
simply by building the restraints and initial model on the master
node - then all the slaves do is read them in and generate one or more
loop models. This is in fact what happens if you call loopmodel's
"use_parallel_job" method before you call make().
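
To make that concrete, here is a rough sketch of that recommended
setup, again assuming the Modeller 9-style names; the alignment file,
template code, sequence name and model counts below are placeholders
for your own inputs:

from modeller import environ
from modeller.automodel import loopmodel
from modeller.parallel import job, local_slave

# Master process: start 8 independent slave processes (example count).
j = job()
for _ in range(8):
    j.append(local_slave())

env = environ()

# Placeholder file and sequence names - substitute your own.
m = loopmodel(env, alnfile='alignment.ali',
              knowns='template', sequence='target')
m.loop.starting_model = 1      # first loop model to build
m.loop.ending_model = 8        # last loop model to build

# Register the parallel job before make(), so the restraints and the
# initial model are built once on the master; the slaves then only
# read them in and each generate loop models.
m.use_parallel_job(j)
m.make()

The key point is that use_parallel_job() is called before make(), so
the restraint and initial-model construction happens once on the
master rather than once per slave.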