Dear all, I'm trying to run a parallel job with Modeller. The script is:
from modeller import *
from modeller.automodel import *
from modeller import soap_protein_od

j = job()
for i in range(16):
    j.append(local_slave())

env = environ()
log.verbose()
env.io.atom_files_directory = ['.', '../atom_files']

a = dopehr_loopmodel(env,
                     alnfile='OurAlignment.ali',
                     knowns='2p1mB',
                     sequence='Rece',
                     assess_methods=(assess.DOPEHR,
                                     soap_protein_od.Scorer(),
                                     assess.GA341),
                     loop_assess_methods=assess.DOPEHR)
a.starting_model = 1
a.ending_model = 16
a.loop.starting_model = 1
a.loop.ending_model = 10
a.use_parallel_job(j)
a.make()
However, it doesn't work properly. It creates 16 slave output files, each containing only the line 'import site' failed, and then it hangs there, doing nothing at all, until I kill it with Ctrl-C. What am I doing wrong? For the sake of completeness: I'm running Modeller 9.17 on Gentoo Linux, and single-CPU Modeller runs have worked fine so far.
Thanks!