Hi Ben,
I want to carry out loop refinement from the "best" model generated by Modeller. However, the log file from the loop.py script only ranks the models by their molpdf score.
I edited the script to generate other assessment scores as follows:
m = MyLoop(env, inimodel='PROTEIN.B99990220.pdb',
           sequence='PROTEIN',
           assess_methods=(assess.DOPE, assess.GA341))
m.loop.starting_model = 1           # index of the first loop model
m.loop.ending_model = 20            # index of the last loop model
m.loop.md_level = refine.very_fast  # loop refinement method
m.make()
# Get a list of all successfully built models from a.outputs
ok_models = filter(lambda x: x['failure'] is None, a.outputs)
# Rank the models by DOPE score
key = 'DOPE score'
ok_models.sort(lambda a, b: cmp(a[key], b[key]))
# Get top model
m = ok_models[0]
print "Top model 1: %s (DOPE score %.3f)" % (m['name'], m[key])
m = ok_models[1]
print "Top model 2: %s (DOPE score %.3f)" % (m['name'], m[key])
m = ok_models[2]
print "Top model 3: %s (DOPE score %.3f)" % (m['name'], m[key])
m = ok_models[3]
print "Top model 4: %s (DOPE score %.3f)" % (m['name'], m[key])
m = ok_models[4]
print "Top model 5: %s (DOPE score %.3f)" % (m['name'], m[key])
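For reference, here is the filter-and-rank logic from the script above in isolation, with plain dicts standing in for the entries of the model's outputs list (the 'failure', 'name', and 'DOPE score' keys mirror the snippet; the file names and scores are made up). The original script is Python 2; this sketch uses Python 3 and replaces the cmp-based sort with a key function, which avoids the cmp/filter pitfalls entirely:

```python
# Fake stand-in for the .outputs list of a Modeller model object;
# each entry is a dict with the keys used in the script above.
outputs = [
    {'name': 'PROTEIN.BL00010001.pdb', 'failure': None,    'DOPE score': -11234.5},
    {'name': 'PROTEIN.BL00020001.pdb', 'failure': 'error', 'DOPE score': None},
    {'name': 'PROTEIN.BL00030001.pdb', 'failure': None,    'DOPE score': -11750.2},
]

# Keep only the models that were built successfully
ok_models = [x for x in outputs if x['failure'] is None]

# Rank by DOPE score (lower is better), so the best model comes first
key = 'DOPE score'
ok_models.sort(key=lambda x: x[key])

for i, m in enumerate(ok_models, 1):
    print("Top model %d: %s (DOPE score %.3f)" % (i, m['name'], m[key]))
```

Note that in Python 3 `filter()` returns a lazy iterator rather than a list, so a list comprehension (as above) is the safer choice when the result must be sorted in place.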
However, assess_methods seems to be ignored, and the loopy.log file does not list the models ranked by DOPE score. Running the script gives:
root@hercules:~/modeller/set7/loop# mod9.9 loop.py
Could not find platform independent libraries <prefix>
Could not find platform dependent libraries <exec_prefix>
Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>]
'import site' failed; use -v for traceback
Traceback (most recent call last):
  File "loop.py", line 30, in ?
    ok_models = filter(lambda x: x['failure'] is None, a.outputs)
TypeError: iteration over non-sequence
root@hercules:~/modeller/set7/loop#
Is there a problem with the way I set up the script?
Regards,
Flavio