On 6/15/10 3:29 AM, Thomas Evangelidis wrote:
> I've read previous posts on the same topic and concluded that it is
> better to generate multiple models with moderate refinement and loop
> optimization level, rather than a few with very thorough
> parameterization.
Indeed.
> I have settled on the optimal alignment after a lot of experimentation
> and would like to set up a very effective optimization process. However,
> I'm not sure about the output files. My code looks like this:
>
> a = MyLoopModel(env, alnfile=alignment,
>                 knowns=known_templates,
>                 assess_methods=(assess.DOPEHR,assess.normalized_dope),
>                 sequence='target')
> a.starting_model = 1
> a.ending_model = 2
This seems to contradict your statement above. Two models is probably not going to give you sufficient sampling - as John W suggested, you should be building many more models - perhaps 100.
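For instance, a minimal sketch of the same script with heavier sampling (the counts below are illustrative, not prescriptive; adjust to your resources):

a = MyLoopModel(env, alnfile=alignment,
                knowns=known_templates,
                assess_methods=(assess.DOPEHR, assess.normalized_dope),
                sequence='target')
a.starting_model = 1
a.ending_model = 100      # many more comparative models for adequate sampling
a.loop.starting_model = 1
a.loop.ending_model = 5   # 5 loop refinements per model -> 500 loop models
a.make()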
> a.loop.starting_model = 1  # First loop model
> a.loop.ending_model = 5    # Last loop model
This should generate 12 models in total - 2 comparative models (*.B999*.pdb files), and then 5 further loop models for each comparative model (*.BL*.pdb files).
> Which generates the following pdb files:
>
> target.B99990001.pdb  target.B99990002.pdb  target.BL00040002.pdb
> target.IL00000001.pdb  target.IL00000002.pdb
The .IL files are initial (unoptimized) loops, so they are of little utility. But there should be many more loop (.BL) models generated. The log file will tell you why a particular model optimization failed.
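If you want to spot failures programmatically rather than scanning the log, a minimal sketch (assuming the usual 'failure' bookkeeping in a.outputs and a.loop.outputs after a.make() has run):

# Report any comparative or loop model whose optimization failed
for out in a.outputs + a.loop.outputs:
    if out['failure'] is not None:
        print("%s failed: %s" % (out['name'], out['failure']))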
> I thought the above should perform model refinement twice and write 5
> different conformations (loop optimization) for each.
Indeed.
> 2) I'd like to ask your opinion about the most effective way to find a
> near-native protein conformation at low sequence identity. How should
> the parameters shown above be set? I don't care if it's running for a
> day or so as long as I get good results.
Build more models - that's the most effective way.
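Once they are built, you can rank the successful loop models by the assessment scores you requested and keep the best few; a minimal sketch, assuming the 'DOPE-HR score' key that assess.DOPEHR adds to each entry in a.loop.outputs:

# Keep only models that optimized successfully, then sort by DOPE-HR
ok = [x for x in a.loop.outputs if x['failure'] is None]
ok.sort(key=lambda x: x['DOPE-HR score'])  # lower (more negative) is better
print("Best loop model: %s (%.2f)" % (ok[0]['name'], ok[0]['DOPE-HR score']))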
Ben Webb, Modeller Caretaker