
Re: [modeller_usage] modeller_usage Digest, Vol 9, Issue 88



I can't address your specific results, but I can tell you about my experience:

To find a native-like structure, I created 100 models for 5 different template combinations, with and without loop modelling and with and without MD refinement, and I found that no loop modelling and no MD refinement gave me the best models.

I took my 5-10 best models, as ranked by DOPE and the objective function (molpdf), and submitted them to ProSA. Then I further analyzed the 5 or so best with PROCHECK. IMO, brute force and sheer numbers are the best way to determine empirically which combination of settings will work best for you.

I would definitely try no loop modelling and no MD refinement, and always build at least 100 models.
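
In script terms, the kind of run I'm recommending looks roughly like this (just a minimal sketch; the alignment file, template codes and sequence name are placeholders for your own inputs):

    from modeller import *
    from modeller.automodel import *

    env = environ()
    a = automodel(env, alnfile='alignment.ali',          # placeholder alignment
                  knowns=('templateA', 'templateB'),     # placeholder template codes
                  sequence='target',
                  assess_methods=(assess.DOPE, assess.normalized_dope))
    a.starting_model = 1
    a.ending_model = 100        # 100 models minimum
    a.md_level = None           # no MD refinement
    a.make()                    # plain automodel, so no loop modelling either

    # Rank the models that built successfully by DOPE (lower is better)
    ok_models = [m for m in a.outputs if m['failure'] is None]
    ok_models.sort(key=lambda m: m['DOPE score'])
    for m in ok_models[:10]:
        print("%s  DOPE = %.3f" % (m['name'], m['DOPE score']))

The top 5-10 from that list are the ones I then push through ProSA and PROCHECK.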

Cheers.

--- On Tue, 6/15/10, <> wrote:

From: <>
Subject: modeller_usage Digest, Vol 9, Issue 88
To:
Date: Tuesday, June 15, 2010, 8:25 AM

Send modeller_usage mailing list submissions to
    <>

To subscribe or unsubscribe via the World Wide Web, visit
    https://salilab.org/mailman/listinfo/modeller_usage
or, via email, send a message with subject or body 'help' to
    <>

You can reach the person managing the list at
    <>

When replying, please edit your Subject line so it is more specific
than "Re: Contents of modeller_usage digest..."


Today's Topics:

   1. structure refinement and loop optimization protocol
      (Thomas Evangelidis)
   2. Using my own initial structure to model a loop
      (Thomas Evangelidis)


----------------------------------------------------------------------

Message: 1
Date: Tue, 15 Jun 2010 02:29:37 +0100
From: Thomas Evangelidis <>
Subject: [modeller_usage] structure refinement and loop optimization
    protocol
To: <>
Message-ID:
    <>
Content-Type: text/plain; charset="iso-8859-1"

Dear Modellers,

I've read previous posts on the same topic and concluded that it is better
to generate many models with moderate refinement and loop optimization
levels, rather than a few with very thorough parameterization. I've also
noticed myself that with the thorough parameterization, parts of the
secondary structure get distorted.

After a lot of experimentation I have settled on the optimum alignment, and
would now like to set up a very effective optimization process. However, I'm
not sure about the output files. My code looks like this:

    a = MyLoopModel(env, alnfile=alignment,
                    knowns=known_templates,
                    assess_methods=(assess.DOPEHR, assess.normalized_dope),
                    sequence='target')
    a.starting_model = 1
    a.ending_model = 2
    # Normal VTFM model optimization:
    a.library_schedule = autosched.normal
    a.max_var_iterations = 200   # 200 by default
    # Very thorough MD model optimization:
    a.md_level = refine.slow
    a.repeat_optimization = 1

    a.loop.starting_model = 1            # First loop model
    a.loop.ending_model = 5              # Last loop model
    a.loop.md_level = refine.slow        # Loop model refinement level

This generates the following PDB files:

    target.B99990001.pdb   target.B99990002.pdb   target.BL00040002.pdb
    target.IL00000001.pdb  target.IL00000002.pdb


I thought the above would perform model refinement twice and write 5
different conformations (loop optimizations) for each. So my questions are
the following:

1) Can you explain what's happening with the .pdb files?

2) I'd like to ask your opinion on the most effective way to find a
near-native protein conformation at low sequence identity. How should
the parameters shown above be set? I don't mind if it runs for a day or so,
as long as I get good results.

3) I also attempted to cluster the models with a.cluster(cluster_cut=1.5),
which generated a representative structure containing the parts of the
protein that remained similar across most of the models, but without the
variable parts (files cluster.ini and cluster.opt). Does it make sense to
select the model that is closest to that consensus structure? If yes, is
there a way to do it with Modeller? I know it can be found with the
MaxCluster program. Or, alternatively, do you reckon it is better to select
the best model based on the normalized DOPE z-score?
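
For reference, this is roughly how I call the clustering, straight after model building (a minimal sketch of my own script):

    # Build the models, then cluster them; this writes the cluster average
    # (cluster.ini) and its optimized version (cluster.opt).
    a.make()
    a.cluster(cluster_cut=1.5)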


I hope to get some answers to these questions, because I've been struggling
to find the best refinement/optimization protocol for several weeks.

thanks,
Thomas

------------------------------

Message: 2
Date: Tue, 15 Jun 2010 13:25:22 +0100
From: Thomas Evangelidis <>
Subject: [modeller_usage] Using my own initial structure to model a
    loop
To: <>
Message-ID:
    <>
Content-Type: text/plain; charset="iso-8859-1"

I was looking at:

http://www.salilab.org/modeller/manual/node26.html#SECTION:initialmodel

and was wondering whether it is right to pass a loop fragment generated de
novo to automodel via the inifile parameter. Is it the same as using it as a
template in the alignment (which is what I have done so far)? If yes, can I
define multiple inifiles to model multiple loops of my protein whilst doing
homology modeling for the rest of it?
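
To be concrete, this is the kind of call I have in mind (a minimal sketch; the file names and the template/sequence codes are placeholders):

    # Sketch: pass my own de novo loop fragment as the initial model via
    # inifile, rather than letting automodel build its own starting structure.
    a = automodel(env, alnfile='alignment.ali',
                  knowns='template',
                  sequence='target',
                  inifile='my_denovo_loop.pdb')
    a.make()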

------------------------------

_______________________________________________
modeller_usage mailing list
<>
https://salilab.org/mailman/listinfo/modeller_usage


End of modeller_usage Digest, Vol 9, Issue 88
*********************************************