The question here is how much minimization you did with CHARMM, i.e. how much each structure has changed. If the changes are substantial, then the original Modeller models and their scores are not very relevant any more. Otherwise, in your case, the original Modeller scores are essentially indistinguishable from each other, and any of the original models looks good.
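A quick way to see how much the CHARMM minimization actually moved each model is to superpose the minimized structure on the original Modeller output and look at the CA RMSD. The sketch below is only an illustration in Python/NumPy (not part of Modeller or CHARMM); the file names are placeholders, and it assumes both PDB files contain the same residues in the same order, with no alternate locations.

```python
# Superpose the CHARMM-minimized model onto the original Modeller model
# (Kabsch fit on CA atoms) and report the RMSD between them.
import numpy as np

def ca_coords(pdb_file):
    """Collect CA coordinates (x, y, z) from a PDB file."""
    xyz = []
    with open(pdb_file) as fh:
        for line in fh:
            if line.startswith("ATOM") and line[12:16].strip() == "CA":
                xyz.append([float(line[30:38]), float(line[38:46]),
                            float(line[46:54])])
    return np.array(xyz)

def superposed_rmsd(a, b):
    """RMSD of coordinate sets a and b after optimal (Kabsch) superposition."""
    a = a - a.mean(axis=0)
    b = b - b.mean(axis=0)
    u, s, vt = np.linalg.svd(a.T @ b)
    d = np.sign(np.linalg.det(u @ vt))        # guard against reflection
    rot = u @ np.diag([1.0, 1.0, d]) @ vt     # transpose of optimal rotation
    diff = a @ rot - b
    return np.sqrt((diff ** 2).sum() / len(a))

orig = ca_coords("model_A.B99990001.pdb")     # original Modeller output (placeholder name)
mini = ca_coords("model_A_charmm_min.pdb")    # after CHARMM minimization (placeholder name)
print("CA RMSD after superposition: %.2f A" % superposed_rmsd(orig, mini))
```

If the RMSD is small (a fraction of an angstrom), the original Modeller scores still describe what you are evaluating; if it is large, they no longer do.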
The Prosa scores, on the other hand, reflect substantial differences: there is roughly a two standard deviation gap between model B and the other two.
Without CHARMM minimization, Prosa and Modeller scores correlate, but far from perfectly. In general, Prosa is good at detecting major problems, such as which template is more suitable for a given target, or which alignment is better (if there are serious differences among the alternative alignments). But if you build several models from the same template and the same alignment, Prosa is less informative, and it is better to stick to the Modeller scores. In your case this picture is blurred by the fact that CHARMM may have changed the models during refinement (for better or worse).
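For the "stick to the Modeller scores" part, it may help to rank all of the candidate models by their objective function, not just the few you happened to pick, and then run Prosa only on the top candidates. A minimal sketch in Python, assuming the objective function is written as a REMARK line in the header of each .B9999xxxx.pdb output file (the exact remark wording can vary between Modeller versions, and the file-name pattern below is a placeholder):

```python
# Rank candidate models by the Modeller objective function recorded in the
# header of each output PDB file (lower is better for models built from the
# same template and alignment).
import glob

def objective_function(pdb_file):
    """Return the objective function value from a Modeller output PDB header,
    or None if no such remark is found."""
    with open(pdb_file) as fh:
        for line in fh:
            if line.startswith("ATOM"):          # header is over
                break
            if "OBJECTIVE FUNCTION" in line:
                return float(line.split(":")[-1])
    return None

models = []
for pdb in glob.glob("target.B9999*.pdb"):       # placeholder pattern
    score = objective_function(pdb)
    if score is not None:
        models.append((score, pdb))

for score, pdb in sorted(models):
    print("%-30s %12.4f" % (pdb, score))
```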
Andras
Violet Chang wrote:
>
> Hi Modellers,
>
> I have a question regarding the objective function in the output
> file (.B99....) of MODELLER.
>
> I've been evaluating the models built by MODELLER. I selected the three
> best models from my results based on their objective function (the
> lower the better). Their objective functions are as follows:
> A: 6103.3325,
> B: 6090.5195,
> C: 6077.6436.
>
> Then I used ProsaII to calculate the z-scores after I did CHARMM energy
> minimization. Their zp-comb scores are
> A: -5.69,
> B: -3.82,
> C: -5.87.
>
> From Modeller I can see B and C might be better models than A. But
> from ProsaII, I can exclude B. Has this kind of thing happened before? I'd
> like to know if I should run Prosa on as many models as I have, in
> case the models with higher objective functions have lower z-scores.
> Thank you very much.
>
> Sincerely,
> Violet