
Re: [modeller_usage] Modelling Query



Dear Ben,
 
Thank you so much for the quick response. I have understood what you explained, but the following is the error message I get when I use the command "mod9v7 model.py":
 
 mod9v7 evaluate_model.py
Could not find platform independent libraries <prefix>
Could not find platform dependent libraries <exec_prefix>
Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>]
'import site' failed; use -v for traceback
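
(The message itself suggests setting $PYTHONHOME. As a hedged sketch only, assuming the system Python lives under the /usr prefix, which may not be the case on your machine, that would look like:

export PYTHONHOME=/usr
mod9v7 model.py

Note that in the second run shown further below the script still executes past this point, so these lines may be warnings rather than the cause of the failure.)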

 
My second question is about checking the reliability of an alignment in PIR format, which we use to generate the models. Since I generated this alignment by hand threading, I want to check its reliability. Please tell me whether I can do that using the following script:
 
# Example for: alignment.append(), alignment.write(),
#              alignment.check()
# Read an alignment, write it out in the 'PAP' format, and
# check the alignment of the N-1 structures as well as the
# alignment of the N-th sequence with each of the N-1 structures.
from modeller import *
log.level(output=1, notes=1, warnings=1, errors=1, memory=0)
env = environ()
env.io.atom_files_directory = '../atom_files'
aln = alignment(env)
aln.append(file='ompA-1mal-align.ali', align_codes='all')
aln.write(file='ompA-1mal.pap', alignment_format='PAP')
aln.write(file='ompA-1mal.fasta', alignment_format='FASTA')
aln.check()
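
For reference, a minimal way to run this script and keep its output (mirroring the earlier suggestion for model.py) is:

mod9v7 align.py

which writes the output, including the check() report, to align.log; or equivalently:

python align.py > align.log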
 
 
 
When I ran this script with "mod9v7 align.py", I first got the same error message as for "mod9v7 model.py", and the remaining error messages seem to be about the alignment format, which I could not understand:
 
mod9v7 align.py
Could not find platform independent libraries <prefix>
Could not find platform dependent libraries <exec_prefix>
Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>]
'import site' failed; use -v for traceback
Traceback (most recent call last):
  File "align.py", line 18, in ?
    aln.check()
  File "/usr/lib/modeller9v7/modlib/modeller/alignment.py", line 198, in check
    self.check_structure_structure(io=io)
  File "/usr/lib/modeller9v7/modlib/modeller/alignment.py", line 207, in check_structure_structure
    return f(self.modpt, io.modpt, self.env.libs.modpt, eqvdst)
IOError: readlinef__E> Error encountered on file read: Is a directory
 
Thanks in advance,
 
Muhammad Noon

--- On Thu, 6/25/09, Modeller Caretaker <> wrote:

From: Modeller Caretaker <>
Subject: Re: [modeller_usage] Modelling Query
To: "Cool Hunk" <>
Cc:
Date: Thursday, June 25, 2009, 9:34 AM

Cool Hunk wrote:
> I am a new Modeller user and I am having some problems running Modeller.
> When I run a script with the command "mod9v7 model.py" it does not run
> completely and generates errors

What errors? We can only help if you tell us what the errors are.

> but when I run the script with the command
> "Python model.py" it runs completely and generates all the PDB files of
> the models, but no log file, even though I have included
> "log.verbose()" in the script; the same is the case for the
> model evaluation script and the energy calculation script.

Python doesn't generate log files - the output comes out on standard
output (i.e. your screen, if you didn't direct it somewhere else). To
get something very similar to running mod9v7, use something like

python model.py > model.log

> My other question is about model evaluation: do I have to run
> the evaluation script for each model individually and then compare the
> final DOPE scores, or does the script take all the models at once and give
> me the final best model?

You can do it either way. You can evaluate a single model with the
script at http://salilab.org/modeller/9v7/manual/node242.html
Alternatively you can have automodel assess every model it builds
automatically by setting assess_methods; an example is at
http://salilab.org/modeller/9v7/manual/node20.html
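
For illustration, here is a minimal sketch of the assess_methods approach; the alignment file name and the knowns/sequence codes below are guessed from your file names and are assumptions, so adjust them to match the codes in your alignment:

from modeller import *
from modeller.automodel import *

log.verbose()
env = environ()
env.io.atom_files_directory = '../atom_files'

# Build five models and score each one automatically with DOPE and GA341;
# the scores are listed in the summary table at the end of the log.
a = automodel(env, alnfile='ompA-1mal-align.ali',
              knowns='1mal', sequence='ompA',
              assess_methods=(assess.DOPE, assess.GA341))
a.starting_model = 1
a.ending_model = 5
a.make()

You can then simply pick the model with the lowest (most negative) DOPE score
from that summary table.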

    Ben Webb, Modeller Caretaker
--
" ymailto="mailto:">             http://www.salilab.org/modeller/
Modeller mail list: http://salilab.org/mailman/listinfo/modeller_usage