
job.run_all_tasks() -- run all queued tasks, and return results

run_all_tasks()
This runs all of the tasks in the job's queue on the available slaves. Tasks are started in the order they were submitted, and regardless of the order in which they finish, this function returns a list of the tasks' return values in that same submission order.

Tasks are run in a simple round-robin fashion on the available slaves. If a slave fails while running a task, that task is automatically resubmitted to another slave. If you submit more tasks than there are available slaves, new slaves are added to the job automatically, provided the job type supports this (e.g., sge_qsub_job()).
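
For instance, a job created with sge_qsub_job() can grow on demand rather than having slaves appended by hand. The sketch below is illustrative only: the qsub options string ("-l short=TRUE") and the maxslave argument are assumptions, so consult the sge_qsub_job() documentation for the actual constructor arguments.


from modeller.parallel import sge_qsub_job

# The task class must live in its own module so pickle can find it
from mytask import mytask

# Hypothetical constructor arguments: a string of options passed to
# SGE's 'qsub', and an assumed cap on the number of slaves started;
# see the sge_qsub_job() documentation for the real interface
j = sge_qsub_job("-l short=TRUE", maxslave=8)

# No slaves are appended by hand; when more tasks are queued than
# slaves exist, the job starts new ones itself, up to the cap
for code in ('1fdx', '1b3q', '1blu'):
    j.queue_task(mytask(code))
results = j.run_all_tasks()
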
Example: examples/python/mytask.py


from modeller import *
from modeller.parallel import task

class mytask(task):
    """A task to read in a PDB file on the slave, and return the resolution"""
    def run(self, code):
        env = environ()
        env.io.atom_files_directory = "../atom_files"
        log.verbose()
        mdl = model(env, file=code)
        return mdl.resolution
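
The value returned by run() on the slave is passed back to the master; it is these per-task values that run_all_tasks() collects into its result list.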

Example: examples/python/parallel-task.py


from modeller import *
from modeller.parallel import *

# Load in my task from mytask.py (note: needs to be in a separate Python
# module like this, in order for Python's pickle module to work correctly)
from mytask import mytask

# Create an empty parallel job, and then add 2 slave processes running
# on the local machine
j = job()
j.append(local_slave())
j.append(local_slave())

# Run 'mytask' tasks
j.queue_task(mytask('1fdx'))
j.queue_task(mytask('1b3q'))
j.queue_task(mytask('1blu'))

results = j.run_all_tasks()

print "Got model resolutions:", results
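
Since the results come back in submission order, they can be paired with the PDB codes queued above; a minimal sketch (the codes tuple simply restates the three codes from this example):


codes = ('1fdx', '1b3q', '1blu')
for code, resolution in zip(codes, results):
    print "Resolution of %s: %s" % (code, resolution)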


