This creates a new job object, used to keep track of multiple slave processes. It is initially empty, but acts just like an ordinary Python list, so you can add or remove slave objects (see below) using ordinary list operations (e.g., append, del). Alternatively, if you pass a list of suitable slave objects to the job() constructor, they are added automatically.
Each slave runs a MODELLER process. This expects to find 'mod9v3' in your PATH; if this is not the case, specify the binary location and name with the modeller_path variable. (By specifying the 'bin/modslave.py' script instead, you can start a slave that uses the system Python rather than the interpreter bundled with MODELLER, which is useful if you are also using the system Python for your master process.)
Each slave, when started, tries to connect back over the network to the master node. By default, they try to use the fully qualified domain name of the machine on which you create the job object (the master). If this name is incorrect (e.g., on multi-homed hosts) then specify the true hostname with the host parameter.
Each slave runs in the same directory as the master, so it will probably fail unless all nodes share a filesystem. The output from each slave is written to a logfile called '${JOB}.slaveN', where '${JOB}' is the name of your master script file (or 'stdout' if you are reading from standard input) and 'N' is the number of the slave, starting from zero.
Once you have created the job, use the task interface by submitting one or more tasks with job.queue_task() and then running them all with job.run_all_tasks().
To use the message-passing interface, first start all slaves with job.start(), and then use Communicator.send_data(), Communicator.get_data() and slave.run_cmd() to pass messages and commands.
Example: see the job.start() and job.run_all_tasks() commands.