SMP

Use the smp parallel environment for multi-threaded (shared-memory) programs; SGE allocates all of the requested slots on a single node.

Example

#!/bin/bash
#
#$ -S /bin/bash
#$ -l arch=linux-x64    # Specify architecture, required
#$ -l mem_free=1G       # Memory usage, required.  Note that this is per slot
#$ -pe smp 2            # Specify parallel environment and number of slots, required
#$ -R yes               # SGE host reservation, highly recommended
#$ -cwd                 # Current working directory

blastall -p blastp -d nr -i in.txt -o out.txt -a $NSLOTS
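SGE sets $NSLOTS at run time to the number of slots granted by -pe. A minimal sketch of guarding against running the script by hand outside the queue (the fallback to 1 is an assumption for testing, not part of the original script):

```shell
#!/bin/bash
# $NSLOTS is set by SGE; default to 1 thread when the script is
# run by hand outside the queue, where $NSLOTS is unset.
THREADS=${NSLOTS:-1}
echo "Using $THREADS thread(s)"
```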

Notes

OpenMPI

Use the ompi parallel environment for distributed-memory programs built against OpenMPI; the granted slots may be spread across multiple nodes.

Example

#!/bin/bash
#
#$ -S /bin/bash
#$ -l arch=linux-x64    # Specify architecture, required
#$ -l mem_free=1G       # Memory usage, required.  Note that this is per slot
#$ -pe ompi 2           # Specify parallel environment and number of slots, required
#$ -R yes               # SGE host reservation, highly recommended
#$ -V                   # Pass current environment to exec node, required
#$ -cwd                 # Current working directory

# Load OpenMPI-1.8 environment
module load openmpi-1.8-x86_64

# Run application
mpirun -np $NSLOTS hello_mpi

# hello_mpi is the binary
# $NSLOTS is the number of slots specified above (2 in this case)

Running tightly coupled OpenMPI jobs entirely on one node

To keep all MPI ranks on a single host and avoid inter-node communication, use the ompi_onehost parallel environment:

#!/bin/bash
#
#$ -S /bin/bash
#$ -l arch=linux-x64            # Specify architecture, required
#$ -l mem_free=1G               # Memory usage, required.  Note that this is per slot
#$ -pe ompi_onehost 8           # Specify parallel environment and number of slots, required
#$ -R yes                       # SGE host reservation, highly recommended
#$ -l h_rt=00:30:00             # Runtime estimate, highly recommended
#$ -V                           # Pass current environment to exec node, required
#$ -cwd                         # Current working directory

# Load OpenMPI-1.8 environment
module load openmpi-1.8-x86_64

# Run application
mpirun -np $NSLOTS hello_mpi

Notes

When using -V, the job's output may contain errors like the following:

bash: module: line 1: syntax error: unexpected end of file
bash: error importing function definition for `module'

These are generally harmless: the module shell function, exported into your environment by the Environment Modules package, cannot be imported cleanly on the execution node when the environment is passed with -V.
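If the module command itself fails on the execution node, a common workaround is to initialize Modules inside the job script rather than relying on -V (the init-script path is an assumption; site installs vary):

```shell
# Re-initialize Environment Modules inside the job script
# (path is an assumption -- check your site's Modules install)
source /etc/profile.d/modules.sh
module load openmpi-1.8-x86_64
```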

MPICH2

Your program must be compiled and linked against the MPICH2 libraries.
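A typical compile line uses the MPICH2 wrapper compiler (the install path matches the job scripts below; the source file name hello_mpi.c is illustrative):

```shell
# Build hello_mpi against the MPICH2 libraries via the wrapper compiler
# (install path from this page's job scripts; source name illustrative)
/netopt/mpi/mpich2/bin/mpicc -o hello_mpi hello_mpi.c
```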

Example

#!/bin/bash
#
#$ -S /bin/bash
#$ -l arch=linux-x64    # Specify architecture, required
#$ -l mem_free=1G       # Memory usage, required.  Note that this is per slot
#$ -pe pe_mpich2 2      # Specify parallel environment and number of slots, required
#$ -R yes               # SGE host reservation, highly recommended
#$ -V                   # Pass current environment to exec node, required
#$ -cwd                 # Current working directory

export MPICH2=/netopt/mpi/mpich2
export MPIEXEC_RSH=rsh

# Run mpiexec from MPICH2 directory
${MPICH2}/bin/mpiexec -rsh -nopm -n $NSLOTS -machinefile $TMPDIR/machines hello_mpi


# hello_mpi is the binary
# $NSLOTS is the number of slots specified above (2 in this case)

Notes

Running tightly coupled MPICH2 jobs entirely on one node

To keep all MPI ranks on a single host, use the pe_mpich2_onehost parallel environment:

#!/bin/bash
#
#$ -S /bin/bash
#$ -l arch=linux-x64            # Specify architecture, required
#$ -l mem_free=1G               # Memory usage, required.  Note that this is per slot
#$ -pe pe_mpich2_onehost 8      # Specify parallel environment and number of slots, required
#$ -R yes                       # SGE host reservation, highly recommended
#$ -l h_rt=00:30:00             # Runtime estimate, highly recommended
#$ -V                           # Pass current environment to exec node, required
#$ -cwd                         # Current working directory

export MPICH2=/netopt/mpi/mpich2
export MPIEXEC_RSH=rsh

# Run mpiexec from MPICH2 directory
${MPICH2}/bin/mpiexec -rsh -nopm -n $NSLOTS -machinefile $TMPDIR/machines hello_mpi

Notes

QB3cluster: Parallel_jobs (last edited 2017-07-13 23:04:09 by Joshua_Baker-LePain)