TORQUE Resource Manager


The TORQUE Resource Manager is a distributed resource manager providing control over batch jobs and distributed compute nodes. Its name stands for Terascale Open-Source Resource and QUEue Manager.

Some of TORQUE's commands are used at the shell command line; others are embedded in the shell script that runs your program. Here are the commonly used shell commands.


Frequently Used Shell Commands

Command   Basic Usage            Description
qsub      qsub [script name]     submit a PBS job
qstat     qstat [job_id]         show status of PBS batch jobs
qdel      qdel [job_id]          delete a PBS batch job
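qsub prints the new job's identifier on stdout, and qstat and qdel then take that identifier as their argument. A minimal sketch of scripting that handoff (the job id and server name below are hypothetical; in practice the id comes from qsub itself):

```shell
# In a real session you would capture the id with: JOB_ID=$(qsub myjob.sh)
# Here we use a hypothetical id in the form qsub prints (number.server):
JOB_ID="12345.headnode"

# qstat and qdel both accept the full id:
#   qstat "$JOB_ID"
#   qdel  "$JOB_ID"

# Most TORQUE commands also accept the numeric part alone:
echo "${JOB_ID%%.*}"
```

The `%%.*` expansion strips everything from the first dot onward, leaving just the numeric job number.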


Shell Commands to Check Queue and Job Status

pbsnodes             list status of all compute nodes
qstat -q             list all queues
qstat -a             list all jobs
qstat -r             list running jobs
qstat -f job_id      list full information about job_id
qstat -Qf queue      list full information about queue
qstat -B             list summary status of the job server
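pbsnodes prints one stanza per node: the node name on its own line, followed by indented attribute lines such as "state = free". A sketch of filtering that output for free nodes (the sample text below is hypothetical; real output comes from running pbsnodes):

```shell
# Hypothetical pbsnodes-style output (two nodes, one free, one down):
PBSNODES_OUTPUT="node01
     state = free
node02
     state = down"

# Print the names of nodes whose state is free:
# grep -B1 keeps each matching line plus the line before it (the node name);
# the second grep drops the attribute lines, leaving only node names.
FREE_NODES=$(echo "$PBSNODES_OUTPUT" | grep -B1 "state = free" | grep -v "state")
echo "$FREE_NODES"
```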


For complete documentation on the commands listed here, refer to the online man pages: type man followed by the command name (for example, man qsub) at the shell prompt.

#PBS in your Job Script

The best way to control execution of your job is through #PBS directives in your job script. The job script is the shell script you normally run to execute your programs. The #PBS lines appear as comments to the shell, but when your script is submitted to the PBS job scheduler (via the qsub command) they set job attributes and select scheduler options.
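A quick way to see this: a script containing #PBS lines runs unchanged under a plain shell, because # begins a comment. The sketch below (the file name is arbitrary) writes such a script and runs it directly:

```shell
# Write a tiny job script containing #PBS directives:
cat > demo_job.sh <<'EOF'
#!/bin/sh
#PBS -N demo
#PBS -l walltime=00:01:00
echo "job body runs normally"
EOF

# Run it directly: the shell skips the #PBS lines as comments,
# and only the echo in the job body executes.
sh demo_job.sh
```

Only when the same file is handed to qsub does the scheduler read the #PBS lines and act on them.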

Here is a script template that includes descriptive comments of the #PBS commands:


# set the number of nodes and processes per node. In this case you request 2 nodes with 8 processors per node.
#PBS -l nodes=2:ppn=8

# set max wallclock time. In this case, 100 hours.
#PBS -l walltime=100:00:00

# set name of job
#PBS -N CO2_optimization

# mail alert at start, end, and abort of execution
#PBS -m bea

# send mail to this address (replace with your own email address)
#PBS -M your_email_address

# use submission environment
#PBS -V

# start job from the directory it was submitted
cd $PBS_O_WORKDIR

# your submission command

More detailed information about #PBS directives can be found, among other places, on the Research Computing and Cyberinfrastructure page at Penn State.