bsub
submits a batch job to LSF

SYNOPSIS
bsub [options] command [arguments]
bsub [-h | -V]

OPTION LIST
-I | -Ip | -Is
-a esub_parameters
-b [[month:]day:]hour:minute
-C core_limit
-c [hours:]minutes[/host_name | /host_model]
-D data_limit
-e err_file
-E "pre_exec_command [arguments ...]"
-ext[sched] "external_scheduler_options"
-f "local_file operator [remote_file]" ...
-F file_limit
-g job_group_name
-G user_group
-i input_file | -is input_file
-J job_name | -J "job_name[index_list]%job_slot_limit"
-k "checkpoint_dir [checkpoint_period][method=method_name]"
-L login_shell
-m "host_name[@cluster_name][+[pref_level]] | host_group[+[pref_level]] ..."
-M mem_limit
-n min_proc[,max_proc]
-o out_file
-P project_name
-p process_limit
-q "queue_name ..."
-R "res_req"
-sla service_class_name
-sp priority
-S stack_limit
-t [[month:]day:]hour:minute
-T thread_limit
-U reservation_ID
-u mail_user
-v swap_limit
-w 'dependency_expression'
-wa '[signal | command | CHKPNT]'
-wt '[hours:]minutes'
-W [hours:]minutes[/host_name | /host_model]

DESCRIPTION
Submits a job for batch execution and assigns it a unique numerical job ID.
Runs the job on a host that satisfies all requirements of the job, when all conditions on the job, host, queue, and cluster are satisfied. If LSF cannot run all jobs immediately, LSF scheduling policies determine the order of dispatch. Jobs are started and suspended according to the current system load.
Sets the user's execution environment for the job, including the current working directory, file creation mask, and all environment variables, and sets LSF environment variables before starting the job.
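A typical submission combines a queue, an output file, and the command to run. In the following sketch, the queue name normal and the script my_job are placeholders:

```shell
# Submit my_job to the normal queue, appending its standard output
# to my_job.out; bsub prints the assigned job ID on success.
bsub -q normal -o my_job.out my_job
```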
When a job is run, the command line and stdout/stderr buffers are stored in the directory home_directory/.lsbatch on the execution host. If this directory is not accessible, /tmp/.lsbtmp user_ID is used as the job's home directory. If the current working directory is under the home directory on the submission host, then the current working directory is also set to be the same relative directory under the home directory on the execution host. The job is run in /tmp if the current working directory is not accessible on the execution host.

If no command is supplied, bsub prompts for the command from the standard input. On UNIX, the input is terminated by entering CTRL-D on a new line. On Windows, the input is terminated by entering CTRL-Z on a new line.

Use -g to submit a job to a job group.
Use -n to submit a parallel job.
Use -I, -Is, or -Ip to submit a batch interactive job.
Use -J to assign a name to your job.
Use -k to specify a checkpointable job.
To kill a batch job submitted with bsub, use bkill.

Jobs submitted to a chunk job queue with the following options are not chunked; they are dispatched individually:
-I (interactive jobs)
-c (jobs with a CPU limit greater than 30 minutes)
-W (jobs with a run limit greater than 30 minutes)

To submit jobs from UNIX to display GUIs through Microsoft Terminal Services on Windows, submit the job with bsub and define the environment variables LSF_LOGON_DESKTOP=1 and LSB_TSJOB=1 on the UNIX host. Use tssub to submit a Terminal Services job from Windows hosts. See LSF on Windows for more details.

Use bmod to modify jobs submitted with bsub. bmod takes similar options to bsub.

If the parameter LSB_STDOUT_DIRECT in lsf.conf is set to Y or y, and you use the -o option, the standard output of a job is written to the file you specify as the job runs. If LSB_STDOUT_DIRECT is not set, and you use -o, the standard output of a job is written to a temporary file and copied to the specified file after the job finishes. LSB_STDOUT_DIRECT is not supported on Windows.

DEFAULT BEHAVIOR
LSF assumes that uniform user names and user ID spaces exist among all the hosts in the cluster. That is, a job submitted by a given user will run under the same user's account on the execution host. For situations where nonuniform user names and user ID spaces exist, account mapping must be used to determine the account used to run a job.
bsub uses the command name as the job name. Quotation marks are significant.

If fairshare is defined and you belong to multiple user groups, the job is scheduled under the user group that allows the quickest dispatch.

The job is not checkpointable.

bsub automatically selects an appropriate queue. If you defined a default queue list by setting LSB_DEFAULTQUEUE, the queue is selected from your list. If LSB_DEFAULTQUEUE is not defined, the queue is selected from the system default queue list specified by the LSF administrator (see the parameter DEFAULT_QUEUE in lsb.params(5)).

LSF tries to obtain resource requirement information for the job from the remote task list that is maintained by the load sharing library (see lsfintro(1)). If the job is not listed in the remote task list, the default resource requirement is to run the job on a host or hosts that are of the same host type (see lshosts(1)) as the submission host.

bsub assumes only one processor is requested.

bsub does not start a login shell but runs the job file under the execution environment from which the job was submitted.

The input file for the batch job is /dev/null (no input).

bsub sends mail to you when the job is done. The default destination is defined by LSB_MAILTO in lsf.conf. The mail message includes the job report, the job output (if any), and the error message (if any).

bsub charges the job to the default project. The default project is the project you define by setting the environment variable LSB_DEFAULTPROJECT. If you do not set LSB_DEFAULTPROJECT, the default project is the project specified by the LSF administrator in the lsb.params configuration file (see the DEFAULT_PROJECT parameter in lsb.params(5)). If DEFAULT_PROJECT is not defined, LSF uses default as the default project name.

OPTIONS
-B
Sends mail to you when the job is dispatched and begins execution.
-H

Holds the job in the PSUSP state when the job is submitted. The job will not be scheduled until you tell the system to resume the job (see bresume(1)).

-I | -Ip | -Is
Submits a batch interactive job. A new job cannot be submitted until the interactive job is completed or terminated.
Sends the job's standard output (or standard error) to the terminal. Does not send mail to you when the job is done unless you specify the -N option.

Terminal support is available for a batch interactive job.
When you specify the -Ip option, submits a batch interactive job and creates a pseudo-terminal when the job starts. Some applications (for example, vi) require a pseudo-terminal in order to run correctly.

When you specify the -Is option, submits a batch interactive job and creates a pseudo-terminal with shell mode support when the job starts. Specify this option when submitting interactive shells, or applications that redefine the CTRL-C and CTRL-Z keys (for example, jove).

If the -i input_file option is specified, you cannot interact with the job's standard input via the terminal.

If the -o out_file option is specified, sends the job's standard output to the specified output file. If the -e err_file option is specified, sends the job's standard error to the specified error file.

You cannot use -I, -Ip, or -Is with the -K option.

Interactive jobs cannot be checkpointed.

Interactive jobs cannot be rerunnable (bsub -r).

The options that create a pseudo-terminal (-Ip and -Is) are not supported on Windows.

-K
Submits a batch job and waits for the job to complete. Sends the message "Waiting for dispatch" to the terminal when you submit the job. Sends the message "Job is finished" to the terminal when the job is done.

You cannot submit another job until this job is completed. This is useful when completion of the job is required before proceeding, such as in a job script. If the job needs to be rerun due to transient failures, bsub returns after the job finishes successfully. bsub exits with the same exit code as the job, so that job scripts can take appropriate actions based on the exit code. bsub exits with value 126 if the job was terminated while pending.

You cannot use the -K option with the -I, -Ip, or -Is options.

-N
Sends the job report to you by mail when the job finishes. When used without any other options, behaves the same as the default.
Use only with the -o, -I, -Ip, and -Is options, which do not send mail, to force LSF to send you a mail message when the job is done.

-r
If the execution host becomes unavailable while a job is running, specifies that the job will rerun on another host. LSF requeues the job in the same job queue with the same job ID. When an available execution host is found, reruns the job as if it were submitted new, even if the job has been checkpointed. You receive a mail message informing you of the host failure and requeuing of the job.
If the system goes down while a job is running, specifies that the job will be requeued when the system restarts.
Reruns a job if the execution host or the system fails; it does not rerun a job if the job itself fails.
Members of a chunk job can be rerunnable. If the execution host becomes unavailable, rerunnable chunk job members are removed from the queue and dispatched to a different execution host.
Interactive jobs (bsub -I) cannot be rerunnable.

-x
Puts the host running your job into exclusive execution mode.
In exclusive execution mode, your job runs by itself on a host. It is dispatched only to a host with no other jobs running, and LSF does not send any other jobs to the host until the job completes.
To submit a job in exclusive execution mode, the queue must be configured to allow exclusive jobs.
When the job is dispatched, bhosts(1) reports the host status as closed_Excl, and lsload(1) reports the host status as lockU.

Until your job is complete, the host is not selected by LIM in response to placement requests made by lsplace(1), lsrun(1), lsgrun(1), or any other load sharing applications.

You can force other batch jobs to run on the host by using the -m host_name option of brun(1) to explicitly specify the locked host.

You can force LIM to run other interactive jobs on the host by using the -m host_name option of lsrun(1) or lsgrun(1) to explicitly specify the locked host.

-a esub_parameters
Arbitrary string that provides additional parameters to be passed to the master esub. The master esub (mesub) handles job submission requirements of your applications. Application-specific esub programs can specify their own job submission requirements. Use the -a option to specify which application-specific esub is invoked by mesub.

For example, to submit a job to hostA that invokes an esub named esub.license:

% bsub -a license -m hostA my_job

The method name license uses the esub named LSF_SERVERDIR/esub.license.

-b [[month:]day:]hour:minute
Dispatches the job for execution on or after the specified date and time. The date and time are in the form [[month:]day:]hour:minute, where the number ranges are as follows: month 1-12, day 1-31, hour 0-23, minute 0-59.

At least two fields must be specified. These fields are assumed to be hour:minute. If three fields are given, they are assumed to be day:hour:minute, and four fields are assumed to be month:day:hour:minute.
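For example, the following sketches show the two- and three-field forms (my_job is a placeholder):

```shell
# Dispatch no earlier than 20:00 today (hour:minute)
bsub -b 20:00 my_job

# Dispatch no earlier than 20:00 on day 15 of this month (day:hour:minute)
bsub -b 15:20:00 my_job
```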
-C core_limit
Sets a per-process (soft) core file size limit for all the processes that belong to this batch job (see getrlimit(2)). The core limit is specified in KB.

The behavior of this option depends on platform-specific UNIX systems.
In some cases, the process is sent a SIGXFSZ signal if the job attempts to create a core file larger than the specified limit. The SIGXFSZ signal normally terminates the process.
In other cases, the writing of the core file terminates at the specified limit.
-c [hours:]minutes[/host_name | /host_model]
Limits the total CPU time the job can use. This option is useful for preventing runaway jobs or jobs that use up too many resources. When the total CPU time for the whole job has reached the limit, a SIGXCPU signal is first sent to the job, then SIGINT, SIGTERM, and SIGKILL.
If LSB_JOB_CPULIMIT in lsf.conf is set to n, the LSF-enforced CPU limit is disabled and LSF passes the limit to the operating system. When one process in the job exceeds the CPU limit, the limit is enforced by the operating system.

The CPU limit is in the form [hours:]minutes. The minutes can be specified as a number greater than 59. For example, three and a half hours can be specified either as 3:30 or as 210.

The CPU time you specify is the normalized CPU time. This is done so that the job does approximately the same amount of processing for a given CPU limit, even if it is sent to a host with a faster or slower CPU. Whenever a normalized CPU time is given, the actual time on the execution host is the specified time multiplied by the CPU factor of the normalization host, then divided by the CPU factor of the execution host.

Optionally, you can supply a host name or a host model name defined in LSF. You must insert a slash (/) between the CPU limit and the host name or model name. (See lsinfo(1) to get host model information.) If a host name or model name is not given, LSF uses the default CPU time normalization host defined at the queue level (DEFAULT_HOST_SPEC in lsb.queues) if it has been configured; otherwise it uses the default CPU time normalization host defined at the cluster level (DEFAULT_HOST_SPEC in lsb.params) if it has been configured; otherwise it uses the submission host.

Jobs submitted to a chunk job queue are not chunked if the CPU limit is greater than 30 minutes.
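The following sketches illustrate the equivalent limit forms and host normalization (my_job is a placeholder, and SUNSOL6 is a hypothetical host model name, not one guaranteed to exist in your cluster):

```shell
# These two submissions set the same normalized CPU limit (3.5 hours):
bsub -c 3:30 my_job
bsub -c 210 my_job

# Normalize the limit against a specific host model instead of the default:
bsub -c 210/SUNSOL6 my_job
```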
-D data_limit

Sets a per-process (soft) data segment size limit for each of the processes that belong to the batch job (see getrlimit(2)). The data limit is specified in KB. An sbrk call to extend the data segment beyond the data limit returns an error.

-e err_file
Specify a file path. Appends the standard error output of the job to the specified file.
If the parameter LSB_STDOUT_DIRECT in lsf.conf is set to Y or y, the standard error output of a job is written to the file you specify as the job runs. If LSB_STDOUT_DIRECT is not set, it is written to a temporary file and copied to the specified file after the job finishes. LSB_STDOUT_DIRECT is not supported on Windows.

If you use the special character %J in the name of the error file, %J is replaced by the job ID of the job. If you use the special character %I in the name of the error file, %I is replaced by the index of the job in the array if the job is a member of an array. Otherwise, %I is replaced by 0 (zero).

If the current working directory is not accessible on the execution host after the job starts, LSF writes the standard error output file to /tmp/.
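For instance, a per-job error file can be built from the %J and %I substitutions (my_job is a placeholder):

```shell
# For job 1234, stderr goes to err.1234.0;
# for element 5 of job array 1234, it goes to err.1234.5
bsub -e err.%J.%I my_job
```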
.-E "pre_exec_command [arguments ...]"
Runs the specified pre-exec command on the batch job's execution host before actually running the job. For a parallel job, the pre-exec command runs on the first host selected for the parallel job.
If the pre-exec command exits with 0 (zero), then the real job is started on the selected host. Otherwise, the job (including the pre-exec command) goes back to PEND status and is rescheduled.
If your job goes back into PEND status, LSF keeps trying to run the pre-exec command and the real job when conditions permit. For this reason, make sure that your pre-exec command can be run many times without side effects.
The standard input and output for the pre-exec command are directed to the same files as for the real job. The pre-exec command runs under the same user ID, environment, home, and working directory as the real job. If the pre-exec command is not in the user's normal execution path (the $PATH variable), the full path name of the command must be specified.
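A common use of the pre-exec command is to verify a precondition on the execution host; this sketch (the /scratch path and my_job are placeholders) is safe to rerun because it has no side effects:

```shell
# Run the job only if /scratch is mounted on the execution host;
# a non-zero exit from the check returns the job to PEND for rescheduling
bsub -E "test -d /scratch" -o my_job.out my_job
```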
-ext[sched] "external_scheduler_options"
Application-specific external scheduling options for the job.
To enable jobs to accept external scheduler options, set LSF_ENABLE_EXTSCHEDULER=y in lsf.conf.

You can abbreviate the -extsched option to -ext.

You can specify only one type of external scheduler option in a single -extsched string.

For example, SGI IRIX hosts and AlphaServer SC hosts running RMS can exist in the same cluster, but they accept different external scheduler options. Use external scheduler options to define job requirements for either IRIX cpusets OR RMS, but not both. Your job will run either on IRIX or RMS. If external scheduler options are not defined, the job may run on IRIX but it will not run on an RMS host.

The options set by -extsched can be combined with the queue-level MANDATORY_EXTSCHED or DEFAULT_EXTSCHED parameters. If -extsched and MANDATORY_EXTSCHED set the same option, the MANDATORY_EXTSCHED setting is used. If -extsched and DEFAULT_EXTSCHED set the same options, the -extsched setting is used.

Use DEFAULT_EXTSCHED in lsb.queues to set default external scheduler options for a queue.

To make certain external scheduler options mandatory for all jobs submitted to a queue, specify MANDATORY_EXTSCHED in lsb.queues with the external scheduler options you need for your jobs.

-f "local_file operator [remote_file]" ...
Copies a file between the local (submission) host and the remote (execution) host. Specify absolute or relative paths, including the file names. You should specify the remote file as a file name with no path when running in non-shared systems.
If the remote file is not specified, it defaults to the local file, which must be given. Use multiple -f options to specify multiple files.

operator
An operator that specifies whether the file is copied to the remote host, or whether it is copied back from the remote host. The operator must be surrounded by white space.
The following describes the operators:
>
Copies the local file to the remote file before the job starts. Overwrites the remote file if it exists.
<
Copies the remote file to the local file after the job completes. Overwrites the local file if it exists.
<<
Appends the remote file to the local file after the job completes. The local file must exist.
><
Copies the local file to the remote file before the job starts. Overwrites the remote file if it exists. Then copies the remote file to the local file after the job completes. Overwrites the local file.
<>
Copies the local file to the remote file before the job starts. Overwrites the remote file if it exists. Then copies the remote file to the local file after the job completes. Overwrites the local file.

If you use the -i input_file option, you do not have to use the -f option to copy the specified input file to the execution host. LSF does this for you, and removes the input file from the execution host after the job completes.

If you use the -e err_file or the -o out_file option, and you want the specified file to be copied back to the submission host when the job completes, you must use the -f option.

If the submission and execution hosts have different directory structures, you must make sure that the directory where the remote file and local file will be placed exists.
If the local and remote hosts have different file name spaces, you must always specify relative path names. If the local and remote hosts do not share the same file system, you must make sure that the directory containing the remote file exists. It is recommended that only the file name be given for the remote file when running in heterogeneous file systems. This places the file in the job's current working directory. If the file is shared between the submission and execution hosts, then no file copy is performed.
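For example, input and output files can be staged in and out with two -f options (data.in, data.out, and my_job are placeholders):

```shell
# Copy data.in to the execution host before the job starts (>),
# and copy data.out back to the submission host afterward (<)
bsub -f "data.in > data.in" -f "data.out < data.out" my_job

# On non-shared file systems, give the remote file with no path so it
# lands in the job's current working directory
bsub -f "/home/me/data.in > data.in" my_job
```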
LSF uses lsrcp to transfer files (see the lsrcp(1) command). lsrcp contacts RES on the remote host to perform the file transfer. If RES is not available, rcp is used (see rcp(1)). The user must make sure that the rcp binary is in the user's $PATH on the execution host.

Jobs that are submitted from LSF client hosts should specify the -f option only if rcp is allowed. Similarly, rcp must be allowed if account mapping is used.

-F file_limit
Sets a per-process (soft) file size limit for each of the processes that belong to the batch job (see getrlimit(2)). The file size limit is specified in KB. If a job process attempts to write to a file that exceeds the file size limit, that process is sent a SIGXFSZ signal. The SIGXFSZ signal normally terminates the process.

-g job_group_name
Submits jobs in the job group specified by job_group_name. The job group does not have to exist before submitting the job. For example:

% bsub -g /risk_group/portfolio1/current myjob
Job <105> is submitted to default queue.

Submits myjob to the job group /risk_group/portfolio1/current.

If group /risk_group/portfolio1/current exists, job 105 is attached to the job group.

If group /risk_group/portfolio1/current does not exist, LSF checks its parents recursively, and if no groups in the hierarchy exist, all three job groups are created with the specified hierarchy and the job is attached to the group.

You cannot use -g with -sla. A job can be attached to either a job group or a service class, but not both.

-G user_group
Only useful with fairshare scheduling.
Associates the job with the specified group. Specify any group that you belong to that does not contain any subgroups. You must be a direct member of the specified user group.
-i input_file | -is input_file
Gets the standard input for the job from specified file. Specify an absolute or relative path. The input file can be any type of file, though it is typically a shell script text file.
If the file exists on the execution host, LSF uses it. Otherwise, LSF attempts to copy the file from the submission host to the execution host. For the file copy to be successful, you must allow remote copy (rcp) access, or you must submit the job from a server host where RES is running. The file is copied from the submission host to a temporary file in the directory specified by the JOB_SPOOL_DIR parameter, or to your $HOME/.lsbatch directory on the execution host. LSF removes this file when the job completes.

By default, the input file is spooled to LSB_SHAREDIR/cluster_name/lsf_indir. If the lsf_indir directory does not exist, LSF creates it before spooling the file. LSF removes the spooled file when the job completes. Use the -is option if you need to modify or remove the input file before the job completes. Removing or modifying the original input file does not affect the submitted job.

If JOB_SPOOL_DIR in lsb.params is specified, the -is option spools the input file to the specified directory and uses the spooled file as the input file for the job.

JOB_SPOOL_DIR must be readable and writable by the job submission user, and it must be shared by the master host and the submission host. If the specified directory is not accessible or does not exist, bsub -is cannot write to the default directory LSB_SHAREDIR/cluster_name/lsf_indir and the job will fail.

Unless you use -is, you can use the special characters %J and %I in the name of the input file. %J is replaced by the job ID. %I is replaced by the index of the job in the array if the job is a member of an array, otherwise by 0 (zero). The special characters %J and %I are not valid with the -is option.
option.-J job_name | -J "job_name[index_list]%job_slot_limit"
Assigns the specified name to the job, and, for job arrays, specifies the indices of the job array and optionally the maximum number of jobs that can run at any given time.
The job name need not be unique.
To specify a job array, enclose the index list in square brackets, as shown, and enclose the entire job array specification in quotation marks, as shown. The index list is a comma-separated list whose elements have the syntax start[-end[:step]], where start, end, and step are positive integers. If the step is omitted, a step of one is assumed. The job array index starts at one. By default, the maximum job array index is 1000.

You can also use a positive integer to specify the system-wide job slot limit (the maximum number of jobs that can run at any given time) for this job array.
All jobs in the array share the same job ID and parameters. Each element of the array is distinguished by its array index.
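For example, the index list and slot limit combine as follows (the array name sim and my_job are placeholders):

```shell
# Submit a job array with indices 1, 3, 5, ..., 99 (step 2),
# allowing at most 10 elements to run at any one time;
# %J/%I give each element its own output file
bsub -J "sim[1-99:2]%10" -o out.%J.%I my_job
```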
After a job is submitted, you use the job name to identify the job. Specify "job_ID[index]" to work with elements of a particular array. Specify "job_name[index]" to work with elements of all arrays with the same name. Because job names are not unique, multiple job arrays may have the same name with a different or the same set of indices.

-k "checkpoint_dir [checkpoint_period][method=method_name]"
Makes a job checkpointable and specifies the checkpoint directory. If you omit the checkpoint period, the quotes are not required. Specify a relative or absolute path name.
When a job is checkpointed, the checkpoint information is stored in checkpoint_dir/job_ID/file_name. Multiple jobs can checkpoint into the same directory. The system can create multiple files.
The checkpoint directory is used for restarting the job (see brestart(1)).

Optionally, specifies a checkpoint period in minutes. Specify a positive integer. The running job is checkpointed automatically every checkpoint period. The checkpoint period can be changed using bchkpnt(1). Because checkpointing is a heavyweight operation, you should choose a checkpoint period greater than half an hour.

Optionally, specifies a custom checkpoint and restart method to use with the job. Use method=default to use LSF's default checkpoint and restart programs for the job, echkpnt.default and erestart.default.

The echkpnt.method_name and erestart.method_name programs must be in LSF_SERVERDIR or in the directory specified by LSB_ECHKPNT_METHOD_DIR (environment variable or set in lsf.conf).

If a custom checkpoint and restart method is already specified with LSB_ECHKPNT_METHOD (environment variable or in lsf.conf), the method you specify with bsub -k overrides it.

Process checkpointing is not available on all host types, and may require linking programs with a special library (see libckpt.a(3)). LSF invokes echkpnt (see echkpnt(8)) found in LSF_SERVERDIR to checkpoint the job. You can override the default echkpnt for the job by defining LSB_ECHKPNT_METHOD and LSB_ECHKPNT_METHOD_DIR (as environment variables or in lsf.conf) to point to your own echkpnt. This allows you to use other checkpointing facilities, including application-level checkpointing.

The checkpoint method directory should be accessible by all users who need to run the custom echkpnt and erestart programs.

Only running members of a chunk job can be checkpointed.
-L login_shell
Initializes the execution environment using the specified login shell. The specified login shell must be an absolute path. This is not necessarily the shell under which the job will be executed.
Login shell is not supported on Windows.
-m "host_name[@cluster_name][+[pref_level]] | host_group[+[pref_level]] ..."
Runs the job on one of the specified hosts.
By default, if multiple hosts are candidates, runs the job on the least-loaded host.
To change the order of preference, put a plus (+) after the names of hosts or host groups that you would prefer to use, optionally followed by a preference level. For the preference level, specify a positive integer, with higher numbers indicating greater preference for those hosts. For example, -m "hostA groupB+2 hostC+1" indicates that groupB is the most preferred and hostA is the least preferred.

The keyword others can be specified with or without a preference level to refer to other hosts not otherwise listed. The keyword others must be specified with at least one host name or host group; it cannot be specified by itself. For example, -m "hostA+ others" means that hostA is preferred over all other hosts.

If you also use -q, the specified queue must be configured to include all the hosts in your host list. Otherwise, the job is not submitted. To find out which hosts are configured for the queue, use bqueues -l.

To display configured host groups, use bmgroup.

For the MultiCluster job forwarding model, you cannot specify a remote host by name.
-M mem_limit
Sets a per-process (soft) memory limit for all the processes that belong to this batch job (see getrlimit(2)). The memory limit is specified in KB.

If LSB_MEMLIMIT_ENFORCE or LSB_JOB_MEMLIMIT is set to y in lsf.conf, LSF kills the job when it exceeds the memory limit. Otherwise, LSF passes the memory limit to the operating system. UNIX operating systems that support RUSAGE_RSS for setrlimit() can apply the memory limit to each process.

The following operating systems do not support the memory limit at the OS level:
- Windows
- Sun Solaris 2.x
-n min_proc[,max_proc]
Submits a parallel job and specifies the number of processors required to run the job (some of the processors may be on the same multiprocessor host).
You can specify a minimum and maximum number of processors to use. The job can start if at least the minimum number of processors is available. If you do not specify a maximum, the number you specify represents the exact number of processors to use.
Jobs that request fewer slots than the minimum PROCLIMIT defined for the queue to which the job is submitted, or more slots than the maximum PROCLIMIT cannot use the queue and are rejected. If the job requests minimum and maximum job slots, the maximum slots requested cannot be less than the minimum PROCLIMIT, and the minimum slots requested cannot be more than the maximum PROCLIMIT.
For example, if the queue defines PROCLIMIT=4 8:

bsub -n 6 is accepted because it requests slots within the range of PROCLIMIT
bsub -n 9 is rejected because it requests more slots than the PROCLIMIT allows
bsub -n 1 is rejected because it requests fewer slots than the PROCLIMIT allows
bsub -n 6,10 is accepted because the minimum value 6 is within the range of the PROCLIMIT setting
bsub -n 1,6 is accepted because the maximum value 6 is within the range of the PROCLIMIT setting
bsub -n 10,16 is rejected because its range is outside the range of PROCLIMIT
bsub -n 1,3 is rejected because its range is outside the range of PROCLIMIT
See the PROCLIMIT parameter in lsb.queues(5) for more information.

In a MultiCluster environment, if a queue exports jobs to remote clusters (see the SNDJOBS_TO parameter in lsb.queues(5)), the process limit is not imposed on jobs submitted to this queue.

Once the required number of processors is available, the job is dispatched to the first host selected. The list of selected host names for the job is specified in the environment variables LSB_HOSTS and LSB_MCPU_HOSTS. The job itself is expected to start parallel components on these hosts and establish communication among them, optionally using RES.
-o out_file
Specify a file path. Appends the standard output of the job to the specified file. Sends the output by mail if the file does not exist, or the system has trouble writing to it.
If only a file name is specified, LSF writes the output file to the current working directory. If the current working directory is not accessible on the execution host after the job starts, LSF writes the standard output file to /tmp/.
If the parameter LSB_STDOUT_DIRECT in lsf.conf is set to Y or y, the standard output of a job is written to the file you specify as the job runs. If LSB_STDOUT_DIRECT is not set, it is written to a temporary file and copied to the specified file after the job finishes. LSB_STDOUT_DIRECT is not supported on Windows.

If you use -o without -e, the standard error of the job is stored in the output file.

If you use -o without -N, the job report is stored in the output file as the file header.

If you use both -o and -N, the output is stored in the output file and the job report is sent by mail. The job report itself does not contain the output, but the report advises you where to find your output.

If you use the special character %J in the name of the output file, %J is replaced by the job ID of the job. If you use the special character %I in the name of the output file, %I is replaced by the index of the job in the array if the job is a member of an array. Otherwise, %I is replaced by 0 (zero).

-P project_name
Assigns the job to the specified project.
On IRIX 6, you must be a member of the project as listed in /etc/project(4). If you are a member of the project, then /etc/projid(4) maps the project name to a numeric project ID. Before the submitted job executes, a new array session (newarraysess(2)) is created and the project ID is assigned to it using setprid(2).
-p process_limit
Sets the limit of the number of processes to process_limit for the whole job. The default is no limit. Exceeding the limit causes the job to terminate.
-q "queue_name ..."
Submits the job to one of the specified queues. Quotes are optional for a single queue. The specified queues must be defined for the local cluster. For a list of available queues in your local cluster, use bqueues.
When a list of queue names is specified, LSF selects the most appropriate queue in the list for your job, based on the job's resource limits and other restrictions, such as the requested hosts, your accessibility to a queue, and queue status (closed or open). The queues are considered in the same order in which they are listed; the queue listed first is considered first.
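For example (queue names hypothetical), to let LSF choose among three queues, listed in order of preference:

```shell
# LSF considers the queues in the order listed: short first, then
# normal, then night. All queue names here are hypothetical.
bsub -q "short normal night" my_program
```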
-R "res_req"
Runs the job on a host that meets the specified resource requirements. A resource requirement string describes the resources a job needs. LSF uses resource requirements to select hosts for remote execution and job execution.
The size of the resource requirement string is limited to 512 characters.
Any run-queue-length-specific resource, such as r15s, r1m, or r15m, specified in the resource requirements refers to the normalized run queue length.
A resource requirement string is divided into the following sections:
- A selection section (select). The selection section specifies the criteria for selecting hosts from the system.
- An ordering section (order). The ordering section indicates how the hosts that meet the selection criteria should be sorted.
- A resource usage section (rusage). The resource usage section specifies the expected resource consumption of the task.
- A job spanning section (span). The job spanning section indicates if a parallel batch job should span across multiple hosts.
- A same resource section (same). The same section indicates that all processes of a parallel job must run on the same type of host.
If no section name is given, the entire string is treated as a selection string. The select keyword may be omitted if the selection string is the first string in the resource requirement.
The resource requirement string has the following syntax:
select[selection_string] order[order_string] rusage[usage_string [, usage_string] ...] span[span_string] same[same_string]
The square brackets must be typed as shown.
The section names are select, order, rusage, span, and same. Sections that do not apply for a command are ignored. Each section has a different syntax.
For example, to submit a job which will run on Solaris 7 or Solaris 8:
%bsub -R "sol7 || sol8" myjob
The following command runs the job called myjob on an HP-UX host that is lightly loaded (CPU utilization) and has at least 15 MB of swap memory available.
%bsub -R "swp > 15 && hpux order[cpu]" myjob
You configured a static shared resource for licenses for the Verilog application as a resource called verilog_lic. To submit a job that will run on a host when there is a license available:
%bsub -R "select[defined(verilog_lic)] rusage[verilog_lic=1]" myjob
The following job requests 20 MB memory for the duration of the job, and 1 license for 2 minutes:
%bsub -R "rusage[mem=20, license=1:duration=2]" myjob
The following job requests 20 MB of memory and 50 MB of swap space for 1 hour, and 1 license for 2 minutes:
%bsub -R "rusage[mem=20:swap=50:duration=1h, license=1:duration=2]" myjob
The following job requests 50 MB of swap space, linearly decreasing the amount reserved over a duration of 2 hours, and requests 1 license for 2 minutes:
%bsub -R "rusage[swp=50:duration=2h:decay=1, license=1:duration=2]" myjob
The following job requests two resources with the same duration but different decay:
%bsub -R "rusage[mem=20:duration=30:decay=1, lic=1:duration=30]" myjob
-sla service_class_name
Specifies the service class where the job is to run.
If the SLA does not exist or the user is not a member of the service class, the job is rejected.
You cannot use -sla with -g. A job can be attached to either a job group or a service class, but not both.
You should submit your jobs with a run time limit (-W option) or the queue should specify a run time limit (RUNLIMIT in the queue definition in lsb.queues). If you do not specify a run time limit, LSF automatically adjusts the optimum number of running jobs according to the observed run time of finished jobs.
Use bsla to display the properties of service classes configured in LSB_CONFDIR/cluster_name/configdir/lsb.serviceclasses (see lsb.serviceclasses(5)) and dynamic information about the state of each service class.
-sp priority
Specifies a user-assigned job priority, which allows users to order their jobs in a queue. Valid values for priority are any integers between 1 and MAX_USER_PRIORITY (displayed by bparams -l). Invalid job priorities are rejected. LSF and queue administrators can specify priorities beyond MAX_USER_PRIORITY.
The job owner can change the priority of their own jobs. LSF and queue administrators can change the priority of all jobs in a queue.
Job order is the first consideration to determine job eligibility for dispatch. Jobs are still subject to all scheduling policies regardless of job priority. Jobs with the same priority are ordered first come first served.
User-assigned job priority can be configured with automatic job priority escalation to automatically increase the priority of jobs that have been pending for a specified period of time.
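For example (the priority value and job name are hypothetical), to submit a job with a user-assigned priority of 100:

```shell
# Assigns user priority 100 to the job; 100 must be between 1 and
# MAX_USER_PRIORITY (shown by bparams -l). Values here are hypothetical.
bsub -sp 100 myjob
```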
-S stack_limit
Sets a per-process (soft) stack segment size limit for each of the processes that belong to the batch job (see
getrlimit
(2)). The limit is specified in KB.-t [[month:]day:]hour:minute
Specifies the job termination deadline.
If a UNIX job is still running at the termination time, the job is sent a SIGUSR2 signal, and is killed if it does not terminate within ten minutes.
If a Windows job is still running at the termination time, it is killed immediately. (For a detailed description of how these jobs are killed, see bkill.)
In the queue definition, a TERMINATE action can be configured to override the bkill default action (see the JOB_CONTROLS parameter in lsb.queues(5)).
The format for the termination time is [[month:]day:]hour:minute, where the number ranges are as follows: month 1-12, day 1-31, hour 0-23, minute 0-59.
At least two fields must be specified. These fields are assumed to be hour:minute. If three fields are given, they are assumed to be day:hour:minute, and four fields are assumed to be month:day:hour:minute.
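The field count alone determines the interpretation. A small sketch (not part of bsub; the time value is hypothetical) of that rule:

```shell
# Interpret a termination time string by counting colon-separated
# fields, as described above. "31:22:15" is a hypothetical value.
t="31:22:15"
n=$(echo "$t" | awk -F: '{print NF}')
case $n in
  2) echo "hour:minute" ;;
  3) echo "day:hour:minute" ;;
  4) echo "month:day:hour:minute" ;;
esac
```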
-T thread_limit
Sets the limit of the number of concurrent threads to thread_limit for the whole job. The default is no limit.
Exceeding the limit causes the job to terminate. The system sends the following signals in sequence to all processes belonging to the job: SIGINT, SIGTERM, and SIGKILL.
-U reservation_ID
If an advance reservation has been created with the brsvadd command, the -U option makes use of the reservation.
For example, if the following command was used to create the reservation user1#0:
% brsvadd -n 1024 -m hostA -u user1 -b 13:0 -e 18:0
Reservation "user1#0" is created
the following command uses the reservation:
%bsub -U user1#0 myjob
The job can only use hosts reserved by the reservation user1#0. LSF only selects hosts in the reservation. You can use the -m option to specify particular hosts within the list of hosts reserved by the reservation, but you cannot specify other hosts not included in the original reservation.
A job can only use one reservation. There is no restriction on the number of jobs that can be submitted to a reservation; however, the number of slots available on the hosts in the reservation may run out. For example, reservation user2#0 reserves 128 slots on hostA. When all 128 slots on hostA are used by jobs referencing user2#0, hostA is no longer available to other jobs using reservation user2#0.
Jobs referencing the reservation are killed when the reservation expires. LSF administrators can prevent running jobs from being killed when the reservation expires by changing the termination time of the job using the reservation (bmod -t) before the reservation window closes.
To use an advance reservation on a remote host, submit the job and specify the remote advance reservation ID. For example:
bsub -U user1#01@cluster1
In this example, we assume the default queue is configured to forward jobs to the remote cluster.
-u mail_user
Sends mail to the specified email destination.
-v swap_limit
Sets the total process virtual memory limit to swap_limit in KB for the whole job. The default is no limit. Exceeding the limit causes the job to terminate.
-w 'dependency_expression'
LSF will not place your job unless the dependency expression evaluates to TRUE. If you specify a dependency on a job that LSF cannot find (such as a job that has not yet been submitted), your job submission fails.
The dependency expression is a logical expression composed of one or more dependency conditions. To make a dependency expression of multiple conditions, use the following logical operators:
&& (AND)
|| (OR)
! (NOT)
Use parentheses to indicate the order of operations, if necessary.
Enclose the dependency expression in single quotes (') to prevent the shell from interpreting special characters (space, any logic operator, or parentheses). If you use single quotes for the dependency expression, use double quotes for quoted items within it, such as job names.
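For example (the job IDs and job names are hypothetical), a compound dependency expression combining operators and parentheses, with the whole expression in single quotes and a double-quoted job name that begins with a number:

```shell
# myjob starts only after job 312 is DONE and either Job2 has started
# or job "99Job" has exited. All IDs and names here are hypothetical.
bsub -w 'done(312) && (started(Job2) || exit("99Job"))' myjob
```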
In dependency conditions, job names specify only your own jobs, unless you are an LSF administrator. By default, if you use the job name to specify a dependency condition, and more than one of your jobs has the same name, all of your jobs that have that name must satisfy the test. If JOB_DEP_LAST_SUB in lsb.params is set to 1, the test is done on the job submitted most recently. Use double quotes (") around job names that begin with a number. In the job name, specify the wildcard character asterisk (*) at the end of a string to indicate all jobs whose names begin with the string. For example, the job name jobA* specifies jobs named jobA, jobA1, jobA_test, jobA.log, etc.
Use the * with dependency conditions to define one-to-one dependency among job array elements such that each element of one array depends on the corresponding element of another array. The job array sizes must be identical. For example,
bsub -w "done(myarrayA[*])" -J "myArrayB[1-10]" myJob2
indicates that before element 1 of myArrayB can start, element 1 of myArrayA must be completed, and so on.
You can also use the * to establish one-to-one array element dependencies with bmod after an array has been submitted.
If you want to specify array dependency by array name, set JOB_DEP_LAST_SUB in lsb.params. If you do not have this parameter set, the job is rejected if one of your previous arrays has the same name but a different index.
In dependency conditions, the variable op represents one of the following relational operators:
>
>=
<
<=
==
!=
Use the following conditions to form the dependency expression.
done(job_ID | "job_name" ...)
The job state is DONE.
LSF refers to the oldest job of job_name in memory.
ended(job_ID | "job_name")
The job state is EXIT or DONE.
exit(job_ID | "job_name" [,[operator] exit_code])
The job state is EXIT, and the job's exit code satisfies the comparison test.
If you specify an exit code with no operator, the test is for equality (== is assumed).
If you specify only the job, any exit code satisfies the test.
external(job_ID | "job_name", "status_text")
The job has the specified job status.
If you specify the first word of the message description (no spaces), the text of the job's status begins with the specified word. Only the first word is evaluated.
job_ID | "job_name"
If you specify a job without a dependency condition, the test is for the DONE state (LSF assumes the "done" dependency condition by default).
numdone(job_ID, operator number | *)
For a job array, the number of jobs in the DONE state satisfies the test. Use * (with no operator) to specify all the jobs in the array.
numended(job_ID, operator number | *)
For a job array, the number of jobs in the DONE or EXIT states satisfies the test. Use * (with no operator) to specify all the jobs in the array.
numexit(job_ID, operator number | *)
For a job array, the number of jobs in the EXIT state satisfies the test. Use * (with no operator) to specify all the jobs in the array.
numhold(job_ID, operator number | *)
For a job array, the number of jobs in the PSUSP state satisfies the test. Use * (with no operator) to specify all the jobs in the array.
numpend(job_ID, operator number | *)
For a job array, the number of jobs in the PEND state satisfies the test. Use * (with no operator) to specify all the jobs in the array.
numrun(job_ID, operator number | *)
For a job array, the number of jobs in the RUN state satisfies the test. Use * (with no operator) to specify all the jobs in the array.
numstart(job_ID, operator number | *)
For a job array, the number of jobs in the RUN, USUSP, or SSUSP states satisfies the test. Use * (with no operator) to specify all the jobs in the array.
post_done(job_ID | "job_name")
The job state is POST_DONE (the post-processing of the specified job has completed without errors).
post_err(job_ID | "job_name")
The job state is POST_ERR (the post-processing of the specified job has completed with errors).
started(job_ID | "job_name")
The job state is:
- RUN, DONE, or EXIT
- PEND or PSUSP, and the job has a pre-execution command (bsub -E) that is running.
-wa '[signal | command | CHKPNT]'
Specifies the job action to be taken before a job control action occurs.
A job warning action must be specified with a job action warning time in order for job warning to take effect.
If -wa is specified, LSF sends the warning action to the job before the actual control action is taken. This allows the job time to save its result before being terminated by the job control action.
You can specify actions similar to the JOB_CONTROLS queue level parameter: send a signal, invoke a command, or checkpoint the job.
The warning action specified by the -wa option overrides JOB_WARNING_ACTION in the queue. JOB_WARNING_ACTION is used as the default when no command line option is specified.
For example, the following specifies that 2 minutes before the job reaches its run time limit, an URG signal is sent to the job:
% bsub -W 60 -wt '2' -wa 'URG' myjob
-wt '[hours:]minutes'
Specifies the amount of time before a job control action occurs that a job warning action is to be taken. Job action warning time is not normalized.
A job action warning time must be specified with a job warning action in order for job warning to take effect.
The warning time specified by the bsub -wt option overrides JOB_ACTION_WARNING_TIME in the queue. JOB_ACTION_WARNING_TIME is used as the default when no command line option is specified.
For example, the following specifies that 2 minutes before the job reaches its run time limit, an URG signal is sent to the job:
% bsub -W 60 -wt '2' -wa 'URG' myjob
-W [hours:]minutes[/host_name | /host_model]
Sets the run time limit of the batch job. If a UNIX job runs longer than the specified run limit, the job is sent a SIGUSR2 signal, and is killed if it does not terminate within ten minutes. If a Windows job runs longer than the specified run limit, it is killed immediately. (For a detailed description of how these jobs are killed, see bkill.)
In the queue definition, a TERMINATE action can be configured to override the bkill default action (see the JOB_CONTROLS parameter in lsb.queues(5)).
The run limit is in the form [hours:]minutes. The minutes can be specified as a number greater than 59. For example, three and a half hours can be specified either as 3:30 or as 210.
The run limit you specify is the normalized run time. This is done so that the job does approximately the same amount of processing, even if it is sent to a host with a faster or slower CPU. Whenever a normalized run time is given, the actual time on the execution host is the specified time multiplied by the CPU factor of the normalization host, then divided by the CPU factor of the execution host.
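As a worked sketch of that normalization (the CPU factors are hypothetical): with a normalization-host CPU factor of 8 and an execution-host factor of 4, a normalized run limit of 60 minutes allows 60 * 8 / 4 = 120 minutes of actual run time on the slower execution host:

```shell
# actual = limit * cpu_factor(normalization host) / cpu_factor(execution host)
# CPU factors below are hypothetical; see lsinfo(1) for real values.
limit=60
norm_factor=8
exec_factor=4
echo $(( limit * norm_factor / exec_factor ))   # prints 120
```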
If ABS_RUNLIMIT=Y is defined in lsb.params, the run time limit is not normalized by the host CPU factor. Absolute wall-clock run time is used for all jobs submitted with a run limit.
Optionally, you can supply a host name or a host model name defined in LSF. You must insert `/' between the run limit and the host name or model name. (See lsinfo(1) to get host model information.)
If no host or host model is given, LSF uses the default run time normalization host defined at the queue level (DEFAULT_HOST_SPEC in lsb.queues) if it has been configured; otherwise, LSF uses the default CPU time normalization host defined at the cluster level (DEFAULT_HOST_SPEC in lsb.params) if it has been configured; otherwise, LSF uses the submission host.
If the job also has a termination time specified through the bsub -t option, LSF determines whether the job can actually run for the length of time allowed by the run limit before the termination time. If not, the job is aborted.
If the IGNORE_DEADLINE parameter is set in lsb.queues(5), this behavior is overridden and the run limit is ignored.
Jobs submitted to a chunk job queue are not chunked if the run limit is greater than 30 minutes.
-Zs
Spools a job command file to the directory specified by the JOB_SPOOL_DIR parameter in lsb.params, and uses the spooled file as the command file for the job.
By default, the command file is spooled to LSB_SHAREDIR/cluster_name/lsf_cmddir. If the lsf_cmddir directory does not exist, LSF creates it before spooling the file. LSF removes the spooled file when the job completes.
If JOB_SPOOL_DIR in lsb.params is specified, the -is option spools the command file to the specified directory and uses the spooled file as the input file for the job.
JOB_SPOOL_DIR must be readable and writable by the job submission user, and it must be shared by the master host and the submission host. If the specified directory is not accessible or does not exist, bsub -is cannot write to the default directory LSB_SHAREDIR/cluster_name/lsf_cmddir and the job will fail.
The -Zs option is not supported for embedded job commands because LSF is unable to determine the first command to be spooled in an embedded job command.
-h
Prints command usage to stderr and exits.
-V
Prints LSF release version to stderr and exits.
command [argument]
The job can be specified by a command line argument command, or through the standard input if the command is not present on the command line. The command can be anything that is provided to a UNIX Bourne shell (see sh(1)). command is assumed to begin with the first word that is not part of a bsub option. All arguments that follow command are provided as the arguments to the command.
If the batch job is not given on the command line, bsub reads the job commands from standard input. If the standard input is a controlling terminal, the user is prompted with bsub> for the commands of the job. The input is terminated by entering CTRL-D on a new line. You can submit multiple commands through standard input. The commands are executed in the order in which they are given.
bsub options can also be specified in the standard input if the line begins with #BSUB; e.g., #BSUB -x. If an option is given on both the bsub command line and in the standard input, the command line option overrides the option in the standard input. The user can specify the shell to run the commands by specifying the shell path name in the first line of the standard input, such as #!/bin/csh. If the shell is not given in the first line, the Bourne shell is used. The standard input facility can be used to spool a user's job script, as in bsub < script.
.See EXAMPLES below for examples of specifying commands through standard input.
OUTPUT
If the job is successfully submitted, displays the job ID and the queue to which the job has been submitted.
EXAMPLES
% bsub sleep 100
Submit the UNIX command sleep together with its argument 100 as a batch job.

% bsub -q short -o my_output_file "pwd; ls"
Submit the UNIX commands pwd and ls as a batch job to the queue named short and store the job output in my_output_file.

% bsub -m "host1 host3 host8 host9" my_program
Submit my_program to run on one of the candidate hosts: host1, host3, host8, and host9.

% bsub -q "queue1 queue2 queue3" -c 5 my_program
Submit my_program to one of the candidate queues: queue1, queue2, and queue3, which are selected according to the CPU time limit specified by -c 5.

% bsub -I ls
Submit a batch interactive job which displays the output of ls at the user's terminal.

% bsub -Ip vi myfile
Submit a batch interactive job to edit myfile.

% bsub -Is csh
Submit a batch interactive job that starts csh as an interactive shell.

% bsub -b 20:00 -J my_job_name my_program
Submit my_program to run after 8 p.m. and assign it the job name my_job_name.

% bsub my_script
Submit my_script as a batch job. Since my_script is specified as a command line argument, the my_script file is not spooled. Later changes to the my_script file before the job completes may affect this job.

% bsub < default_shell_script
where default_shell_script contains:
sim1.exe
sim2.exe
The file default_shell_script is spooled, and the commands are run under the Bourne shell since a shell specification is not given in the first line of the script.

% bsub < csh_script
where csh_script contains:
#!/bin/csh
sim1.exe
sim2.exe
csh_script is spooled and the commands are run under /bin/csh.

% bsub -q night < my_script
where my_script contains:
#!/bin/sh
#BSUB -q test
#BSUB -o outfile -e errfile   # my default stdout, stderr files
#BSUB -m "host1 host2"        # my default candidate hosts
#BSUB -f "input > tmp" -f "output << tmp"
#BSUB -D 200 -c 10/host1
#BSUB -t 13:00
#BSUB -k "dir 5"
sim1.exe
sim2.exe
The job is submitted to the night queue instead of test, because the command line overrides the script.

% bsub -b 20:00 -J my_job_name
bsub> sleep 1800
bsub> my_program
bsub> CTRL-D
The job commands are entered interactively.

% bsub -T 4 myjob
Submits myjob with a maximum number of concurrent threads of 4.

% bsub -W 15 -sla Kyuquot sleep 100
Submit the UNIX command sleep together with its argument 100 as a batch job to the service class named Kyuquot.

LIMITATIONS
When using account mapping, the command bpeek(1) does not work. File transfer via the -f option to bsub(1) requires rcp(1) to be working between the submission and execution hosts. Use the -N option to request mail, and/or the -o and -e options to specify an output file and error file, respectively.
SEE ALSO
bjobs(1), bkill(1), bqueues(1), bhosts(1), bmgroup(1), bmod(1), bchkpnt(1), brestart(1), bgadd(1), bgdel(1), bjgroup(1), sh(1), getrlimit(2), sbrk(2), libckpt.a(3), lsb.users(5), lsb.queues(5), lsb.params(5), lsb.hosts(5), lsb.serviceclasses(5), mbatchd(8)
Date Modified: February 24, 2004
Platform Computing: www.platform.com
Platform Support: support@platform.com
Platform Information Development: doc@platform.com
Copyright © 1994-2004 Platform Computing Corporation. All rights reserved.