


lsb.resources


The lsb.resources file contains configuration information for resource allocation limits, exports, and resource usage limits. This file is optional.

The lsb.resources file is stored in the directory LSB_CONFDIR/cluster_name/configdir, where LSB_CONFDIR is defined in lsf.conf.



Limit Section

Description

Sets limits on the maximum amount of the specified resources that must be available for different classes of jobs to start, and defines which resource consumers the limits apply to. Limits are enforced during job resource allocation.


For limits to be enforced, jobs must specify rusage resource requirements (bsub -R or RES_REQ in lsb.queues).

The blimits command displays the current usage of resource allocation limits configured in Limit sections in lsb.resources.

Limit section structure

Each set of limits is defined in a Limit section enclosed by Begin Limit and End Limit. A Limit section sets limits on how much of each resource must be available for different classes of jobs to start.

A Limit section has two formats: vertical tabular and horizontal.

The file can contain sections in both formats. In either format, you must configure a limit for at least one resource. The Limit section cannot be empty.

Vertical tabular format

Use the vertical format for simple configuration conditions involving only a few consumers and resource limits.

The first row consists of keywords identifying the resource limits and the resource consumers to which they apply.

Each subsequent row describes the configuration information for resource consumers and the limits that apply to them. Each line must contain an entry for each keyword. Use empty parentheses () or a dash (-) to specify the default value for an entry. Fields cannot be left blank. For resources, the default is no limit; for consumers, the default is all consumers.

Horizontal format

Use the horizontal format to give a name for your limits and to configure more complicated combinations of consumers and resource limits.

The first line of the Limit section gives the name of the limit configuration.

Each subsequent line in the Limit section consists of a keyword identifying a resource limit (SLOTS, SLOTS_PER_PROCESSOR, MEM, SWP, TMP, LICENSE, or RESOURCE) or the resource consumers to which the limits apply (USERS, PER_USER, QUEUES, PER_QUEUE, HOSTS, PER_HOST, PROJECTS, or PER_PROJECT).

Example Limit sections

Vertical tabular format

In the following limit configuration, user1 is limited to 2 job slots on hostA, and jobs from user2 on queue normal are limited to 20 MB of memory:

Begin Limit
USERS       QUEUES        HOSTS     SLOTS  MEM   SWP  TMP
user1       -             hostA     2      -      -    -
user2       normal        -         -      20     -    -
End Limit

Jobs that do not match these limits (that is, jobs from all users except user1 running on hostA, and jobs from all users except user2 submitted to queue normal) have no limits.

Horizontal format

All users in user group ugroup1 except user1 using queue1 and queue2 and running jobs on hosts in host group hgroup1 are limited to 2 job slots per processor on each host:

Begin Limit
# ugroup1 except user1 uses queue1 and queue2 with 2 job slots
# on each host in hgroup1
NAME          = limit1
# Resources
SLOTS_PER_PROCESSOR = 2
#Consumers
QUEUES       = queue1 queue2
USERS        = ugroup1 ~user1
PER_HOST     = hgroup1
End Limit

Compatibility with lsb.queues, lsb.users, and lsb.hosts

The Limit section of lsb.resources does not support the keywords or format used in lsb.users, lsb.hosts, and lsb.queues. However, your existing job slot limit configuration in these files will continue to apply.

Job slot limits are the only type of limit you can configure in lsb.users, lsb.hosts, and lsb.queues. You cannot configure limits for user groups, host groups, and projects in lsb.users, lsb.hosts, and lsb.queues. You should not configure any new resource allocation limits in lsb.users, lsb.hosts, and lsb.queues. Use lsb.resources to configure all new resource allocation limits, including job slot limits.

Existing limits in lsb.users, lsb.hosts, and lsb.queues that have the same scope as a new limit in lsb.resources, but a different value, are ignored; the value of the new limit in lsb.resources is used. Similar limits with different scope enforce the most restrictive limit.
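For illustration (hostA and the limit values are hypothetical), suppose lsb.hosts sets a job slot limit of 5 for hostA (for example, with MXJ), and lsb.resources contains:

```
Begin Limit
HOSTS    SLOTS
hostA    10
End Limit
```

Because both limits have the same scope (job slots on hostA), the lsb.hosts value is ignored and the lsb.resources limit of 10 is enforced.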

HOSTS

Syntax

HOSTS = [~]host_name | [~]host_group | all ...

HOSTS ( [-] | [~]host_name | [~]host_group | all ... )

Description

A space-separated list of hosts or host groups (defined in lsb.hosts) on which limits are enforced. Limits are enforced on all hosts or host groups listed.

If a group contains a subgroup, the limit also applies to each member in the subgroup recursively.

To specify a per-host limit, use the PER_HOST keyword. Do not configure HOSTS and PER_HOST limits in the same Limit section.

If you specify MEM, TMP, or SWP as a percentage, you must specify PER_HOST and list the hosts that the limit is to be enforced on. You cannot specify HOSTS.

In horizontal format, use only one HOSTS line per Limit section.

Use the keyword all to configure limits that apply to all hosts in a cluster. This is useful if you have a large cluster but only want to exclude a few hosts from the limit definition.

Use the not operator (~) to exclude hosts or host groups from the list of hosts to which the limits apply.

In vertical format, multiple names must be enclosed in parentheses.

In vertical format, use empty parentheses () or a dash (-) to indicate all hosts. Fields cannot be left blank.

Default

all (limits are enforced on all hosts in the cluster).

Example 1

HOSTS = Group1 ~hostA hostB hostC

Enforces limits on hostB, hostC, and all hosts in Group1 except for hostA.

Example 2

HOSTS = all ~group2 ~hostA

Enforces limits on all hosts in the cluster, except for hostA and the hosts in group2.

Example 3

HOSTS                        SWP
(all ~hostK ~hostM)           10

Enforces a 10 MB swap limit on all hosts in the cluster, except for hostK and hostM.

LICENSE

Syntax

LICENSE = [license_name,integer] [[license_name,integer] ...]

LICENSE ( [license_name,integer] [[license_name,integer] ...] )

Description

Maximum number of specified software licenses available to resource consumers. The value must be an integer greater than or equal to zero.

Software licenses must be defined as decreasing numeric shared resources in lsf.shared.

The RESOURCE keyword is a synonym for the LICENSE keyword. You cannot specify RESOURCE and LICENSE in the same Limit section.

In horizontal format, use only one LICENSE line per Limit section.

In vertical format, multiple entries must be enclosed in parentheses.

In vertical format, use empty parentheses () or a dash (-) to indicate the default value (no limit). Fields cannot be left blank.

Default

None

Example

LICENSE = [verilog,4] [spice,2]

MEM

Syntax

MEM = integer[%]

MEM ( - | integer[%] )

Description

Maximum amount of memory available to resource consumers. Specify a value in MB, or a percentage (%), as a positive integer greater than or equal to 0. If you specify a percentage, you must also specify PER_HOST and list the hosts that the limit is to be enforced on.

The Limit section is ignored if MEM is specified as a percentage either with HOSTS or without PER_HOST.

In horizontal format, use only one MEM line per Limit section.

In vertical format, use empty parentheses () or a dash (-) to indicate the default value (no limit). Fields cannot be left blank.

If only QUEUES are configured in the Limit section, MEM must be an integer value. MEM is the maximum amount of memory available to the listed queues for any hosts, users, or projects.

If only USERS are configured in the Limit section, MEM must be an integer value. MEM is the maximum amount of memory that the users or user groups can use on any hosts, queues, or projects.

If only HOSTS are configured in the Limit section, MEM must be an integer value. It cannot be a percentage. MEM is the maximum amount of memory available to the listed hosts for any users, queues, or projects.

If only PROJECTS are configured in the Limit section, MEM must be an integer value. MEM is the maximum amount of memory available to the listed projects for any users, queues, or hosts.

Use QUEUES or PER_QUEUE, USERS or PER_USER, HOSTS or PER_HOST, and PROJECTS or PER_PROJECT in combination to further limit memory available to resource consumers.
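The percentage rule above can be sketched as follows (hostA and hostB are hypothetical host names):

```
Begin Limit
NAME     = mem_limit
# MEM as a percentage requires PER_HOST; specifying HOSTS instead
# would cause this Limit section to be ignored
MEM      = 40%
PER_HOST = hostA hostB
End Limit
```

The percentage is applied per host, which is why PER_HOST is required: jobs on each of hostA and hostB are limited to 40% of that host's memory.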

Default

No limit

Example

MEM = 20

NAME

Syntax

NAME = text

Description

Required. Name of the Limit section.

Specify any ASCII string 40 characters or less. You can use letters, digits, underscores (_) or dashes (-). You cannot use blank spaces.

Format

Horizontal only

Default

None. You must provide a name for the Limit section.

Example

NAME = short_limits

PER_HOST

Syntax

PER_HOST = [~]host_name | [~]host_group | all ...

PER_HOST ( [-] | [~]host_name | [~]host_group | all ... )

Description

A space-separated list of hosts or host groups (defined in lsb.hosts) on which limits are enforced. Limits are enforced individually on each host listed, and on each member of each host group listed. If a group contains a subgroup, the limit also applies to each member in the subgroup recursively.

Do not configure PER_HOST and HOSTS limits in the same Limit section.

In horizontal format, use only one PER_HOST line per Limit section.

If you specify MEM, TMP, or SWP as a percentage, you must specify PER_HOST and list the hosts that the limit is to be enforced on. You cannot specify HOSTS.

Use the keyword all to configure limits that apply to each host in a cluster. If host groups are configured, the limit applies to each member of the host group, not the group as a whole. This is useful if you have a large cluster but only want to exclude a few hosts from the limit definition.

Use the not operator (~) to exclude hosts or host groups from the list of hosts to which the limits apply.

In vertical format, multiple names must be enclosed in parentheses.

In vertical format, use empty parentheses () or a dash (-) to indicate each host or host group member. Fields cannot be left blank.

Default

None. If no limit is specified for PER_HOST or HOSTS, no limit is enforced on any host or host group.

Example

PER_HOST = hostA hgroup1 ~hostC

PER_PROJECT

Syntax

PER_PROJECT = [~]project_name [ project_name ...] | all

PER_PROJECT ( [-] | [~]project_name [ project_name ...] | all )

Description

A space-separated list of project names on which limits are enforced. Limits are enforced on each project listed.

Do not configure PER_PROJECT and PROJECTS limits in the same Limit section.

In horizontal format, use only one PER_PROJECT line per Limit section.

Use the keyword all to configure limits that apply to each project in a cluster. This is useful if you have a large number of projects but only want to exclude a few projects from the limit definition.

Use the not operator (~) to exclude projects from the list of projects to which the limits apply.

In vertical format, multiple names must be enclosed in parentheses.

In vertical format, use empty parentheses () or a dash (-) to indicate each project. Fields cannot be left blank.

Default

None. If no limit is specified for PER_PROJECT or PROJECTS, no limit is enforced on any project.

Example

PER_PROJECT = proj1 proj2

PER_QUEUE

Syntax

PER_QUEUE = [~]queue_name [ queue_name ...] | all

PER_QUEUE ( [-] | [~]queue_name [ queue_name ...] | all )

Description

A space-separated list of queue names on which limits are enforced. Limits are enforced on jobs submitted to each queue listed.

Do not configure PER_QUEUE and QUEUES limits in the same Limit section.

In horizontal format, use only one PER_QUEUE line per Limit section.

Use the keyword all to configure limits that apply to each queue in a cluster. This is useful if you have a large number of queues but only want to exclude a few queues from the limit definition.

Use the not operator (~) to exclude queues from the list of queues to which the limits apply.

In vertical format, multiple names must be enclosed in parentheses.

In vertical format, use empty parentheses () or a dash (-) to indicate each queue. Fields cannot be left blank.

Default

None. If no limit is specified for PER_QUEUE or QUEUES, no limit is enforced on any queue.

Example

PER_QUEUE = priority night

PER_USER

Syntax

PER_USER = [~]user_name | [~]user_group ... | all

PER_USER ( [-] | [~]user_name | [~]user_group | all ... )

Description

A space-separated list of user names or user groups on which limits are enforced. Limits are enforced on each user or individually to each user in the user group listed. If a user group contains a subgroup, the limit also applies to each member in the subgroup recursively.

User names must be valid login names. User group names can be LSF user groups or UNIX and Windows user groups.

Do not configure PER_USER and USERS limits in the same Limit section.

In horizontal format, use only one PER_USER line per Limit section.

Use the keyword all to configure limits that apply to each user in a cluster. If user groups are configured, the limit applies to each member of the user group, not the group as a whole. This is useful if you have a large number of users but only want to exclude a few users from the limit definition.

Use the not operator (~) to exclude users or user groups from the list of users to which the limits apply.

In vertical format, multiple names must be enclosed in parentheses.

In vertical format, use empty parentheses () or a dash (-) to indicate each user or user group member. Fields cannot be left blank.

Default

None. If no limit is specified for PER_USER or USERS, no limit is enforced on any user or user group.

Example

PER_USER = user1 user2 ugroup1 ~user3

PROJECTS

Syntax

PROJECTS = [~]project_name [ project_name ...] | all

PROJECTS ( [-] | [~]project_name [ project_name ...] | all )

Description

A space-separated list of project names on which limits are enforced. Limits are enforced on all projects listed.

To specify a per-project limit, use the PER_PROJECT keyword. Do not configure PROJECTS and PER_PROJECT limits in the same Limit section.

In horizontal format, use only one PROJECTS line per Limit section.

Use the keyword all to configure limits that apply to all projects in a cluster. This is useful if you have a large number of projects but only want to exclude a few projects from the limit definition.

Use the not operator (~) to exclude projects from the list of projects to which the limits apply.

In vertical format, multiple names must be enclosed in parentheses.

In vertical format, use empty parentheses () or a dash (-) to indicate all projects. Fields cannot be left blank.

Default

all (limits are enforced on all projects in the cluster)

Example

PROJECTS = projA projB

QUEUES

Syntax

QUEUES = [~]queue_name [ queue_name ...] | all

QUEUES ( [-] | [~]queue_name [ queue_name ...] | all )

Description

A space-separated list of queue names on which limits are enforced. Limits are enforced on all queues listed.

The list must contain valid queue names defined in lsb.queues.

To specify a per-queue limit, use the PER_QUEUE keyword. Do not configure QUEUES and PER_QUEUE limits in the same Limit section.

In horizontal format, use only one QUEUES line per Limit section.

Use the keyword all to configure limits that apply to all queues in a cluster. This is useful if you have a large number of queues but only want to exclude a few queues from the limit definition.

Use the not operator (~) to exclude queues from the list of queues to which the limits apply.

In vertical format, multiple names must be enclosed in parentheses.

In vertical format, use empty parentheses () or a dash (-) to indicate all queues. Fields cannot be left blank.

Default

all (limits are enforced on all queues in the cluster)

Example

QUEUES = normal night

RESOURCE

Syntax

RESOURCE = [shared_resource,integer] [[shared_resource,integer] ...]

RESOURCE ( [shared_resource,integer] [[shared_resource,integer] ...] )

Description

Maximum amount of any user-defined shared resource available to consumers.

The RESOURCE keyword is a synonym for the LICENSE keyword. You can use RESOURCE to configure software licenses. You cannot specify RESOURCE and LICENSE in the same Limit section.

In horizontal format, use only one RESOURCE line per Limit section.

In vertical format, multiple entries must be enclosed in parentheses.

In vertical format, use empty parentheses () or a dash (-) to indicate the default value (no limit). Fields cannot be left blank.

Default

None

Example

RESOURCE = [stat_shared,4]

SLOTS

Syntax

SLOTS = integer

SLOTS ( - | integer )

Description

Maximum number of job slots available to resource consumers. Specify a positive integer greater than or equal to 0.

With the MultiCluster resource lease model, this limit applies only to local hosts being used by the local cluster. The job slot limit for hosts exported to a remote cluster is determined by the host export policy, not by this parameter. The job slot limit for borrowed hosts is determined by the host export policy of the remote cluster.

If HOSTS are configured in the Limit section, SLOTS sets the limit so that the number of running and suspended jobs on a host cannot exceed the number of job slots. If preemptive scheduling is used, suspended jobs are not counted as using a job slot.

To fully use the CPU resource on multiprocessor hosts, make the number of job slots equal to or greater than the number of processors.

Use this parameter to prevent a host from being overloaded with too many jobs, and to maximize the throughput of a machine.

If only QUEUES are configured in the Limit section, SLOTS is the maximum number of job slots available to the listed queues for any hosts, users, or projects.

If only USERS are configured in the Limit section, SLOTS is the maximum number of job slots that the users or user groups can use on any hosts, queues, or projects.

If only HOSTS are configured in the Limit section, SLOTS is the maximum number of job slots that are available to the listed hosts for any users, queues, or projects.

If only PROJECTS are configured in the Limit section, SLOTS is the maximum number of job slots that are available to the listed projects for any users, queues, or hosts.

Use QUEUES or PER_QUEUE, USERS or PER_USER, HOSTS or PER_HOST, and PROJECTS or PER_PROJECT in combination to further limit job slots per processor available to resource consumers.
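As a sketch of combining consumers (the queue name normal is assumed to exist in lsb.queues):

```
Begin Limit
NAME     = slot_limit
SLOTS    = 10
PER_USER = all
QUEUES   = normal
End Limit
```

Each user is individually limited to 10 job slots in queue normal.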

In horizontal format, use only one SLOTS line per Limit section.

In vertical format, use empty parentheses () or a dash (-) to indicate the default value (no limit). Fields cannot be left blank.

Default

No limit

Example

SLOTS = 20

SLOTS_PER_PROCESSOR

Syntax

SLOTS_PER_PROCESSOR = number

SLOTS_PER_PROCESSOR ( - | number )

Description

Per processor job slot limit, based on the number of processors on each host affected by the limit.

Maximum number of job slots that each resource consumer can use per processor. This job slot limit is configured per processor so that multiprocessor hosts will automatically run more jobs.

You must also specify PER_HOST and list the hosts that the limit is to be enforced on. The Limit section is ignored if SLOTS_PER_PROCESSOR is specified either with HOSTS or without PER_HOST.

In vertical format, use empty parentheses () or a dash (-) to indicate the default value (no limit). Fields cannot be left blank.

To fully use the CPU resource on multiprocessor hosts, make the number of job slots equal to or greater than the number of processors.

Use this parameter to prevent a host from being overloaded with too many jobs, and to maximize the throughput of a machine.

This number can be a fraction such as 0.5, so that it can also serve as a per-CPU limit on multiprocessor machines. The total job slot limit for a host is rounded up to the nearest integer. For example, if SLOTS_PER_PROCESSOR is 0.5, users can use up to 2 job slots at any time on a 4-CPU multiprocessor host, and 1 job slot on a single-processor machine.

If the number of CPUs in a host changes dynamically, mbatchd adjusts the maximum number of job slots per host accordingly. Allow mbatchd up to 10 minutes to get the number of CPUs for a host; during this period the number of CPUs is considered to be 1.

If only QUEUES and PER_HOST are configured in the Limit section, SLOTS_PER_PROCESSOR is the maximum number of job slots per processor available to the listed queues for any hosts, users, or projects.

If only USERS and PER_HOST are configured in the Limit section, SLOTS_PER_PROCESSOR is the maximum number of job slots per processor that the users or user groups can use on any hosts, queues, or projects.

If only PER_HOST is configured in the Limit section, SLOTS_PER_PROCESSOR is the maximum number of job slots per processor available to the listed hosts for any users, queues, or projects.

If only PROJECTS and PER_HOST are configured in the Limit section, SLOTS_PER_PROCESSOR is the maximum number of job slots per processor available to the listed projects for any users, queues, or hosts.

Use QUEUES or PER_QUEUE, USERS or PER_USER, PER_HOST, and PROJECTS or PER_PROJECT in combination to further limit job slots per processor available to resource consumers.
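The fractional example above can be written as a Limit section (PER_HOST = all applies the limit to each host individually):

```
Begin Limit
NAME                = frac_cpu_limit
# 0.5 slots per processor: rounds up to 2 slots on a 4-CPU host
# and to 1 slot on a single-CPU host
SLOTS_PER_PROCESSOR = 0.5
PER_HOST            = all
End Limit
```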

Default

No limit

Example

SLOTS_PER_PROCESSOR = 2

SWP

Syntax

SWP = integer[%]

SWP ( - | integer[%] )

Description

Maximum amount of swap space available to resource consumers. Specify a value in MB, or a percentage (%), as a positive integer greater than or equal to 0. If you specify a percentage, you must also specify PER_HOST and list the hosts that the limit is to be enforced on.

The Limit section is ignored if SWP is specified as a percentage either with HOSTS or without PER_HOST.

In horizontal format, use only one SWP line per Limit section.

In vertical format, use empty parentheses () or a dash (-) to indicate the default value (no limit). Fields cannot be left blank.

If only QUEUES are configured in the Limit section, SWP must be an integer value. SWP is the maximum amount of swap space available to the listed queues for any hosts, users, or projects.

If only USERS are configured in the Limit section, SWP must be an integer value. SWP is the maximum amount of swap space that the users or user groups can use on any hosts, queues, or projects.

If only HOSTS are configured in the Limit section, SWP must be an integer value. SWP is the maximum amount of swap space available to the listed hosts for any users, queues, or projects.

If only PROJECTS are configured in the Limit section, SWP must be an integer value. SWP is the maximum amount of swap space available to the listed projects for any users, queues, or hosts.

Use QUEUES or PER_QUEUE, USERS or PER_USER, HOSTS or PER_HOST, and PROJECTS or PER_PROJECT in combination to further limit swap space available to resource consumers.

Default

No limit

Example

SWP = 60

TMP

Syntax

TMP = integer[%]

TMP ( - | integer[%] )

Description

Maximum amount of tmp space available to resource consumers. Specify a value in MB, or a percentage (%), as a positive integer greater than or equal to 0. If you specify a percentage, you must also specify PER_HOST and list the hosts that the limit is to be enforced on.

The Limit section is ignored if TMP is specified as a percentage either with HOSTS or without PER_HOST.

In horizontal format, use only one TMP line per Limit section.

In vertical format, use empty parentheses () or a dash (-) to indicate the default value (no limit). Fields cannot be left blank.

If only QUEUES are configured in the Limit section, TMP must be an integer value. TMP is the maximum amount of tmp space available to the listed queues for any hosts, users, or projects.

If only USERS are configured in the Limit section, TMP must be an integer value. TMP is the maximum amount of tmp space that the users or user groups can use on any hosts, queues, or projects.

If only HOSTS are configured in the Limit section, TMP must be an integer value. TMP is the maximum amount of tmp space available to the listed hosts for any users, queues, or projects.

If only PROJECTS are configured in the Limit section, TMP must be an integer value. TMP is the maximum amount of tmp space available to the listed projects for any users, queues, or hosts.

Use QUEUES or PER_QUEUE, USERS or PER_USER, HOSTS or PER_HOST, and PROJECTS or PER_PROJECT in combination to further limit tmp space available to resource consumers.

Default

No limit

Example

TMP = 20%

USERS

Syntax

USERS = [~]user_name | [~]user_group ... | all

USERS ( [-] | [~]user_name | [~]user_group | all ... )

Description

A space-separated list of user names or user groups on which limits are enforced. Limits are enforced on all users or groups listed. Limits apply to a group as a whole.

If a group contains a subgroup, the limit also applies to each member in the subgroup recursively.

User names must be valid login names. User group names can be LSF user groups or UNIX and Windows user groups.

To specify a per-user limit, use the PER_USER keyword. Do not configure USERS and PER_USER limits in the same Limit section.

In horizontal format, use only one USERS line per Limit section.

Use the keyword all to configure limits that apply to all users or user groups in a cluster. This is useful if you have a large number of users but only want to exclude a few users or groups from the limit definition.

Use the not operator (~) to exclude users or groups from the list to which the limits apply.

In vertical format, multiple names must be enclosed in parentheses.

In vertical format, use empty parentheses () or a dash (-) to indicate all users or groups. Fields cannot be left blank.

Default

all (limits are enforced on all users in the cluster)

Example

USERS = user1 user2



HostExport Section

Description

Defines an export policy for a host or a group of related hosts. Defines how much of each host's resources are exported, and how the resources are distributed among the consumers.

Each export policy is defined in a separate HostExport section, so it is normal to have multiple HostExport sections in lsb.resources.

Example HostExport section

Begin HostExport
PER_HOST= hostA hostB
SLOTS= 4
DISTRIBUTION= [cluster1, 1] [cluster2, 3]
MEM= 100
SWAP= 100
End HostExport

HostExport section structure

Use empty parentheses ( ) or a dash (-) to specify the default value for an entry. Fields cannot be left blank.

PER_HOST

Syntax

PER_HOST=host_name...

Description

Required when exporting special hosts.

Determines which hosts to export. Specify one or more LSF hosts by name. Separate names by space.

RES_SELECT

Syntax

RES_SELECT=res_req

Description

Required when exporting workstations.

Determines which hosts to export. Specify the selection part of the resource requirement string (without quotes or parentheses), and LSF will automatically select hosts that meet the specified criteria. For this parameter, if you do not specify the required host type, the default is "type==any".

The selection criteria are evaluated only once, when a host is exported.
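A sketch of a workstation export policy, assuming LINUX86 is a host type and mem a load index defined in your cluster, and cluster2 is a hypothetical consumer cluster:

```
Begin HostExport
RES_SELECT   = type==LINUX86 && mem>512
NHOSTS       = 10
DISTRIBUTION = ([cluster2, 1])
End HostExport
```

This exports up to 10 hosts matching the selection criteria, all distributed to cluster2.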

NHOSTS

Syntax

NHOSTS=integer

Description

Required when exporting workstations.

Maximum number of hosts to export. If there are not this many hosts meeting the selection criteria, LSF exports as many as it can.

DISTRIBUTION

Syntax

DISTRIBUTION=([cluster_name, number_shares]...)

Description

Required. Specifies how the exported resources are distributed among consumer clusters.

The syntax for the distribution list is a series of share assignments. The syntax of each share assignment is the cluster name, a comma, and the number of shares, all enclosed in square brackets, as shown. Use a space to separate multiple share assignments. Enclose the full distribution list in a set of round brackets.

MEM

Syntax

MEM=megabytes

Description

Used when exporting special hosts. Specify the amount of memory to export on each host, in MB.

Default

- (provider and consumer clusters compete for available memory)

SLOTS

Syntax

SLOTS=integer

Description

Required when exporting special hosts. Specify the number of job slots to export on each host.

To avoid overloading a partially exported host, you can reduce the number of job slots in the configuration of the local cluster.

SWAP

Syntax

SWAP=megabytes

Description

Used when exporting special hosts. Specify the amount of swap space to export on each host, in MB.

Default

- (provider and consumer clusters compete for available swap space)

TYPE

Syntax

TYPE=shared

Description

Changes the lease type from exclusive to shared.

If you export special hosts with a shared lease (using PER_HOST), you cannot specify multiple consumer clusters in the distribution policy.

Default

Undefined (the lease type is exclusive; exported resources are never available to the provider cluster)



SharedResourceExport Section

Description

Optional. Requires HostExport section. Defines an export policy for a shared resource. Defines how much of the shared resource is exported, and the distribution among the consumers.

The shared resource must be available on hosts defined in the HostExport sections.

Example SharedResourceExport section

Begin SharedResourceExport
NAME= AppLicense
NINSTANCES= 10
DISTRIBUTION= [C1, 30] [C2, 70]
End SharedResourceExport

SharedResourceExport section structure

All parameters are required.

NAME

Syntax

NAME=shared_resource_name

Description

Shared resource to export. This resource must be available on the hosts that are exported to the specified clusters; you cannot export resources without hosts.

NINSTANCES

Syntax

NINSTANCES=integer

Description

Maximum quantity of the shared resource to export. If the total number available is less than the requested amount, LSF exports all that are available.

DISTRIBUTION

Syntax

DISTRIBUTION=([cluster_name, number_shares]...)

Description

Specifies how the exported resources are distributed among consumer clusters.

The syntax for the distribution list is a series of share assignments. The syntax of each share assignment is the cluster name, a comma, and the number of shares, all enclosed in square brackets, as shown. Use a space to separate multiple share assignments. Enclose the full distribution list in a set of round brackets.

cluster_name

Specify the name of a cluster allowed to use the exported resources.

number_shares

Specify a positive integer representing the number of shares of exported resources assigned to the cluster.

The number of shares assigned to a cluster is only meaningful when you compare it to the number assigned to other clusters, or to the total number. The total number of shares is the sum of all the shares assigned in each share assignment.
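For instance, using the distribution from the example above:

```
DISTRIBUTION = ([C1, 30] [C2, 70])
# Total shares: 30 + 70 = 100
# C1's entitlement: 30/100 = 30% of the exported resource
# C2's entitlement: 70/100 = 70% of the exported resource
```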



ResourceReservation section

Description

By default, only LSF administrators or root can add or delete advance reservations.

The ResourceReservation section defines an advance reservation policy. It specifies the users who can create reservations (USERS), the hosts on which they can create them (HOSTS), and the time window during which reservations can be created (TIME_WINDOW).

Each advance reservation policy is defined in a separate ResourceReservation section, so it is normal to have multiple ResourceReservation sections in lsb.resources.

Example ResourceReservation section

Only user1 and user2 can make advance reservations on hostA and hostB. The reservation time window is between 8:00 a.m. and 6:00 p.m. every day:

Begin ResourceReservation
NAME        = dayPolicy
USERS       = user1 user2     # optional
HOSTS       = hostA hostB     # optional
TIME_WINDOW = 8:00-18:00      # weekly recurring reservation
End ResourceReservation 

user1 can add the following reservation for user user2 to use on hostA every Friday between 9:00 a.m. and 11:00 a.m.:

% user1@hostB> brsvadd -m "hostA" -n 1 -u "user2" -t "5:9:0-5:11:0"
Reservation "user2#2" is created

Users can only delete reservations they created themselves. In the example, only user1 can delete the reservation; user2 cannot. Administrators can delete any reservation created by users.

HOSTS

Syntax

HOSTS = [~]host_name | [~]host_group | all | allremote | all@cluster_name ...

Description

A space-separated list of hosts and host groups defined in lsb.hosts on which the administrators or users specified in the USERS parameter can create advance reservations.

The hosts can be local to the cluster or hosts leased from remote clusters.

If a group contains a subgroup, the reservation configuration applies to each member in the subgroup recursively.

Use the keyword all to configure reservation policies that apply to all local hosts in a cluster not explicitly excluded. This is useful if you have a large cluster but you want to use the not operator (~) to exclude a few hosts from the list of hosts where reservations can be created.

Use the keyword allremote to specify all hosts borrowed from all remote clusters.


You cannot specify host groups or host partitions that contain the allremote keyword.

Use all@cluster_name to specify the group of all hosts borrowed from one remote cluster. You cannot specify a host group or partition that includes remote resources.

With the MultiCluster resource leasing model, the not operator (~) can be used to exclude local hosts or host groups. You cannot use the not operator (~) with remote hosts.
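The interaction of the all keyword and the not operator (~) on local hosts can be sketched as follows. This is a hypothetical model of the resolution rules described above, not LSF code; remote and leased hosts are deliberately out of scope here.

```python
def resolve_hosts(spec, local_hosts):
    """Resolve a HOSTS value such as "all ~hostC" against the cluster's
    local server hosts. Models only the 'all' keyword and the not
    operator (~) on local hosts."""
    included, excluded = set(), set()
    for tok in spec.split():
        if tok == "all":
            # 'all' means every local host not explicitly excluded.
            included |= set(local_hosts)
        elif tok.startswith("~"):
            excluded.add(tok[1:])
        else:
            included.add(tok)
    return sorted(included - excluded)

print(resolve_hosts("all ~hostC", ["hostA", "hostB", "hostC"]))
# ['hostA', 'hostB']
```

This mirrors the intended use of ~ in a large cluster: start from all local hosts and carve out the few where reservations must not be created.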

Examples

Default

all allremote (users can create reservations on all server hosts in the local cluster and on all hosts leased from remote clusters).

NAME

Syntax

NAME = text

Description

Required. Name of the ResourceReservation section.

Specify any ASCII string 40 characters or less. You can use letters, digits, underscores (_) or dashes (-). You cannot use blank spaces.

Example

NAME = reservation1

Default

None. You must provide a name for the ResourceReservation section.

TIME_WINDOW

Syntax

TIME_WINDOW = time_window ...

Description

Optional. Time window for users to create advance reservations. The time for reservations that users create must fall within this time window.

Use the same format for time_window as the recurring reservation option (-t) of brsvadd:

[day:]hour[:minute]

with the following ranges:

o  day of the week: 0-6 (0 is Sunday)
o  hour: 0-23
o  minute: 0-59

Specify a time window in one of the following ways:

o  hour-hour
o  hour:minute-hour:minute
o  day:hour:minute-day:hour:minute

You must specify at least the hour. Day of the week and minute are optional. Both the start time and end time values must use the same syntax. If you do not specify a minute, LSF assumes the first minute of the hour (:00). If you do not specify a day, LSF assumes every day of the week. If you do specify the day, you must also specify the minute.

You can specify multiple time windows, but they cannot overlap. For example:

TIME_WINDOW = 8:00-14:00 18:00-22:00

is correct, but

TIME_WINDOW = 8:00-14:00 11:00-15:00

is not valid.
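The defaulting and non-overlap rules above can be sketched with a hypothetical validator. This is an illustration of the rules as stated (missing minutes default to :00, windows must not overlap), not LSF's parser; day-of-week prefixes are not modeled.

```python
def parse_window(w):
    """Convert "8:00-14:00" (or "8-14") to (start, end) in minutes.
    A missing minute defaults to the first minute of the hour (:00)."""
    def minutes(t):
        parts = [int(p) for p in t.split(":")]
        hour = parts[0]
        minute = parts[1] if len(parts) > 1 else 0  # default :00
        return hour * 60 + minute
    start, end = w.split("-")
    return minutes(start), minutes(end)

def windows_valid(spec):
    """True if none of the space-separated time windows overlap."""
    spans = sorted(parse_window(w) for w in spec.split())
    # Each window must end before (or exactly when) the next begins.
    return all(prev_end <= start
               for (_, prev_end), (start, _) in zip(spans, spans[1:]))

print(windows_valid("8:00-14:00 18:00-22:00"))  # True  (no overlap)
print(windows_valid("8:00-14:00 11:00-15:00"))  # False (11:00 < 14:00)
```

The second spec fails because the second window starts at 11:00, inside the first window, matching the invalid example above.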

Example

TIME_WINDOW = 8:00-14:00

Users can create advance reservations with begin time (brsvadd -b), end time (brsvadd -e), or time window (brsvadd -t) on any day between 8:00 a.m. and 2:00 p.m.

Default

Undefined (any time)

USERS

Syntax

USERS = [~]user_name | [~]user_group ... | all

Description

A space-separated list of user names or user groups who are allowed to create advance reservations. Administrators, root, and all users or groups listed can create reservations.

If a group contains a subgroup, the reservation policy applies to each member in the subgroup recursively.

User names must be valid login names. User group names can be LSF user groups or UNIX and Windows user groups.

Use the keyword all to configure reservation policies that apply to all users or user groups in a cluster. This is useful if you have a large number of users but you want to exclude a few users or groups from the reservation policy.

Use the not operator (~) to exclude users or user groups from the list of users who can create reservations.


The not operator does not exclude LSF administrators from the policy.
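The special status of administrators under the not operator can be sketched as follows. This is a hypothetical model of the stated policy (admins always pass; ~ excludes ordinary users from all), not LSF code; the function name and arguments are assumptions.

```python
def can_reserve(user, users_spec, all_users, admins):
    """Model a USERS value such as "all ~user3": administrators can
    always create reservations; the not operator (~) excludes listed
    users from 'all' but never excludes an administrator."""
    if user in admins:
        return True  # the not operator does not apply to administrators
    allowed, excluded = set(), set()
    for tok in users_spec.split():
        if tok == "all":
            allowed |= set(all_users)
        elif tok.startswith("~"):
            excluded.add(tok[1:])
        else:
            allowed.add(tok)
    return user in allowed - excluded

print(can_reserve("user3", "all ~user3",
                  ["user1", "user2", "user3"], ["lsfadmin"]))   # False
print(can_reserve("lsfadmin", "all ~lsfadmin",
                  ["user1"], ["lsfadmin"]))                     # True
```

Note the second call: even listing an administrator after ~ does not remove the administrator's ability to create reservations.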

Example

USERS = user1 user2

Default

all (all users in the cluster can create reservations)

[ Top ]


Sample lsb.resources File

# $Id: lsb.resources,v 1.6 2002/01/04 19:37:44 waliu Exp $
#
# Policies for resource allocation. 
#
# This file can contain the following types of sections:
# o  Limit
# o  HostExport
# o  SharedResourceExport
# o  ResourceReservation

# Limit sections set limits for how much resources must be available
# for different classes of jobs to start, and which resource
# consumers the limits apply to.

# Each set of limits is defined in a Limit section enclosed by
# Begin Limit and End Limit.

# Limit sections have the following parameters:

# SLOTS sets a limit on the total number of slots that can be used by
# specific jobs.

# SLOTS_PER_PROCESSOR sets a limit on the number of slots based on the
# number of processors on each of the hosts affected by the limit.  It
# can only be used with the PER_HOST parameter.

# MEM, SWP, and TMP set limits (in MB) on the amount of memory, swap and
# temp space.  If the PER_HOST parameter is set for the limit, then the
# amount can also be given as a percentage of total memory, swap or temp
# on each host in the limit.

# LICENSE and RESOURCES set limits on the total amount of shared resources
# used by specific jobs.

# QUEUES, USERS, HOSTS, and PROJECTS specify which jobs the limits apply
# to.  They are space-separated lists of names of queues in lsb.queues,
# users and user groups in lsb.users, hosts and host groups in
# lsb.hosts, and projects.

# You can use PER_QUEUE, PER_USER, PER_HOST, and PER_PROJECT instead of
# QUEUES, USERS, HOSTS, and PROJECTS respectively. In this case, a separate
# limit is created for each queue, or each user, or each host, or each project
# specified.

#Begin Limit
#NAME = develop_group_limit
#USERS = develop
#PER_HOST = all
#SLOTS = 1
#MEM = 50%
#End Limit

# Example: limit usage of hosts in 'license1' group: 
# - 10 jobs can run from normal queue
# - any number can run from short queue, but they can only use 200M mem in total
# - each other queue can run 30 jobs, each queue using up to 300M mem in total
#Begin Limit
#PER_QUEUE               HOSTS       SLOTS   MEM  # Example
#normal                  license1    10      -
#short                   license1    -       200
#(all ~normal ~short)    license1    30      300
#End Limit

# Example: Jobs from 'crash' project can use 10 'lic1' licenses, while jobs
# from all other projects together can use 5.
#Begin Limit
#PROJECTS        LICENSE
#crash           ([lic1,10])
#(all ~crash)    ([lic1,5])
#End Limit

# The sections HostExport and SharedResourceExport export 
# hosts and shared resources from this cluster to other clusters.  
# Requires MultiCluster license in each cluster.

# Export specific hosts to other clusters
#Begin HostExport
#PER_HOST     = hostA hostB         # export host list
#SLOTS        = 5                   # for each host, export 5 job slots
#DISTRIBUTION = [cluster1, 20] [cluster2, 80]  # share distribution for remote
                                    # clusters:
                                    # cluster <cluster1> has 20 shares,
                                    # cluster <cluster2> has 80 shares
#MEM          = 100                 # export 100M mem of each host
                                    # [optional parameter]
#SWP          = 100                 # export 100M swp of each host
                                    # [optional parameter]
#End HostExport
#
# Export a group of workstations
#Begin HostExport
#RES_SELECT   =  type == LINUX      # selection criteria for the export hosts
#NHOSTS       = 10                  # export 10 machines at most
#DISTRIBUTION = [cluster1, 60] [cluster2, 40]  # share distribution for remote
                                    # clusters:
                                    # cluster <cluster1> has 60 shares
                                    # cluster <cluster2> has 40 shares
#MEM          = 100                 # export 100M mem of each host
                                    # [optional parameter]
#SWP          = 100                 # export 100M swp of each host
                                    # [optional parameter]
#End HostExport
#
# Export shared resources to remote clusters (user-defined host-based
# load indices can't be exported).
#Begin SharedResourceExport
#NAME         = licenseX               # export resource "licenseX", which is
                                       # defined in lsf.shared; each section
                                       # can only export one type of shared
                                       # resource
#NINSTANCES   = 10                     # export 10 instances of licenseX
                                       # at most
#DISTRIBUTION = [cluster1, 30] [cluster2, 70]  # share distribution for remote
                                       # clusters:
                                       # cluster <cluster1> has 30 shares
                                       # cluster <cluster2> has 70 shares

#End SharedResourceExport
#
# The ResourceReservation section defines an advance reservation policy. 
# It specifies:
# o  Users or user groups that can create reservations
# o  Hosts that can be used for the reservation
# o  Time window when reservations can be created

# Begin ResourceReservation
# NAME        = reservation1
# USERS       = user1 user2
# HOSTS       = hostA hostB
# TIME_WINDOW = 8:00-13:00
# End ResourceReservation
#

[ Top ]


SEE ALSO

lsb.queues(5), lsb.users(5), lsb.hosts(5)

[ Top ]




      Date Modified: February 24, 2004
Platform Computing: www.platform.com

Platform Support: support@platform.com
Platform Information Development: doc@platform.com

Copyright © 1994-2004 Platform Computing Corporation. All rights reserved.