The swift.properties file format
Site definitions
Site definitions in the swift.properties file begin with "site". The second word is the name of the site you are defining; in these examples we will define a site called westmere. The third word is the property.
For example:
site.westmere.jobQueue=fast
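Several such lines together make up one site definition. A minimal sketch, with placeholder values:

```
# A hypothetical minimal site definition; the queue and walltime values are placeholders.
site.westmere.jobQueue=fast
site.westmere.jobWalltime=01:00:00
```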
Before the site properties are listed, it’s important to understand the terminology used.
A task, or app task, is an instance of a program as defined in a Swift app() function.
A worker is the program that launches app tasks.
A job is a scheduler-level construct: it is the mechanism by which workers are launched.
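To make the relationship between these three levels concrete, here is a hypothetical sketch using properties described in the table below; the values are illustrative only:

```
# Hypothetical illustration of the job/worker/task hierarchy; values are made up.
site.westmere.maxJobs=20         # scheduler jobs: each job launches workers on its nodes
site.westmere.jobGranularity=2   # each scheduler job requests 2 nodes
site.westmere.tasksPerWorker=12  # each worker runs up to 12 app tasks simultaneously
site.westmere.taskThrottle=100   # cap on active app tasks across all workers
```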
Below is the list of valid site properties, with a brief explanation of what each does and an example swift.properties entry.
| Property | Description | Example |
|---|---|---|
| condor | Passes parameters directly through to the submit script generated for the Condor scheduler. For example, the setting "site.osgconnect.condor.+projectname=Swift" will generate the line "+projectname = Swift". | site.osgconnect.condor.+projectname=Swift |
| filesystem | Defines how files should be accessed. | site.westmere.filesystem=local |
| jobGranularity | Specifies the granularity of a job, in nodes. | site.westmere.jobGranularity=2 |
| jobManager | Specifies how jobs will be launched. The supported job managers are "cobalt", "slurm", "condor", "pbs", "lsf", "local", and "sge". | site.westmere.jobManager=slurm |
| jobProject | Sets the project name for the job scheduler. | site.westmere.jobProject=myproject |
| jobQueue | Sets the name of the scheduler queue to use. | site.westmere.jobQueue=westmere |
| jobWalltime | The maximum amount of time allocated to a scheduler job, in hh:mm:ss format. | site.westmere.jobWalltime=01:00:00 |
| maxJobs | The maximum number of scheduler jobs to submit. | site.westmere.maxJobs=20 |
| maxNodesPerJob | The maximum number of nodes to request per scheduler job. | site.westmere.maxNodesPerJob=2 |
| pe | The parallel environment to use for SGE schedulers. | site.sunhpc.pe=mpi |
| providerAttributes | Allows the user to pass attributes directly through to the scheduler submit script. Currently only implemented for sites that use PBS. | site.beagle.providerAttributes=pbs.aprun;pbs.mp |
| slurm | Passes parameters directly through to the submit script generated for the Slurm scheduler. For example, the setting "site.midway.slurm.mailuser=username" generates the line "#SBATCH --mail-user=username". | site.midway.slurm.mailuser=username |
| stagingMethod | When provider staging is enabled, this option specifies the staging mechanism used for each site. If set to file, staging is done from a filesystem accessible to the coaster service (typically running on the head node). If set to proxy, staging is done from a filesystem accessible to the client machine that Swift is running on, and is proxied through the coaster service. If set to sfs (short for "shared filesystem"), staging is done by copying files to and from a filesystem accessible to the compute node (such as an NFS or GPFS mount). | site.osg.stagingMethod=file |
| taskDir | Tasks will be run from this directory. In the absence of a taskDir definition, Swift runs tasks from workdir. | site.westmere.taskDir=/scratch/local/$USER/wo |
| tasksPerWorker | The number of tasks that each worker can run simultaneously. | site.westmere.tasksPerWorker=12 |
| taskThrottle | The maximum number of active tasks across all workers. | site.westmere.taskThrottle=100 |
| taskWalltime | The maximum amount of time a task may run, in hh:mm:ss format. | site.westmere.taskWalltime=01:00:00 |
| site | The name of the site or sites to run on. This is the same as running with swift -site. | site=westmere |
| userHomeOverride | Sets the Swift user home, which must be on a shared filesystem. Defaults to $HOME. For clusters where $HOME is not accessible to the worker nodes, you may override the value to point to a shared directory that you own. | site.beagle.userHomeOverride=/lustre/beagle/use |
| workdir | Specifies where on the site files can be stored. This directory must be available on all worker nodes that will be used for execution; a shared cluster filesystem is appropriate. Note that this must be an absolute path. | site.westmere.workdir=/scratch/midway/$USER/ |
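Putting several of these properties together, a complete site definition might look like the sketch below. It assumes a Slurm cluster; the queue, project, and path values are placeholders rather than recommendations:

```
# Hypothetical swift.properties site definition for a Slurm cluster.
site=westmere
site.westmere.jobManager=slurm
site.westmere.jobQueue=westmere
site.westmere.jobProject=myproject                 # placeholder project name
site.westmere.jobWalltime=01:00:00
site.westmere.maxJobs=20
site.westmere.tasksPerWorker=12
site.westmere.taskThrottle=100
site.westmere.taskWalltime=00:30:00                # kept shorter than jobWalltime, since tasks run inside scheduler jobs
site.westmere.workdir=/scratch/midway/$USER/work   # must be an absolute path on a shared filesystem
site.westmere.slurm.mailuser=username              # pass-through: generates "#SBATCH --mail-user=username"
```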