Scheduler policy

The Rosalind cluster is a shared system available to all staff and students at King's College London, along with staff belonging to, or funded by, the NIHR Maudsley Biomedical Research Centre (BRC) or Guy's and St Thomas' BRC. The underlying compute and storage hardware was funded by one or more of the Rosalind Partners. To reflect this funding, access to different groups of hardware is prioritised according to membership (or not) of these partner organisations. In the Slurm scheduler this is achieved by grouping compute nodes into one or more partitions. The partitions you can access determine which hardware you can run on, the limits placed on your resource usage, and the priority of your jobs relative to those scheduled via other partitions.
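
As a rough sketch, the commands below show one way to check which Unix groups your account belongs to and which partitions are visible to you; the exact output depends on your membership and on the cluster's current configuration.

```bash
# List the Unix groups your account belongs to (e.g. prtn_gstt, nms_research)
groups

# Show the partitions you can see, the groups allowed to use them,
# their availability and their maximum job runtime
sinfo -o "%P %g %a %l"

# Show the full configuration of a single partition, e.g. the brc partition
scontrol show partition brc
```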

Partitions

The table below describes the partitions made generally available for use on the Rosalind cluster. The priority value determines which job runs first when resources become available. In this configuration, jobs belonging to users in the partner organisations will always run before jobs from users elsewhere at King's.
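
If you want to see how this plays out in practice, the standard Slurm tools below show the priority the scheduler has assigned to queued jobs; the actual values depend on the site's priority weighting.

```bash
# Show job id, partition, user and the integer priority of queued jobs
squeue --state=PENDING -o "%i %P %u %Q"

# Break a pending job's priority down into its contributing factors
sprio -l
```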

The shared partitions allow all the cluster hardware to be made available to all cluster users, but these partitions are limited so that jobs running in them can never exceed the stated restrictions, e.g. consume more than 10% of total cluster CPU cores, even if the remaining 90% of cluster cores are idle.
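
To get a sense of how much of each partition is currently busy before relying on the shared partitions, you can summarise allocated versus idle CPUs per partition; this uses standard sinfo format syntax, and the numbers will of course vary.

```bash
# Show CPUs per partition in the form allocated/idle/other/total
sinfo -o "%P %C"
```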

The test partitions make selected cluster hardware available to all cluster users so they can test their jobs before making large submissions. They have a higher priority than the other partitions, but are limited so that jobs in them can never exceed the stated restrictions, e.g. have more than one running job per user, run for long periods of time, or use a large number of nodes/GPUs per job.
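
For example, a short smoke test could be submitted within the test partition limits (one job per user, at most two nodes, five minutes of runtime); the script names here are only placeholders.

```bash
# Submit a short test job to the test partition, staying within its
# 2-node, 5-minute limits (my_test_job.sh is a placeholder script)
sbatch --partition=test --nodes=1 --ntasks=4 --time=00:05:00 my_test_job.sh

# Or a quick GPU test on the test_gpu partition (limited to 1 GPU per job)
sbatch --partition=test_gpu --gres=gpu:1 --time=00:05:00 my_gpu_test.sh
```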

| Partition Name | Group Restriction | Description | Priority | Resource Limits |
| --- | --- | --- | --- | --- |
| brc | prtn_gstt, prtn_slam | Users from the GSTT and SLaM BRCs | 5000 | none |
| nms_research | nms_research | NMS research staff and PhD students | 5000 | none |
| nms_research_gpu | nms_research | NMS research staff and PhD students | 5000 | 1 GPU job per user |
| nms_teach | nms_teach | NMS taught students | 4000 | none |
| nms_teach_gpu | nms_teach | NMS taught students | 4000 | 1 GPU job per user |
| shared | none | All King's staff and students | 1000 | 10% of total cluster CPU/memory |
| shared_gpu | none | All King's staff and students | 1000 | 1 of each available GPU model |
| test | none | All King's staff and students | 6000 | 1 job per user; up to 2 nodes per job; 5 min max runtime |
| test_gpu | none | All King's staff and students | 6000 | 1 job per user; 1 GPU per job; 5 min max runtime |
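
As an illustrative sketch only, a batch script targeting one of the GPU partitions might look like the following; the partition you choose, the resource requests and the program name are assumptions that depend on your access and your workload.

```bash
#!/bin/bash
#SBATCH --job-name=example_gpu_job   # placeholder job name
#SBATCH --partition=shared_gpu       # swap for nms_research_gpu, etc. if you have access
#SBATCH --gres=gpu:1                 # request a single GPU
#SBATCH --cpus-per-task=4            # placeholder CPU request
#SBATCH --mem=16G                    # placeholder memory request
#SBATCH --time=02:00:00              # placeholder wall-time limit

# Run the workload (my_gpu_program is a placeholder)
srun ./my_gpu_program
```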

Private partitions

In cases where researchers have grant funding of £50,000 or above available for computational hardware, we can discuss the purchase and reservation of dedicated compute nodes. To explore this further, please get in touch via a support ticket.