The Rosalind cluster is a shared system available to all staff and students at King's College London, along with staff belonging to, or funded by, the NIHR Maudsley Biomedical Research Centre (BRC) or Guy's and St Thomas' BRC. The underlying compute and storage hardware was funded by one or more of the Rosalind Partners. To reflect this funding, access to different groups of hardware is prioritised based on membership (or not) of these partner organisations. In the Slurm scheduler this is achieved by grouping compute nodes into one or more partitions. The partitions you can access determine which hardware you can run on, any limits on resources, and the priority of your jobs relative to those scheduled via other partitions.
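As a quick way to see which partitions are visible to your account, assuming a standard Slurm installation, you can query the scheduler with `sinfo` (the exact output depends on your group memberships):

```shell
# Summarise partitions: state, node counts and time limits
sinfo --summarize

# Show which groups may submit to each partition
# (%P = partition name, %g = allowed groups, %l = time limit)
sinfo --format="%P %g %l"
```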
The table below describes the partitions made generally available for use on the Rosalind cluster. The priority value is used to dictate which job to run first when resources become available: in this configuration, jobs belonging to users in the partner organisations will always run before jobs from users elsewhere at King's. The shared partitions allow all the cluster hardware to be made available to all cluster users, but these partitions are limited such that jobs from them will never be able to exceed the stated restrictions, e.g. consume more than 10% of total cluster CPU cores. This is true even if the remaining 90% of cluster cores were idle.
| Partition Name | Group Restriction | Description | Priority | Resource Limits |
|---|---|---|---|---|
| brc | prtn_gstt, prtn_slam | Users from the GSTT and SLaM BRCs | 10 | none |
| nms_research | nms_research | NMS research staff and PhD students | 10 | none |
| nms_research_gpu | nms_research | NMS research staff and PhD students | 10 | 1 GPU job per user |
| nms_teach | nms_teach | NMS taught students | 10 | none |
| nms_teach_gpu | nms_teach | NMS taught students | 10 | 1 GPU job per user |
| shared | none | All King's staff and students | 5 | 10% total cluster CPU/memory |
| shared_gpu | none | All King's staff and students | 5 | 1 of each available GPU model |
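As a sketch, a batch script targeting one of these partitions might look like the following. The partition name `shared` comes from the table above; the job name, resource requests and executable are placeholders for illustration:

```shell
#!/bin/bash
#SBATCH --partition=shared       # run via the shared partition (priority 5)
#SBATCH --job-name=example       # placeholder job name
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4        # example request; must stay within partition limits
#SBATCH --mem=8G
#SBATCH --time=01:00:00

srun ./my_program                # hypothetical executable
```

Submit the script with `sbatch myjob.sh`. For GPU work via a `*_gpu` partition, you would additionally request a GPU, e.g. `#SBATCH --gres=gpu:1`, bearing in mind the one-job/one-GPU-per-user limits listed in the table.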
In cases where researchers have grant funding of £50,000 or more available for computational hardware, we can discuss the purchase and reservation of dedicated compute nodes. To explore this further, get in touch via a support ticket.