small queue. The script will execute in the working directory and run under the conda environment base.
library(bpt)
# R code to be written into the R script
r_command <- "
print('hello world')
"
# Identifier used as the file prefix for the generated scripts
id <- 'my_script'
# Write the R script (my_script.r) to the working directory
slurm_write_r(id, dir = '.', r_command)
# Write the matching Slurm submission script (my_script.sh)
slurm_write_script(
  id,
  dir = '.',
  nodes = 1,
  tasks = 1,
  partition = 'small',
  conda = 'base',
  wd = '.',
  walltime = '1:00:00'
)
The previous commands will generate my_script.r and my_script.sh under the working directory.
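The exact contents of the generated files are determined by bpt, but with the settings above the Slurm script would typically look something like the sketch below (the #SBATCH directives are standard Slurm options; the cd/conda/Rscript lines are assumptions about how bpt applies the wd, conda, and id arguments):
#!/bin/bash
#SBATCH --job-name=my_script
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --partition=small
#SBATCH --time=1:00:00
cd .
source $(conda info --base)/etc/profile.d/conda.sh
conda activate base
Rscript my_script.r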
* In the terminal: submit the Slurm script.
sbatch my_script.sh
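If submission succeeds, sbatch reports the assigned job ID, for example (the number is illustrative):
Submitted batch job 123456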
* In the terminal: request an interactive session with X11 forwarding.
srun --nodes=1 --ntasks-per-node=4 --mem=32gb -t 24:00:00 -p interactive --x11 --pty bash
Since the k40 nodes on Mesabi do not support node sharing, it may not be necessary to explicitly specify --ntasks-per-node and --mem when requesting them (this has not been verified); the GPU example below omits both options.
The website indicates that "Users are limited to a single job in the interactive and interactive-gpu partitions." In practice, however, it appears to be acceptable to establish multiple interactive GPU sessions:
srun --nodes=1 -t 24:00:00 -p interactive-gpu --gres=gpu:k40:1 --x11 --pty bash
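Once the interactive GPU session starts, the allocated k40 can be verified with nvidia-smi (the standard NVIDIA utility, not a Slurm command):
nvidia-smi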
* In the terminal: monitor and manage jobs.
scancel <jobid>   # cancel a job (job ID as reported by sbatch/squeue)
squeue -al        # list all jobs in the queue, long format
squeue --me       # list only your own jobs
sinfo             # show partition and node status
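For reference, squeue --me prints one row per job in roughly the following default layout (values are illustrative; the ST column shows the job state, e.g. R = running, PD = pending):
JOBID  PARTITION  NAME      USER   ST  TIME  NODES  NODELIST(REASON)
123456 small      my_scrip  user1  R   0:05  1      cn0123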