### I forgot my password - what now?

You can reset it here: https://www.metacenter.no/user/

### How do I change my password on Stallo?

The `passwd` command known from other Linux systems does not work here. The Stallo system uses a centralised database for user management, which overrides any password changes done locally on Stallo.

## Installing software

### I need Python package X but the one on Stallo is too old or I cannot find it

You can choose different Python versions with the module system. See here: Software Module Scheme

In cases where this still doesn’t solve your problem or you would like to install a package yourself, please read the next section below about installing without sudo rights.

### Can I install Python software as a normal user without sudo rights?

Yes. The recommended way to achieve this is by using virtual environments.

As an example we install the Biopython package, here using the Python/3.6.4-intel-2018a module:

```
$ module load Python/3.6.4-intel-2018a
$ virtualenv venv
$ source venv/bin/activate
$ pip install biopython
```


Next time you log into the machine you have to activate the virtual environment again:

```
$ source venv/bin/activate
```

If you want to leave the virtual environment again, type:

```
$ deactivate
```


You do not have to call it "venv", and it is no problem to have many virtual environments in your home directory. Each starts as a clean Python setup which you can then modify. This is also a great way to have different versions of the same module installed side by side.
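As a minimal sketch of keeping environments side by side, here two are created with the standard-library `venv` module (which works like the `virtualenv` command used above); the directory names are just examples:

```shell
# Two independent environments in the home directory (names are examples).
python3 -m venv "$HOME/bio-env"
python3 -m venv "$HOME/num-env"

# Each environment has its own interpreter, pip and site-packages, so e.g.
#   $HOME/bio-env/bin/pip install biopython
# leaves $HOME/num-env completely untouched.

# Activate whichever setup the current session needs:
source "$HOME/bio-env/bin/activate"
python -c 'import sys; print(sys.prefix)'   # the prefix is inside bio-env
deactivate
```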

If you want to inherit system site packages into your virtual environment, do this instead:

```
$ virtualenv --system-site-packages venv
$ source venv/bin/activate
$ pip install biopython
```

## Compute and storage quota

### How can I check my disk quota and disk usage?

To check how large your disk quota is, and how much of it you have used, you can use the following command:

```
$ quota -s
```


Only the home and project partitions have quotas.

### How many CPU hours have I spent?

For a simple summary, you can use the command `cost`. For more details, you can use:

```
$ gstatement --hours --summarize -p PROSJEKT -s YYYY-MM-DD -e YYYY-MM-DD
```

For a detailed overview of usage you can use:

```
$ gstatement --hours -p PROSJEKT -s YYYY-MM-DD -e YYYY-MM-DD
```


For more options see:

$srun -N 1 -t 1:0:0 --pty bash -I # reserve and log in on a compute node  This example assumes that you are running an X-server on your local desktop, which should be available for most users running Linux, Unix and Mac Os X. If you are using Windows you must install some X-server on your local PC. ### How can I access a compute node from the login node?¶ Log in to stallo.uit.no and type e.g.: $ ssh compute-1-3


or use the shorter version:

```
$ ssh c1-3
```

### Why does my job not start or give me error feedback when submitting?

Most often the reason a job is not starting is that Stallo is full at the moment and there are many jobs waiting in the queue. But sometimes there is an error in the job script and you are asking for a configuration that is not possible on Stallo. In such a case the job will not start.

To find out how to monitor your jobs and check their status see Monitoring your jobs.

Below are a few cases of why jobs don’t start or error messages you might get:

**Memory per core**

"When I try to start a job with 2 GB of memory per core, I get the following error:

```
sbatch: error: Batch job submission failed: Requested node configuration is not available
```

With 1 GB/core it works fine. What might be the cause of this?"

On Stallo we have two node configurations: 16-core and 20-core nodes, both with a total of 32 GB of memory per node. If you ask for full nodes by specifying both the number of nodes and cores/node together with 2 GB of memory/core, you are asking for 20 cores/node and 40 GB of memory per node. This configuration does not exist on Stallo. If you ask for 16 cores, still with 2 GB/core, there is a sort of buffer within SLURM not allowing you to consume absolutely all memory available (the system needs some to work). 2000 MB/core works fine, but not 2 GB/core for 16 cores/node.

The solution we want to push in general is this:

```
#SBATCH --ntasks=80   # (number of nodes * number of cores, i.e. 5*16 or 4*20 = 80)
```


If you then ask for 2000 MB of memory/core, you will be given 16 cores/node and a total of 5 nodes. 4000 MB will give you 8 cores/node, and everyone is happy. Just note the info about the PE CPU-hour quota and accounting: `--mem-per-cpu=4000MB` will cost you twice as much as `--mem-per-cpu=2000MB`.
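The arithmetic above can be checked with a bit of shell arithmetic (this is only an illustration, not a Slurm tool; the 16-core / 32 GB node figures come from the text):

```shell
# Why 2 GB/core fails on a 16-core, 32 GB node while 2000 MB/core works:
total_mb=$(( 32 * 1024 ))   # 32768 MB on the node

echo $(( 16 * 2048 ))       # 2 GB/core    -> 32768 MB: the entire node,
                            # leaving nothing for the system, so Slurm refuses
echo $(( 16 * 2000 ))       # 2000 MB/core -> 32000 MB: leaves 768 MB free

# --ntasks=80 expressed as full nodes of either configuration:
echo $(( 5 * 16 ))          # 5 nodes * 16 cores = 80
echo $(( 4 * 20 ))          # 4 nodes * 20 cores = 80
```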

You can find an example here: First time you run a Gaussian job?

Please also note that if you want to use the whole memory on a node, do not ask for 32GB, but for 31GB or 31000MB as the node needs some memory for the system itself. For an example, see here: Example on how to allocate entire memory on one node

**Step memory limit**

"Why do I get slurmstepd: Exceeded step memory limit in my log/output?"

This means that a step of your job used more memory than the job had requested, and Slurm terminated it. Request more memory, for instance by increasing `--mem-per-cpu` in your job script.
### How can I run many short tasks?

The overhead in the job start and cleanup makes it impractical to run thousands of short tasks as individual jobs on Stallo.

The queueing setup on Stallo, or rather the accounting system, generates an overhead of about 1 second at each end of a job. This overhead is insignificant when running large parallel jobs, but creates scaling issues when running a massive number of shorter jobs. One can consider a collection of independent tasks as one large parallel job, and the aforementioned overhead becomes the serial, unparallelizable part of the job, because the queueing system can only start and account for one job at a time. This scaling problem is described by Amdahl's law.
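To see why this overhead matters, a toy calculation (the task count and durations are illustrative, only the ~1 s start and ~1 s finish overhead comes from the text):

```shell
# 10000 short tasks, each paying ~2 s of serial queue/accounting overhead
# if submitted as its own job:
tasks=10000
overhead=2
echo $(( tasks * overhead ))   # 20000 s (~5.5 h) of unparallelizable time

# By Amdahl's law this is a hard floor: even with unlimited cores, 10000
# individual jobs cannot finish in less than ~5.5 hours of queue work,
# while one bundled job pays the ~2 s overhead only once.
```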

If the tasks are extremely short, you can use the example below. If you want to spawn many jobs without polluting the queueing system, please have a look at Running many sequential jobs in parallel using job arrays.
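The job-array approach referenced above might look like the following sketch (the array range, time limit and the `dowork.sh` script name are placeholders, reusing the example script shown further down):

```shell
#!/usr/bin/env bash
#SBATCH --ntasks=1
#SBATCH --time=0-00:10:00
#SBATCH --array=1-100        # one array task per input, indices 1..100

# Slurm sets SLURM_ARRAY_TASK_ID to this array task's index;
# reuse it as the numerical argument to the work script:
./dowork.sh "$SLURM_ARRAY_TASK_ID"
```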

By using some shell trickery one can spawn and load-balance multiple independent tasks running in parallel within one node: just background the tasks, and poll until a task finishes before spawning the next:

```bash
#!/usr/bin/env bash

# Jobscript example that can run several tasks in parallel.
# All features used here are standard in bash so it should work on
# any sane UNIX/LINUX system.
# Author: roy.dragseth@uit.no
#
# This example will only work within one compute node so let's run
# on one node using all the cpu-cores:
#SBATCH --nodes=1

# We assume we will (in total) be done in 10 minutes:
#SBATCH --time=0-00:10:00

# Let us use all CPUs:
maxpartasks=$SLURM_TASKS_PER_NODE

# Let's assume we have a bunch of tasks we want to perform.
# Each task is done in the form of a shell script with a numerical argument:
#   dowork.sh N
# Let's just create some fake arguments with a sequence of numbers
# from 1 to 100, edit this to your liking:
tasks=$(seq 100)

cd "$SLURM_SUBMIT_DIR"

for t in $tasks; do
  # Do the real work, edit this section to your liking.
  # Remember to background the task or else we will run serially.
  ./dowork.sh "$t" &

  # You should leave the rest alone...

  # Count the number of background tasks we have spawned.
  # The jobs command prints one line per running task, so we only
  # need to count the number of lines.
  activetasks=$(jobs | wc -l)

  # If we have filled all the available cpu-cores with work, we poll
  # every second to wait for tasks to exit.
  while [ "$activetasks" -ge "$maxpartasks" ]; do
    sleep 1
    activetasks=$(jobs | wc -l)
  done
done

# Ok, all tasks spawned. Now we need to wait for the last ones to
# be finished before we exit.
echo "Waiting for tasks to complete"
wait
echo "done"
```

And here is the dowork.sh script:

```bash
#!/usr/bin/env bash

# Fake some work, $1 is the task number.
# Change this to whatever you want to have done.

# Sleep between 0 and 10 seconds:
let sleeptime=10*$RANDOM/32768

echo "Task $1 is sleeping for $sleeptime seconds"
sleep $sleeptime
echo "Task $1 has slept for $sleeptime seconds"
```
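On systems with a GNU-style `xargs`, the same one-node spawn-and-poll load balancing can be sketched in a single line (assuming the `dowork.sh` script above; the concurrency limit of 16 matches a 16-core node):

```shell
# Run the 100 tasks with at most 16 in flight at any time:
# -P limits concurrency, -n 1 passes one argument per invocation.
seq 100 | xargs -n 1 -P 16 ./dowork.sh
```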