
Logging In

ssh <username>@login.hpc.uams.edu

If this is your first time logging into the system, now is a good time to change your password:

passwd

You are now on the HPC login node. From here you can stage your data and jobs to be submitted to the computational nodes in the cluster. You can view the current load of the overall system from the login node with the showstate command.
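
showstate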

Submit a Simple Job

While the login node is a relatively powerful server, it should not be used to do any actual work, as that could impede others' ability to use the system. We use Torque to manage jobs and resources on the cluster. The qsub program will be your primary interface for submitting jobs to the cluster. In its simplest form you can feed it a command on standard input and it will schedule and run a job. Here we will schedule a single command, lscpu, to run using all of the defaults:

echo lscpu | qsub

The output from jobs will end up in your home directory as they run. Since we didn't name the job we just submitted, you will find a file called STDIN.o##JOBID##, where ##JOBID## is the numeric job ID. It will contain the standard output of the lscpu command once it has finished running on the node it was assigned.

less STDIN.o########
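
The qsub program prints the identifier of each job it accepts, so you can capture it rather than hunting for the output file by hand. A minimal sketch, assuming the usual Torque behavior of printing an identifier like 1234.headnode (the exact hostname suffix depends on this cluster's configuration):

JOBID=$(echo lscpu | qsub)    # capture the job identifier that qsub prints
echo $JOBID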

Submit a Scripted Job

The qsub program takes many arguments to control where the job will be scheduled, and it can be fed a script of commands and arguments to run instead of a pipe. We will now create a script which contains both the scheduler arguments and the actual commands to be run.

nano cpuinfo.script

Below is a simple script which performs the same lscpu command as above, but also sets a few options.

#!/bin/bash
#PBS -M <YOUR_EMAIL>@uams.edu          #<---- Email address to notify
#PBS -m abe                            #<---- Status to notify the email (abort,begin,end)
#PBS -N CPUinfo                        #<---- Name of this job
#PBS -j oe                             #<---- Join both the standard out and standard error streams
                                       #<---- Commands below this point will be run on the assigned node
echo "Hello HPC"
lscpu
echo "Goodbye HPC"

Once this script is created it can be run by passing it to the qsub program. After the job has finished, there will be a file named CPUinfo.o###### in your home directory (matching the name set with #PBS -N) which will contain the output.

qsub cpuinfo.script

When submitting a script you can also pass arguments on the command line to qsub. Here we submit the lscpu script again, except this time we ask for a node with a Xeon processor. Compare the outputs of the two jobs, or experiment with different features that can be requested.

qsub -l feature=xeon cpuinfo.script
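
The -l flag accepts other resource requests as well. As a sketch (the exact resources and limits available depend on this cluster's configuration), the following asks for one node with four processor cores and a ten-minute walltime:

qsub -l nodes=1:ppn=4,walltime=00:10:00 cpuinfo.script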

Monitoring Jobs

The jobs so far have been quick to run, but often you will want to monitor longer-running jobs. Remember that the showstate program displays the state of the entire cluster. There are several other programs which can help you monitor your own jobs.

qstat

This program will display your current and recent jobs submitted to the cluster. The S column contains the current status of each job; common Torque status codes include Q (queued), R (running), E (exiting), and C (recently completed).

qstat -f

This option will print the full status of current jobs and is useful for finding the exec_host of a running job. Knowing the host allows you to peek at what the node is currently doing in a few ways.
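
For example, to pull out just the exec_host line for a particular job (substitute the job ID that qsub reported):

qstat -f <jobid> | grep exec_host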

pbsnodes <nodename>

This shows the configuration of an individual node, as well as its current status.

With the node name we can use the parallel shell program pdsh to execute commands directly on a node. Use it only for short, non-intensive commands, as it will take CPU time from any jobs executing on that node. Here are some possibly useful commands.

pdsh -w <nodename> free -h
pdsh -w <nodename> uptime
pdsh -w <nodename> top -b -n1
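
Putting these together, here is a small sketch that extracts the node name from a job's exec_host entry and checks that node's load. It assumes the exec_host line has the typical node##/core form:

NODE=$(qstat -f <jobid> | grep exec_host | awk '{print $3}' | cut -d/ -f1)
pdsh -w $NODE uptime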

Installing Software

The HPC has some software packages already installed; however, they need to be activated using Lmod. You can browse available modules, or search for them and see descriptions, with these commands.

module avail
module spider <search>

If a module is already available you simply need to load it. Note, however, that this only changes your current shell's environment variables. If you plan on using anything from a module during a job, you must run module load in the job script itself, before you use the commands it enables.

module load <module_name>
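
For instance, here is a minimal job script sketch that loads a module before using it (the module name and command are placeholders for whatever you actually loaded):

#!/bin/bash
#PBS -N module_demo                    #<---- Name of this job
#PBS -j oe                             #<---- Join standard out and standard error
module load <module_name>              #<---- Load inside the script; your login shell's modules do not carry over
<command_from_module> --version        #<---- Any command the module provides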

One of the most useful modules is EasyBuild, a build and installation framework designed for HPC systems. Many scientific toolsets can be installed with it; once they are, they can be activated using the module commands above. Note that EasyBuild itself must be loaded before anything installed with it can be loaded; the module spider <search> command will remind you of this if you forget.

module load EasyBuild
wget https://raw.githubusercontent.com/easybuilders/easybuild-easyconfigs/master/easybuild/easyconfigs/m/Miniconda3/Miniconda3-4.5.12.eb
eb Miniconda3-4.5.12.eb
module avail
module load Miniconda3
conda --version
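
As a quick usage sketch once Miniconda3 is loaded (the environment name and Python version here are placeholders):

conda create -n <env_name> python=3.7
source activate <env_name>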