SA 101 - Introduction to System Administration

Originally authored by D. Shin, revisions by S. Kirklin ~ 2/3/2011

Sudo

First off, there are two ways to issue commands as root on a linux cluster. They are:

$ sudo -i

Password:

# [commands]

or

$ sudo [commands]

Password:

http://xkcd.com/149/

Once authenticated, sudo sets a timestamp for the user. For five minutes from the timestamp, the user may execute further commands without being prompted for her password. Be very cautious when using sudo -i: while you are root, any command you issue will be carried out, and often cannot be easily undone. To make it easier to tell whether you are root, the base configuration file (see SA 256) will turn your prompt red when you are root. To stop being root, simply type:

# exit

Throughout this guide, many commands will need to be issued as root, and in most cases either of these approaches will work. There are, however, some commands that simply do not work or do not exist when issued as $ sudo [command], but do work when issued while logged in as root (cd, for example, is a shell builtin rather than a program, so sudo has nothing to execute). As an example, something like this may happen:

sjk648@josquin ~ $ cd /var/spool/pbs/sched_priv

-bash: cd: /var/spool/pbs/sched_priv/: Permission denied

sjk648@josquin ~ $ sudo cd /var/spool/pbs/sched_priv

Password:

sudo: cd: command not found

sjk648@josquin ~ $ sudo -i

josquin ~ # cd /var/spool/pbs/sched_priv

josquin sched_priv #

User management

Creating a new user

One can use the useradd command to add a new user to a cluster. Use the -m option to create a home directory named [user_id] in /home.

# useradd -m [user_id]

Then a temporary password must be created for the new account. Usually the password is set to be the same as the user id. Enter it twice.

# passwd [user_id]

Then set the password to expire at the first login, so that the user can choose his/her own password (except on Victoria, where this doesn't work):

# passwd -e [user_id]

So that this user can log into the compute nodes, the user's password must be distributed across the nodes. On our clusters, the account information on the master and the nodes must be identical for the rsh connections used by MPI calculations. So whenever a user updates a password, an administrator should distribute the password files to the nodes. One can simply run the distpasswd script:

josquin ~ # distpasswd

Copying shadow to nodes ...

Copying shadow- to nodes ...

Copying passwd to nodes ...

Copying passwd- to nodes ...

Copying group to nodes ...

Copying group- to nodes ...
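
The steps above can be collected into one small wrapper script. The following is only a sketch of a hypothetical helper (no such script is installed on the clusters); it assumes it is run from a root shell:

#!/bin/bash
# newuser.sh - sketch of the account-creation steps above (hypothetical helper)
# Usage: ./newuser.sh [user_id]
set -e
user_id="$1"
useradd -m "$user_id"     # create the account and /home/[user_id]
passwd "$user_id"         # set the temporary password (prompts twice)
passwd -e "$user_id"      # expire it so the user must choose a new one at first login
distpasswd                # push the passwd/shadow/group files out to the nodes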

Creating a user for ganglia/nodewatch

A user account for the ganglia monitoring webpage can be updated by modifying htpasswd in /var/www. You can use the htpasswd command to add or modify a user in the htpasswd file. For example:

josquin www # htpasswd /var/www/htpasswd [username]

New password:

Re-type new password:

Updating password for user [username]
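
If the htpasswd file does not exist yet (for example on a freshly set up cluster), the same command with the -c option will create it first:

josquin www # htpasswd -c /var/www/htpasswd [username]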

Node management

Ganglia

An administrator can see the status of the master and nodes of a cluster at a glance on the ganglia webpage. One should check regularly that the load on every cluster stays close to 100%; if it does not, certain nodes are down, or somebody is holding nodes without actually running jobs on them.

http://victoria.northwestern.edu/ganglia/

http://josquin.northwestern.edu/ganglia/

http://byrd.northwestern.edu/ganglia/

http://tallis.northwestern.edu/ganglia/
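
As a quick command-line cross-check when the ganglia load looks low, pbsnodes can list the nodes that the batch system currently considers down or offline (see the pbsnodes entry below); an empty listing means all nodes are up as far as PBS is concerned:

josquin ~ # pbsnodes -l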

Nodewatch

Please refer to the nodewatch section in the Microway manual. All files for nodewatch are located in

/var/www/localhost/htdocs/ganglia/data

(josquin, byrd, palestrina)

/var/www/html/ganglia/data

(victoria)

Here one can adjust the settings for nodewatch, such as the shutdown and warning temperature thresholds for each node and the address to which all warning emails are sent. Edit chassisconf.dat for temperature thresholds and clusterconf.dat for global settings, such as email. Don't forget to restart the nodewatch service whenever you make changes to those files.

kaien@josquin ~ $ sudo /etc/init.d/nodewatch restart

* Shutting down GANGLIA gmnod: ... [ ok ]

* Starting Nodewatch server ... [ ok ]

Backup/restoration of a computing node

One can create a backup image of a computing node and use it to restore the node as a first-aid measure. The procedure is quite straightforward.

Backup

Go to the ganglia webpage and click the ‘Microway control’ button at the top right. Enter the userid/password twice to log on and select the ‘Node backup’ menu on the left. Select a node to back up. It is a good idea to pick a node that is idle, otherwise it will take a long time to create the backup image. Then select a slot for the backup; there are six slots. Put in a backup comment, like: node18 backup on 11/11/2009. Then click the ‘Backup Now’ button at the bottom. The backup tar ball will be created in

/home/noderestore/backup/Slot#
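
Once the backup finishes, it is worth confirming from the command line that the tar ball actually appeared under the slot you selected, for example by listing the backup directory:

josquin ~ # ls -lh /home/noderestore/backup/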

Restoration

Before you actually restore a system, you have to select an image, if you have more than one image stored in different ‘slots’. Go to the ganglia webpage, select the ‘Node restore’ option, and pick an image from the dropdown menu. Then go to LG87/2020 Ridge, turn on or reset the problematic node, and hit F12 at the boot screen for network booting. Follow the on-screen instructions.

Real World Examples

Batch job killing

There are many jobs that I want to kill. How do I do that without killing them one by one?

kaien@victoria ~ $ a|grep execute

57661.master.cl.nort yongli victoria execute.q 5568 2 1 -- 72:00 R 68:51

57664.master.cl.nort yongli victoria execute.q 6237 2 1 -- 72:00 R 24:10

57665.master.cl.nort yongli victoria execute.q 27826 2 1 -- 72:00 R 24:10

57666.master.cl.nort yongli victoria execute.q 29582 2 1 -- 72:00 R 16:02

57667.master.cl.nort yongli victoria execute.q 28883 2 1 -- 72:00 R 16:00

57668.master.cl.nort yongli victoria execute.q -- 2 1 -- 72:00 Q --

57669.master.cl.nort yongli victoria execute.q -- 2 1 -- 72:00 Q --

57670.master.cl.nort yongli victoria execute.q -- 2 1 -- 72:00 Q --

57671.master.cl.nort yongli victoria execute.q -- 2 1 -- 72:00 Q --

57672.master.cl.nort yongli victoria execute.q -- 2 1 -- 72:00 Q --

57673.master.cl.nort yongli victoria execute.q -- 2 1 -- 72:00 Q --

57674.master.cl.nort yongli victoria execute.q -- 2 1 -- 72:00 Q --

57675.master.cl.nort yongli victoria execute.q -- 2 1 -- 72:00 Q --

57676.master.cl.nort yongli victoria execute.q -- 2 1 -- 72:00 Q --

kaien@victoria ~ $ a|grep execute|awk 'BEGIN{FS="."}{print $1}'

57661

57664

57665

57666

57667

57668

57669

57670

57671

57672

57673

57674

57675

57676

kaien@victoria ~ $ a|grep execute|awk 'BEGIN{FS="."}{print $1}'|sudo xargs qdel
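
An alternative sketch, assuming TORQUE's qselect is available on the master: it prints the ids of jobs matching the given attributes (here, every job owned by one user), so the grep/awk step is not needed:

kaien@victoria ~ $ qselect -u yongli | sudo xargs qdel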

Is this node working?

It seems a node is working fine, but ganglia says it is down?

Sometimes the ganglia service daemon dies without any good reason, so it has to be reset. You need to restart the ganglia monitoring daemon (gmond) on the node, and then the nodewatch service:

kaien@josquin ~ $ sudo rsh node18 /etc/init.d/gmond restart

Password:

* Shutting down ganglia gmond: ... [ ok ]

* Starting ganglia gmond: ... [ ok ]

kaien@josquin ~ $ sudo /etc/init.d/nodewatch restart

* Shutting down GANGLIA gmnod: ... [ ok ]

* Starting Nodewatch server ... [ ok ]
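
If several nodes show up as down at the same time, the gmond restart can be wrapped in a loop. This is only a sketch; it assumes the nodes are named node1 through node36 as on josquin, so adjust the names and range (including any zero-padding) to the cluster at hand:

kaien@josquin ~ $ for n in $(seq 1 36); do sudo rsh node$n /etc/init.d/gmond restart; done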

How do I stop this process?

One needs to rsh to the node, use the top command to find the PID of the stuck process, and then use ‘kill -9’ to kill it.

kaien@josquin ~ $ sudo rsh node13

Password:

Last login: Wed Jan 13 09:51:04 CST 2010 from master.cl.northwestern.edu on pts/0

node13 ~ # ps aux|grep vasp|grep kaien|grep RLl|awk '{print $2}'|xargs kill -9
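
When the stray processes all belong to one user and one program, pkill can do the same thing in a single step (a sketch, assuming pkill is available on the nodes):

node13 ~ # pkill -9 -u kaien vasp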

List of useful commands

which

Lists the full pathnames of the files that would be executed if the named commands were run. which searches the directories in the user's $PATH environment variable.

josquin ~ # which distpasswd

/usr/local/sbin/distpasswd

rsh

Executes a command on a remote host, or, if no command is specified, begins an interactive shell on the remote host using rlogin. The options can be specified before or after the host. Use of rsh has generally been replaced by ssh, which offers better security.

josquin ~ # rsh node11

Last login: Wed Jan 13 09:40:23 CST 2010 from master.cl.northwestern.edu on pts/0

node11 ~ #

showq

Displays information about active, eligible, blocked, and/or recently completed jobs.

josquin ~ # showq

ACTIVE JOBS--------------------

JOBNAME USERNAME STATE PROC REMAINING STARTTIME

32467 hahansen Running 8 7:07:58 Tue Jan 19 13:21:09

32478 hahansen Running 8 8:34:04 Tue Jan 19 14:47:15

32479 hahansen Running 8 8:34:14 Tue Jan 19 14:47:25

32352 daidhy Running 16 1:03:42:00 Sat Jan 16 16:55:11

31405 wacounts Running 16 1:06:34:30 Fri Jan 8 11:47:41

32486 wei Running 16 1:21:50:58 Tue Jan 19 15:04:09

32449 wei Running 16 2:12:32:22 Tue Jan 19 05:45:33

32450 wei Running 16 2:14:45:08 Tue Jan 19 07:58:19

32451 wei Running 16 2:18:12:09 Tue Jan 19 11:25:20

32432 kaien Running 16 3:16:50:50 Mon Jan 18 10:04:01

32434 kaien Running 16 3:19:35:07 Mon Jan 18 12:48:18

32427 daidhy Running 16 3:23:57:29 Sun Jan 17 21:10:40

32303 yongli Running 16 6:09:49:09 Tue Jan 19 03:02:20

32304 yongli Running 16 6:14:02:47 Tue Jan 19 07:15:58

32327 yongli Running 16 6:23:02:17 Tue Jan 19 16:15:28

32477 sjk648 Running 4 7:08:08:01 Tue Jan 19 13:21:12

32312 bmeredig Running 8 8:19:19:25 Wed Jan 13 12:32:36

32447 kaien Running 16 9:04:42:05 Mon Jan 18 21:55:16

32313 bmeredig Running 8 9:11:38:43 Thu Jan 14 04:51:54

32314 bmeredig Running 8 10:22:43:46 Fri Jan 15 15:56:57

32036 zhang Running 8 12:20:11:23 Sun Jan 17 13:24:34

32038 zhang Running 8 13:04:33:50 Sun Jan 17 21:47:01

32041 zhang Running 8 14:04:41:41 Mon Jan 18 21:54:52

32460 zhang Running 4 14:20:08:01 Tue Jan 19 13:21:12

24 Active Jobs 288 of 288 Processors Active (100.00%)

36 of 36 Nodes Active (100.00%)

IDLE JOBS----------------------

JOBNAME USERNAME STATE PROC WCLIMIT QUEUETIME

32488 daidhy Idle 16 10:00:00 Tue Jan 19 16:19:06

32487 daidhy Idle 16 20:00:00 Tue Jan 19 16:17:49

32441 wacounts Idle 16 10:10:00:00 Mon Jan 18 11:58:08

32328 yongli Idle 16 7:00:00:00 Wed Jan 13 11:38:47

32329 yongli Idle 16 7:00:00:00 Wed Jan 13 11:45:09

32330 yongli Idle 16 7:00:00:00 Wed Jan 13 11:45:09

32331 yongli Idle 16 7:00:00:00 Wed Jan 13 11:45:09

32332 yongli Idle 16 7:00:00:00 Wed Jan 13 11:45:09

32333 yongli Idle 16 7:00:00:00 Wed Jan 13 11:47:17

32334 yongli Idle 16 7:00:00:00 Wed Jan 13 11:47:17

32335 yongli Idle 16 7:00:00:00 Wed Jan 13 11:47:17

32336 yongli Idle 16 7:00:00:00 Wed Jan 13 11:47:17

31363 daidhy Idle 16 12:02:00:00 Wed Dec 30 09:59:33

32315 bmeredig Idle 8 15:00:00:00 Wed Jan 13 02:10:02

32316 bmeredig Idle 8 15:00:00:00 Wed Jan 13 02:10:06

32317 bmeredig Idle 8 15:00:00:00 Wed Jan 13 02:10:10

32042 zhang Idle 8 15:00:00:00 Sat Jan 9 22:29:18

32043 zhang Idle 8 15:00:00:00 Sat Jan 9 22:29:18

32044 zhang Idle 8 15:00:00:00 Sat Jan 9 22:29:18

32480 zhang Idle 8 15:00:00:00 Tue Jan 19 14:30:23

32481 zhang Idle 8 15:00:00:00 Tue Jan 19 14:31:11

32482 zhang Idle 8 15:00:00:00 Tue Jan 19 14:31:39

32483 zhang Idle 8 15:00:00:00 Tue Jan 19 14:32:13

32484 zhang Idle 8 15:00:00:00 Tue Jan 19 14:33:02

32485 zhang Idle 8 15:00:00:00 Tue Jan 19 14:34:07

25 Idle Jobs

BLOCKED JOBS----------------

JOBNAME USERNAME STATE PROC WCLIMIT QUEUETIME

32305 yongli Hold 16 7:00:00:00 Wed Jan 13 01:57:24

32306 yongli Hold 16 7:00:00:00 Wed Jan 13 01:57:24

32307 yongli Hold 16 7:00:00:00 Wed Jan 13 01:57:24

32308 yongli Hold 16 7:00:00:00 Wed Jan 13 01:57:24

32309 yongli Hold 16 7:00:00:00 Wed Jan 13 01:57:24

32310 yongli Hold 16 7:00:00:00 Wed Jan 13 01:57:24

32321 yongli Hold 16 7:00:00:00 Wed Jan 13 11:29:32

32322 yongli Hold 16 7:00:00:00 Wed Jan 13 11:29:32

32323 yongli Hold 16 7:00:00:00 Wed Jan 13 11:29:32

32324 yongli Hold 16 7:00:00:00 Wed Jan 13 11:29:32

Total Jobs: 59 Active Jobs: 24 Idle Jobs: 25 Blocked Jobs: 10

pbsnodes

PBS node manipulation.

josquin ~ # pbsnodes node11

node11

state = job-exclusive

np = 8

ntype = cluster

jobs = 0/32038.master.cl.northwestern.edu, 1/32038.master.cl.northwestern.edu, 2/32038.master.cl.northwestern.edu, 3/32038.master.cl.northwestern.edu, 4/32038.master.cl.northwestern.edu, 5/32038.master.cl.northwestern.edu, 6/32038.master.cl.northwestern.edu, 7/32038.master.cl.northwestern.edu

status = opsys=linux,uname=Linux node11 2.6.27-gentoo-r7-osmp #1 SMP Sun Mar 1 09:32:55 CST 2009 x86_64,sessions=11451193,nsessions=2,nusers=1,idletime=4934861,totmem=50160144kb,availmem=48218196kb,physmem=32697532kb,ncpus=8,loadave=8.00,netload=47459498596,state=free,jobs=32038.master.cl.northwestern.edu,rectime=1263942862
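
Two other pbsnodes operations that come up in practice are marking a flaky node offline, so that the scheduler stops placing new jobs on it, and clearing that mark once the node is healthy again:

josquin ~ # pbsnodes -o node11

josquin ~ # pbsnodes -c node11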

qstat

Shows the status of PBS batch jobs.

josquin ~ # qstat -n 32038

master.cl.northwestern.edu:

Req'd Req'd Elap

Job ID Username Queue Jobname SessID NDS TSK Memory Time S Time

-------------------- -------- -------- ---------- ------ ----- --- ------ ----- - -----

32038.master.cl.nort zhang josquin cabnh5-1-r 1145 1 1 -- 360:0 R 43:29

node11/7+node11/6+node11/5+node11/4+node11/3+node11/2+node11/1+node11/0
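
To restrict the listing to a single user's jobs, qstat also accepts -u:

josquin ~ # qstat -u zhang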

http://ars.userfriendly.org/cartoons/?id=20020726
