Remote access to PCs in terminal rooms
Introduction
A script running on a cluster node can access another cluster node with ssh without a password or keys, e.g.:
# from cn10.science.ru.nl you can run a command, e.g. "uptime",
# on cn11 with:
ssh cn11.science.ru.nl uptime
In general, you would actually use the Slurm batch system
to start jobs on the cluster.
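For completeness, a minimal Slurm job script could look like this (the job name and time limit are placeholder values):
#!/bin/bash
#SBATCH --job-name=example
#SBATCH --time=00:05:00
# this command runs on the cluster node that Slurm allocates
uptime
# submit the script with:
sbatch example.sh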
Access between cluster nodes and lilo7, lilo8, or PCs
in terminal rooms requires some form of authentication:
a password - something you do not want in batch scripts
ssh keys managed by C&CZ - used for logging into lilo7 from outside campus
ssh keys managed by the user - to go from PC or login server to a cluster node
Kerberos tickets - needed to log into a PC remotely from a script
For communication between cluster nodes and login servers, ssh keys
can be used. When you log in with ssh from outside the campus without
eduVPN, you can only use lilo7, and the public key
is managed by C&CZ, i.e., your ~/.ssh/authorized_keys is ignored.
Once you are inside the science.ru.nl domain, you can manage
your own ssh keys. In particular, if you create a key (using ssh-keygen)
without a passphrase, this key will be ignored by lilo7
for access from outside the campus, but it will let you use
ssh to go from, e.g., a PC in a terminal room to lilo7
or a cluster node.
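For example, such a key can be created and authorized as follows (ed25519 is just one possible key type; because your home directory is shared between the login servers and cluster nodes, appending the public key to your own ~/.ssh/authorized_keys is sufficient):
# create a key without a passphrase (-N "")
ssh-keygen -t ed25519 -N "" -f ~/.ssh/id_ed25519
# authorize the key for your own account
cat ~/.ssh/id_ed25519.pub >> ~/.ssh/authorized_keys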
On the other hand, if you log in from outside with ssh-agent forwarding:
ssh -A lilo7.science.ru.nl
# You can now go to a cluster node without password, assuming
# your account has been granted access to cluster nodes:
ssh cn10.science.ru.nl
When running batch jobs that require access to other nodes,
using an ssh-agent is not convenient. For example, after
a reboot, the ssh-agent process is gone, and your script can
no longer use keys protected by passphrases. Using a hardcoded
password in your script is possible, but not very secure, and not
recommended.
You only need to know about Kerberos tickets when you need
to log into a PC in a terminal room from a script running on
a cluster node or login server.
The problem with remote access to PCs
When you log into a PC in a terminal room from a login server (e.g. lilo7) or
cluster node (e.g. cn10), you may be prompted for a password,
even if you have set up ssh keys without a passphrase. The problem is
that your /home/<user> directory may not be mounted, and so
your ssh keys will not be available during logon. After you
enter your password, the automounter will mount your home
directory. So, if you log in a second time to the same PC, your home directory
is available, your ssh keys will work, and you may not
need to enter your password. However, after a timeout or a reboot, your
home directory may again not be available.
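You can see this effect yourself (hg023pc04 is just an example host):
# first login: password prompt, home directory gets mounted
ssh hg023pc04.science.ru.nl true
# immediately afterwards: ssh keys are found, no prompt
ssh hg023pc04.science.ru.nl true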
The solution is to use ssh with Kerberos forwarding.
Kerberos
Kerberos is a network authentication protocol developed by
MIT.
It can be used together with ssh. When you log in with
a password, a Kerberos ticket is generated. The following
commands manage Kerberos tickets:
klist - shows available Kerberos tickets
# when no ticket exists you get something like:
# klist: No credentials cache found (filename: /tmp/krb5cc_15010)
kinit - creates a Kerberos ticket
# it will ask for your password, e.g., for user gerritg
Password for gerritg@SCIENCE.RU.NL:
# after you enter your correct login password
klist
# shows something like this:
Ticket cache: FILE:/tmp/krb5cc_15010
Default principal: gerritg@SCIENCE.RU.NL
Valid starting Expires Service principal
12/15/2024 16:19:26 12/18/2024 16:19:26 krbtgt/SCIENCE.RU.NL@SCIENCE.RU.NL
renew until 12/23/2024 16:19:26
# You can remove the ticket with:
kdestroy - deletes the Kerberos tickets
Once you have a Kerberos ticket, you can log into a PC in a terminal room without entering a password, using
ssh Kerberos forwarding
The PCs have names like hg023pc04.science.ru.nl.
In your ~/.ssh/config file enter the following:
Host *.science.ru.nl hg*pc*
    StrictHostKeyChecking no
    GSSAPIAuthentication yes
    GSSAPIDelegateCredentials yes
Now, when you have a Kerberos ticket on lilo7, you can
log into a PC, e.g. hg023pc04.science.ru.nl, without a
password, even before your ssh keys are available.
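For example, to verify this interactively from lilo7:
# get a ticket (asks for your password once)
kinit
# should now log in without a password prompt
ssh hg023pc04.science.ru.nl hostname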
The last problem to solve is to have a Kerberos ticket in a script without an interactive session. This can be achieved by creating a Kerberos keytab file.
Kerberos keytab file
This is a file that contains a key derived from your password. It
can be used to generate a Kerberos ticket, which in turn
lets a script access a PC or server on the network, so the
file must not be readable by others. To also keep it out of
sight, you can put it in, e.g., ~/.ssh:
cd ~/.ssh
# start interactive tool
ktutil
ktutil: add_entry -password -p gerritg@SCIENCE.RU.NL -k 1 -e aes128-cts-hmac-sha1-96
# here "gerritg" must be replaced by your username
# it is important to keep SCIENCE.RU.NL in upper case
# You will be prompted for your login password:
Password for gerritg@SCIENCE.RU.NL:
# After entering the password, write the keytab file with
ktutil: write_kt science.keytab
ktutil: q
# q for quit
# now double check the permissions of the file
ls -l science.keytab
# should give something like:
-rw------- 1 gerritg gerritg 65 Dec 15 20:44 science.keytab
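If the permissions are more permissive than this, tighten them yourself:
chmod 600 ~/.ssh/science.keytab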
In a script that needs to access a PC from a login server or cluster node, generate a Kerberos ticket with:
#!/bin/bash
# generate Kerberos ticket
kinit -k -t ~/.ssh/science.keytab gerritg@SCIENCE.RU.NL
# This should now work:
ssh hg023pc04.science.ru.nl uptime # or something even more useful
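As a sketch of how this fits into a larger batch job, the script below collects the uptime of several PCs (all PC names except hg023pc04 are hypothetical):
#!/bin/bash
# generate a Kerberos ticket from the keytab, then visit several PCs
kinit -k -t ~/.ssh/science.keytab gerritg@SCIENCE.RU.NL
for pc in hg023pc04 hg023pc05 hg023pc06; do
    ssh "$pc.science.ru.nl" uptime
done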
Database access
Assume a PostgreSQL database was properly set up on a cluster node, and the
settings are stored in ~/.DB_CONFIG, which looks like this:
export PGHOST=cn143
export PGPORT=16034
export PGDATA=/scratch/gerritg/DB
export PGUSER=gerritg
export DB_SERVER=/home/gerritg/pgserver/cn143
export DB_LOGFILE=/home/gerritg/pgserver/cn143/cn143_16034.log
export DB_CONFIG=/home/gerritg/pgserver/cn143/DB_CONFIG
In your .bashrc these settings must be sourced:
[ -f ~/.DB_CONFIG ] && . ~/.DB_CONFIG
In the PC rooms the firewall prevents direct access to a PostgreSQL server
running on a cluster node. The script db_connect.sh solves this
by setting up an ssh tunnel with the autossh command.
This script checks the $PGREMOTE environment variable: if
it is set, a tunnel will be used. For this to work, add
the following lines to your .bashrc, after sourcing ~/.DB_CONFIG,
so that the $PGHOST variable has been set:
# check whether we are in a terminal room by testing the $HOSTNAME
if [[ "$HOSTNAME" =~ hg...pc.. ]]; then
    # we are on a PC with no high ports to cluster nodes
    export PGREMOTE=$PGHOST
    export PGHOST=localhost
fi
If you now run db_connect.sh, it will set up a tunnel to $PGREMOTE,
so that psql and other programs using the database can connect
to localhost. The script first tests access with the
/usr/bin/pg_isready command, and only tries the tunnel if
that command fails. This way, it does not hurt to run db_connect.sh
if you happen to be on a cluster node, or if a tunnel was already set up.
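For reference, a minimal sketch of the logic described above could look like this; the actual db_connect.sh may differ in its details:
#!/bin/bash
# assumes PGHOST, PGPORT and (on a PC) PGREMOTE are set by your .bashrc
if pg_isready -h "$PGHOST" -p "$PGPORT" >/dev/null 2>&1; then
    : # database already reachable, nothing to do
elif [ -n "$PGREMOTE" ]; then
    # forward the local port to the PostgreSQL server on $PGREMOTE;
    # -f: go to background, -M 0: no monitoring port, -N: no remote command
    autossh -f -M 0 -N -L "${PGPORT}:localhost:${PGPORT}" "$PGREMOTE"
fi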