SSH

Tips for Configuring OpenSSH

  • Create your .ssh folder under your $HOME directory

      • cd

      • mkdir .ssh

  • Ensure your .ssh directory has correct permissions

      • cd

      • chmod -R 0700 .ssh

    • Create your public and private keys. I always use RSA for the keytype, but you may also use DSA (value: dsa).

NOTE: It is much better, and more secure, to use RSA keys instead of DSA. The only reasons to use DSA keys are backward compatibility and compliance with the FIPS (Federal Information Processing Standard) 186-2 specification, which requires all DSA keys to be 1024 bits in size. With RSA, the default key size is 2048 bits and is adjustable; the larger the key size, the stronger the encryption.
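
For example, if you want a stronger key than the default, the "-b" option sets the RSA key size in bits (a sketch only; the 4096-bit size and key path here are illustrative):

ssh-keygen -t rsa -b 4096 -N '' -f ~/.ssh/id_rsa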

      • cd

      • cd .ssh

      • ssh-keygen -t rsa -N '' -f ./id_rsa

        • This will create a public and private key pair of type RSA. The "-N" option specifies the passphrase, which in this case is an empty single-quoted string (''). The "-f" option specifies the name of the private key file; the public key gets a ".pub" suffix (i.e. id_rsa & id_rsa.pub).

    • Create your authorized_keys file. I always put my local RSA public key into my authorized_keys file, then just copy my authorized_keys file to all the other systems.

      • cat id_rsa.pub >> authorized_keys

      • cd

      • chmod -R 0700 .ssh
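
As a quick sanity check (the exact listing will vary), the directory should now contain the key pair and the authorized_keys file, all owned by you and restricted to your user:

ls -la ~/.ssh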

  • NOTE: you may just copy your private & public keys (along with your authorized_keys file) to all remote systems. This will allow full bi-directional ssh between the remote systems as your user id.
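
For example, pushing the files by hand might look like the following (remote-host is a placeholder, and the remote ~/.ssh directory is assumed not to exist yet):

ssh remote-host 'mkdir -p .ssh && chmod 700 .ssh'

scp ~/.ssh/id_rsa ~/.ssh/id_rsa.pub ~/.ssh/authorized_keys remote-host:.ssh/

ssh remote-host 'chmod -R 0700 .ssh'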

  • NOTE: If you have users in your organization who do not know how to manage their keys, you as administrator can do this for them. Create a key pair (private & public) for each user, copy the user's public key into an authorized_keys file, and store it all in a private directory. You may then use something like Puppet to push those keys out to the remote systems. For example:

    • /var/keys --> root owned keys directory

    • /var/keys/user --> directory to hold keys for user

    • ssh-keygen -t rsa -N '' -C "username" -f /var/keys/user/id_rsa

    • cp /var/keys/user/id_rsa.pub /var/keys/user/authorized_keys

    • Then just copy the entire contents of /var/keys/user/* to remote-host:~user/.ssh/, set that user as the owner, and chmod the directory (.ssh) and all files within to 0700.
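
Done by hand rather than through Puppet, that copy step might look roughly like this (remote-host and user are placeholders, and this sketch assumes you can ssh to the remote host as root):

scp /var/keys/user/* root@remote-host:~user/.ssh/

ssh root@remote-host 'chown -R user ~user/.ssh && chmod -R 0700 ~user/.ssh'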

    • Now you can copy your ~/.ssh/authorized_keys file to the remote systems and ssh to them without being prompted for your password. If you copied the file and still get prompted for a password, check the permissions on the remote system's ~/.ssh directory and ~/.ssh/authorized_keys file. Everything should be 0700 for permissions.
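
If the permissions turn out to be the problem, something like this usually clears it up (remote-host again being a placeholder):

ssh remote-host 'chmod -R 0700 ~/.ssh'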

    • We've got some older Solaris systems in our shop that are using an old version of SSH2. Here's how I convert keys between OpenSSH (which is what Linux uses) and SSH2 (which is what Solaris is using).

    • Convert an existing openssh public key for use on Solaris ssh2. In this example my two hosts are tom1-lnx (a linux server running openssh) and tom2-sun (a solaris 8 server running ssh2). In all examples I am using my login account of "sandholm".

      • On the Linux host tom1-lnx:

        • cd .ssh

        • ssh-keygen -e

          • This will execute the "ssh-keygen" program. The "-e" option tells it to export the specified public key in SSH2 format.

      • You will be prompted for the location of the "id_rsa" or "id_dsa" file: (i.e. /home/sandholm/.ssh/id_rsa, or /home/sandholm/.ssh/id_dsa)

      • The output produced will look similar to:

---- BEGIN SSH2 PUBLIC KEY ----

Comment: "1024-bit DSA, converted from OpenSSH by sandholm@tom1-lnx"

AAwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwww

X8m9Yxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

1Qbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb

YDcccccccccccccccccccccccccccccccccccccccccccccccccccccccccc

Flffffffffff < CONTENT REMOVED FOR SECURITY > ffffffffffffffffffffffff

aXeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee

zmtttttttttttttttttttttttttttttttttttttttttttttttttttttttttttttttttttttttttttttttttttttttttttttttttttttt

z9vpppppppppppppppppppppppppppppppppppppppppppppppppppp

b3ggggggggggggggggggggggggggggggggggggggYc=

---- END SSH2 PUBLIC KEY ----

      • Capture this output and put it into a file called tom1-lnx.pub in your $HOME/.ssh2 directory on the Solaris system. This will be your public key file. Name the file after the hostname the public key came from.

      • On the Solaris system, you must update your .ssh2/authorization file to contain:

        • key tom1-lnx.pub

      • Ensure you set permissions on the new file to 0600.

      • You should now be able to ssh from Linux to Solaris.
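
The same Linux-to-Solaris steps can also be run non-interactively; a rough sketch using the hostnames from this example, and assuming your OpenSSH public key is ~/.ssh/id_rsa.pub:

ssh-keygen -e -f ~/.ssh/id_rsa.pub > /tmp/tom1-lnx.pub

scp /tmp/tom1-lnx.pub tom2-sun:.ssh2/tom1-lnx.pub

ssh tom2-sun 'echo "key tom1-lnx.pub" >> ~/.ssh2/authorization && chmod 600 ~/.ssh2/authorization ~/.ssh2/tom1-lnx.pub'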

  • Create a Solaris ssh2 public key for use with Linux OpenSSH.

      • On tom2-sun:

        • cd .ssh2

        • ssh-keygen -t dsa

          • NOTE: you may use keytypes other than dsa, such as rsa. This command will generate DSA public and private key files. When prompted for a passphrase, leave it blank; a blank passphrase is what allows key-based remote logins without any interactive prompt.

      • Copy the id_dsa_2048_a.pub file to tom1-lnx:~/.ssh/id_dsa_2048_a_tom2-sun.pub

      • On tom1-lnx (Linux system), in your .ssh directory:

        • ssh-keygen -i -f ./id_dsa_2048_a_tom2-sun.pub >> authorized_keys

          • This will run the "ssh-keygen" program. The "-i" option tells it to import the SSH2-format public key and convert it to OpenSSH format. The "-f" option specifies the filename that holds the public key. The append operator (>>) appends the program's output to the authorized_keys file.

      • You should now be able to ssh from Solaris to Linux.
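
Likewise, the Solaris-to-Linux direction condenses to roughly the following, run on tom2-sun (the key filename follows this example and may differ on your system):

scp ~/.ssh2/id_dsa_2048_a.pub sandholm@tom1-lnx:.ssh/id_dsa_2048_a_tom2-sun.pub

ssh sandholm@tom1-lnx 'ssh-keygen -i -f ~/.ssh/id_dsa_2048_a_tom2-sun.pub >> ~/.ssh/authorized_keys'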

SSHD configuration

    • There isn't much you should change in the /etc/ssh/sshd_config file. I would generally discourage making changes to this file that are unique to a particular host. By keeping a default sshd_config file, you simplify system setup and can easily manage the file with a configuration management tool such as Puppet. I've seen security documents recommend configuring the ListenAddress directive so sshd only listens on a single, given IP address. If you do this, you've locked the sshd_config file to that particular host; if you have hundreds of hosts, imagine the headache of maintaining hundreds of unique sshd_config files. If you need to restrict which networks sshd will accept connections from, consider using the /etc/hosts.allow & /etc/hosts.deny files instead. You can configure the ssh service (via /etc/hosts.allow) to accept connections only from a given network address, which accomplishes the same thing as setting the ListenAddress directive in sshd_config. You are then able to use a common, generic sshd_config file on all hosts, and your /etc/hosts.allow file can also be distributed as a common, generic file. The other problem with locking the sshd_config ListenAddress directive to a single interface is: how do you connect when that interface fails? If you've got remote console access via IPMI, or some management board with a separate network interface, then fine. If the system is on a DMZ, of course I would NOT allow ssh access via the DMZ-facing network; rather, I'd expect you to have a maintenance network for ssh access to DMZ-based systems. For internal systems, I think that setting a restrictive ListenAddress directive is overkill and just makes remote management all the more difficult.
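
For example, the pair of files might contain entries along these lines (the 192.168.1.0 network is purely illustrative, and this assumes sshd was built with TCP wrappers support):

/etc/hosts.allow:

sshd: 192.168.1.0/255.255.255.0

/etc/hosts.deny:

sshd: ALL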

    • As for disabling remote root login: as a systems administrator, I would strongly discourage this. Unless you've got some kind of remote root worker agent that runs on your hundreds of remote servers, it's a real nightmare to ssh to the system as some non-root user, then su to root, just to do maintenance. Sure, you could write an Expect script to try to automate remote management, but now you've just complicated the whole remote access procedure. Instead, consider using the /etc/security/access.conf file to restrict where the root user is allowed to ssh in from. Configure an infrastructure machine, a bastion, that only your support staff have accounts on, then configure /etc/security/access.conf to only allow root login from that bastion. You can then use tools such as dsh (dancer shell/distributed shell) to globally manage your remote clusters of machines.
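
As a sketch, assuming a bastion host named bastion01 (a placeholder) and that the pam_access module is enabled for sshd, the relevant /etc/security/access.conf entries might look like this (first match wins, so the allow line comes first):

+ : root : bastion01

- : root : ALL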

Multiplexing Multiple ssh sessions over an existing one

If you run a lot of remote sessions to the same server, you can improve your performance by multiplexing new ssh sessions over an existing connection. The first session you open up to a server can then be shared by other sessions. This is particularly useful when you need to tunnel your ssh session into a datacenter.

If you don't already have a config file in the .ssh directory in your home directory, create it with permissions 600: readable and writeable only by you. Populate the file with:

Host *

ControlMaster auto

ControlPath ~/.ssh/master-%r@%h:%p

ControlMaster auto tells ssh to try to start a master if none is running, or to use an existing master otherwise. ControlPath is the location of a socket for the ssh processes to communicate among themselves. The %r, %h and %p are replaced with your user name, the host to which you're connecting and the port number—only ssh sessions from the same user to the same host on the same port can or should share a TCP connection, so each group of multiplexed ssh processes needs a separate socket.

To make sure it worked, start one ssh session and keep it running. Then, in another window, open another connection with the -v option.

For example:

Open the first session:

node1:~/.ssh# ssh -Y root@mary

Password:

Last login: Fri Mar 27 20:20:13 2009 from 192.168.224.150

Have a lot of fun...

mary:~ #

Open the second session in another window:

node1:~# ssh -v mary echo "hi"

OpenSSH_4.3p2 Debian-9etch3, OpenSSL 0.9.8c 05 Sep 2006

debug1: Reading configuration data /root/.ssh/config

debug1: Applying options for *

debug1: Reading configuration data /etc/ssh/ssh_config

debug1: Applying options for *

debug1: auto-mux: Trying existing master

hi

node1:~#

...and you will find the second session has multiplexed over the first session's socket, as indicated by the "debug1: auto-mux: Trying existing master" entry in the debug output. You will also find the performance is much improved.
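
You can also see the shared connection on disk; with the ControlPath setting above, the master session leaves a socket whose name varies with user, host and port:

ls -l ~/.ssh/master-*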

If you have to connect to an old ssh implementation that doesn't support multiplexed connections, you can make a separate Host section:

Host node2.tsand.org

ControlMaster no

For more info, see man ssh and man ssh_config.