Security update regenerates stale SSH ECDSA host key

Peter Lieven discovered a problem with TurnKey 13.0: the OpenSSH ECDSA host key is not regenerated on firstboot the way the RSA and DSA host keys are.

We've issued a signed hotpatch to TurnKey Core 13.0 that regenerates the ECDSA SSH host key. TurnKey deployments that have not disabled automatic security updates (it's on by default) will have their ECDSA SSH host key regenerated automatically within the next 24 hours.

If you don't want to wait for cron-apt to install the security update, you can install the hotpatch immediately by executing this command as root:


If you've turned off security updates, you can regenerate the ECDSA SSH host key by executing the following commands in a root shell:

rm -f /etc/ssh/ssh_host_ecdsa_key*
dpkg-reconfigure openssh-server

Which is essentially what the hotpatch does.

Warning: Remote host identification has changed

If you're using an SSH client that supports ECDSA, then after regenerating the ECDSA host key you may see the following error message when trying to log in to your server:

~$ ssh
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
 IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
 Someone could be eavesdropping on you right now (man-in-the-middle attack)!
 It is also possible that a host key has just been changed.
 The fingerprint for the ECDSA key sent by the remote host is
 Please contact your system administrator.
 Add correct host key in /home/liraz/.ssh/known_hosts to get rid of this message.
 Offending ECDSA key in /home/liraz/.ssh/known_hosts:1506
 ECDSA host key for has changed and you have requested strict checking.
 Host key verification failed.

Getting rid of this error is simple:

ssh-keygen -R <hostname>

This removes all keys belonging to hostname from your $HOME/.ssh/known_hosts file.

Why was this update necessary?

The SSH host key is supposed to be secret. If a stale key is left over from the build process, active MITM attacks become possible under certain conditions.

SSH host keys are the login shell equivalent of SSL certificates. They allow you to know that you are connecting to your server and not to some other server that is performing a man-in-the-middle attack on your SSH session.

The ramifications are limited by the fact that your SSH client needs to support ECDSA and be configured to prefer it. Even then, an attacker couldn't have used this to passively eavesdrop on your SSH connections: taking advantage of it would have required rerouting network traffic in an active man-in-the-middle attack. Passive eavesdropping alone would not be enough to break the encryption, because unique encryption keys are negotiated for every session.

Does my SSH client support ECDSA?

Only users whose SSH clients support ECDSA and are configured to prefer it over RSA or DSA could have been affected by this.

OpenSSH gained ECDSA support in version 5.7. At the time of writing, Windows SSH clients such as PuTTY and WinSCP still don't have ECDSA support. Newer versions of Tera Term do support ECDSA, however.
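
If you're unsure about your own client, one quick check is to compare its version against 5.7. Here's a hedged sketch of that (the helper names `version_ge`, `openssh_version` and `supports_ecdsa` are made up for illustration); note that on OpenSSH 6.3 and later you can also run `ssh -Q key` to list the supported key types directly:

```shell
#!/bin/sh
# Sketch: decide whether the local OpenSSH client is new enough for ECDSA.
# ECDSA support landed in OpenSSH 5.7.

# version_ge A B: succeed if dotted version A >= B (numeric compare per field)
version_ge() {
    [ "$(printf '%s\n%s\n' "$2" "$1" | sort -t. -k1,1n -k2,2n | head -1)" = "$2" ]
}

# Extract "major.minor" from ssh's version banner, which goes to stderr,
# e.g. "OpenSSH_6.0p1 Debian-4, OpenSSL ..."
openssh_version() {
    ssh -V 2>&1 | sed 's/^OpenSSH_\([0-9]*\.[0-9]*\).*/\1/'
}

supports_ecdsa() {
    version_ge "$(openssh_version)" 5.7
}
```

For example, `supports_ecdsa && echo "ECDSA capable"` would print the message on OpenSSH 6.0 but not on 5.3.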

Should I do anything else?

If you used an SSH client that defaults to ECDSA and you also use password-based authentication with SSH, you may want to consider changing your passwords, assuming you are worried about the type of attacker that can reroute network traffic and would target your SSH sessions with an active man-in-the-middle attack.

How did this happen?

Support for ECDSA keys was new in Debian Wheezy. The script that runs on firstboot regenerated only the RSA and DSA keys; it was too specific. To prevent this from happening again I committed a fix that makes SSH key regeneration more generic.
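
A more generic regeneration step along those lines might look like the following sketch (an illustration of the idea, not the actual inithooks code; `remove_host_keys` and `regen_host_keys` are hypothetical names): instead of hardcoding rsa and dsa, remove every host key and let openssh-server recreate whichever types the installed version supports.

```shell
#!/bin/sh
# Sketch: generic firstboot SSH host key regeneration.

# remove_host_keys DIR: delete all SSH host keys (of any type) under DIR
remove_host_keys() {
    rm -f "$1"/ssh_host_*_key "$1"/ssh_host_*_key.pub
}

regen_host_keys() {
    remove_host_keys /etc/ssh
    # dpkg-reconfigure recreates any missing host keys, including
    # key types (like ECDSA) that didn't exist when the script was written
    dpkg-reconfigure openssh-server
}
```

The glob covers RSA, DSA, ECDSA and any future key types, which is exactly what makes the approach future-proof.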

I've also added an item to our release checklist: go over the changelogs of the major packages that go into Core more carefully, to try to spot any changes that could have security ramifications.

Many thanks again to Peter Lieven for discovering this and reporting it.


vuser1's picture

Hotpatch does not work for appliances I created with TKLDev. I had to re-create the key manually.

The problem is line 4:

   if ssh-keygen -l -f $keyfile | grep -q 'root@fab-dev'; then

My appliance had 'root@tkldev', not the 'root@fab-dev' substring. I think all custom appliances are affected. Looks like there should be 2 ifs :)

John Carver's picture

I can confirm that the hotpatch did not work on my home-built Ansible appliance.  For me, the offending key contained the string 'root@tkldev-wheezy-amd64', which comes from the hostname of my build host.  I chose to set up four VMs for tkldev so that I could build 32- and 64-bit ISOs for both squeeze and wheezy.  I changed the hostname from the default of 'tkldev' so that I could always tell which host I was working on.

Since you can't rely on always finding either 'fab-dev' or 'tkldev', the only test I can think of would be a mismatch between the key and the hostname.  My other keys contain 'root@ansible' so they are presumably okay.

Information is free, knowledge is acquired, but wisdom is earned.

Liraz Siri's picture

John, the mismatch between the key and the hostname is also potentially unsafe because the hostname may have changed since the ECDSA key was created.

I think the best we can do is to match variations of tkldev and fab-dev in the offending key.
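
A broader match along those lines could be sketched like this (a hypothetical pattern, not the shipped hotpatch; `looks_like_build_key` is a made-up name):

```shell
#!/bin/sh
# Sketch: treat the host key as a stale build-time key if its fingerprint
# comment mentions any known build-host name: fab-dev, or tkldev and
# variants of it (tkldev-wheezy-amd64, etc).

# looks_like_build_key FINGERPRINT_LINE
# $1 is the output of: ssh-keygen -l -f /etc/ssh/ssh_host_ecdsa_key.pub
looks_like_build_key() {
    echo "$1" | grep -Eq 'root@(fab-dev|tkldev[^ ]*)'
}
```

As John notes, this still misses builds done on hosts renamed to something else entirely, such as his 'root@tkldev-wheezy-amd64' only matching because it starts with 'tkldev'.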

John Carver's picture

That reminds me that I've seen this discrepancy between tkldev and fab-dev while experimenting with generating SSL keys. I believe the problem is that the build hostname is being picked up (in this case tkldev, but mine was different) and tested for a match with 'fab-dev'.  I think the patch I submitted took care of the SSL key issue, but if not, the one I'm working on to generate a SAN (subject alternative name) SSL key will.  Now that I'm back from vacation, I hope to find a few minutes to finish the work and test it properly.

Devs may want to scan for other cases where the code looks specifically for 'fab-dev' and change it to a more generic case.

Information is free, knowledge is acquired, but wisdom is earned.

Liraz Siri's picture

FWIW, I made the hotpatch rely on matching 'fab-dev' within the ecdsa key not because it's the right thing to do but because it was better than matching a specific fingerprint or timestamp, which also wouldn't have worked with custom builds. It's a hack.

I grepped through the common/ repo and couldn't find any other instances where we rely on the hostname, certainly not on fab-dev.

In case you're wondering where the name fab-dev comes from, it's the hostname for the master tkldev instance. It's basically tkldev with the latest versions of all the repositories + various security sensitive credentials needed to upload images to S3 or to the master rsync server.

The way we build TurnKey is that we first test the Core build and conversions on fab-dev, the tkldev master. Then we snapshot fab-dev and clone it as many times as needed using cloudtask so that the build/conversions run in parallel.

vuser1's picture

Custom appliances are still bugged, yes? Core is not fixed yet:

root@tkldev products/core# git pull
Already up-to-date.

Should we wait for the TurnKey devs to fix it?

Liraz Siri's picture

hotpatches don't go to the Core repo, they have their own repo:

I overlooked custom tkldev integrations when I was testing the hotpatch yesterday. I even overlooked 32-bit archs, and then realized my mistake and made the patch more generic, but it looks like I didn't do a good enough job.

FWIW, I like the idea of detecting whether the key needs to be updated by comparing it to the current hostname. Unfortunately, if we do that, anyone who has changed their hostname since firstboot will have their key regenerated, which would probably not be a good thing.

So maybe I'll just update the hotpatch so that it looks for other hardwired strings in the ecdsa key. Such as tkldev, etc.

vuser1's picture

> hotpatches don't go to the Core repo

I understand. I mean: can you (or I) fix the build process so that new appliances do not need the hotpatch?

Liraz Siri's picture

If you rebuild from source you shouldn't need the hotpatch because I submitted a fix to inithooks.

Unfortunately I think I forgot to ask Alon to update the archive with the fixed inithooks, so if you rebuild in TKLDev you'll still get the old package.

As a dirty workaround you can drop the updated 10regen-sshkeys file from the fixed inithooks into overlay/usr/lib/inithooks/firstboot.d.

FWIW, in the next version of TKLDev we'll roll in a feature that makes it easy to build packages from source based on the scripts we use to generate the archive so that you don't have to wait on us to do that.

Liraz Siri's picture

What I meant was that you might want to consider building inithooks from source, because inithooks was where the ssh regen script was coming from.

By default, if you haven't set up your own package repository, TKLDev will get inithooks from the archive, which isn't in sync with the source code on GitHub. In other words, the inithooks package in the archive isn't up-to-date.

Alon is the archive master and I reminded him just now we hadn't updated the inithooks package yet and that we really should so that TKLDevers don't need to use strange and unusual workarounds.

Also, the next version of TKLDev will include functionality that auto-builds Debian packages from the latest versions of the source code, so that you can test the latest and greatest without having to rely on us to update packages on the archive. It also makes it easier to test your own patches or custom packages. Much easier.

Liraz Siri's picture

Brilliant idea, Peter. I hadn't considered that. I'm looking into it.

Liraz Siri's picture

Peter's suggestion of comparing timestamps might be made to work, but it's a PITA to test whether it does, and I'm very wary of rushing anything as potentially destructive as an automatic security update without thoroughly testing it from all angles.

Assuming we update inithooks so that new builds are not affected by this, I'd like feedback on how important you feel it is for a new version of the hotpatch to fix this issue for custom appliances that have already been deployed.

Presumably this would be mainly useful if there is a large deployment of nodes that couldn't be easily updated using another method?

John Carver's picture

First you have to select a file that is guaranteed to be present in all appliances and is generated at build-time.  inithooks.conf wouldn't work for me because it isn't generated for my Ansible appliance (possibly this is a bug, but it didn't seem necessary).

Second, you have to allow for some time variation, because not all build-time timestamps are the same.  For example, the following is a listing of my SSH key timestamps.

-rw-r--r--  1 root root 1.7K Jun 29 21:34 ssh_config
-rw-------  1 root root  668 Jul 26 02:11 ssh_host_dsa_key
-rw-r--r--  1 root root  602 Jul 26 02:11
-rw-------  1 root root  227 Jul 26 01:55 ssh_host_ecdsa_key
-rw-r--r--  1 root root  186 Jul 26 01:55
-rw-------  1 root root 1.7K Jul 26 02:11 ssh_host_rsa_key
-rw-r--r--  1 root root  394 Jul 26 02:11
-rw-r--r--  1 root root 2.5K Jul 26 01:56 sshd_config

Here the build started sometime before 1:55, sshd_config was updated during the build at 1:56, and firstboot occurred a few minutes later, around 2:11.
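
The timestamp heuristic under discussion could be sketched as follows (illustration only; `mtime` and `ecdsa_is_stale` are hypothetical names, the 60-second tolerance is arbitrary, and `stat -c %Y` assumes GNU coreutils as on Debian):

```shell
#!/bin/sh
# Sketch: flag the ECDSA host key as a build-time leftover if it is older
# than the (firstboot-regenerated) RSA key by more than a tolerance.

# mtime FILE: file modification time in seconds since the epoch
mtime() {
    stat -c %Y "$1"
}

# ecdsa_is_stale ECDSA_KEYFILE RSA_KEYFILE
ecdsa_is_stale() {
    [ $(( $(mtime "$2") - $(mtime "$1") )) -gt 60 ]
}
```

In the listing above, the 16-minute gap between ssh_host_ecdsa_key (01:55) and ssh_host_rsa_key (02:11) would trip this check, while keys regenerated together would not.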

Is there any reason why we would expect the user@hostname to be different for the three keys (rsa, dsa, ecdsa)?  All three keys should have been rebuilt during firstboot with the then-current hostname.  If the hostname changes later, the three keys will still match.  However, if the hotfix regenerates only the ecdsa key and the hostname has changed, then the keys will still not match.  The only way to guarantee matching keys would be to have the hotfix regenerate all three.  Because of the difficulty of achieving consistent results, I would recommend not issuing another hotfix.  By now, every TKLDev user should be aware of the problem.

Information is free, knowledge is acquired, but wisdom is earned.

Liraz Siri's picture

I tend to agree. While I would have preferred a perfect hotpatch that worked everywhere, it doesn't seem to be enough of a problem to be worth the trouble. The benefit is low relative to the risk of making a mistake and getting it wrong, and the amount of testing we would need to do to make absolutely sure we don't.
