Keyturns wrote:

Hi, my TKL WordPress instance, recently sourced from the AWS Marketplace and running on AWS (t2.small), went to 100% CPU. The culprit was a process called 'yam', which appears to be a bitcoin miner. It looks like my TurnKey instance got hacked, but I do not know how. Has TKL seen such a hack before? What is the vector it used to get onto my system, so I can block it? I had restricted firewall access to Webmin, Webshell and Adminer to just my IP, so those were not wide open. WordPress admin is open.

I killed the 'yam' process and deleted what appeared to me to be suspicious files that I found looking around the system. There is a new folder in my '/root' dir named yam-yvg1900-M7v-linux64-generic with a create date of 2014. Here's a find command and what it found for 'yam':

root@wordpress ~# find / -name "*yam*"
/root/yam-yvg1900-M7v-linux64-generic
/root/yam-yvg1900-M7v-linux64-generic/yam-fcn.cfg
/root/yam-yvg1900-M7v-linux64-generic/linux64-generic/yam
/root/yam-yvg1900-M7v-linux64-generic/yam-dmd.cfg
/root/yam-yvg1900-M7v-linux64-generic/yam-max.cfg
/root/yam-yvg1900-M7v-linux64-generic/yam-bcn.cfg
/root/yam-yvg1900-M7v-linux64-generic/yam-qcn.cfg
/root/yam-yvg1900-M7v-linux64-generic/yam-xmr.cfg
/root/yam-yvg1900-M7v-linux64-generic/yam-mmc.cfg
/root/yam-yvg1900-M7v-linux64-generic/yam-myr.cfg
/root/yam-yvg1900-M7v-linux64-generic/yam-grs.cfg
/root/yam-yvg1900-M7v-linux64-generic/yam-pts.cfg
/root/yam-yvg1900-M7v-linux64-generic/yam-dvk.cfg
/proc/sys/kernel/yama
/lib/modules/3.16.0-4-amd64/kernel/drivers/net/hamradio/yam.ko

The files "/proc/sys/kernel/yama" and "/lib/modules/3.16.0-4-amd64/kernel/drivers/net/hamradio/yam.ko" appear to be legitimate Debian files, so I'm not deleting those, but I did delete the dir "yam-yvg1900-M7v-linux64-generic".

I'm guessing the infection vector is still there on my TKL WordPress appliance, so I want to close it. Any suggestions on how best to find the hole and then close it? Which log files do you suggest I review to get more info on this?

There's not much info on the 'yam' hack on the net, but there are some Stack Overflow links that appear related:

The suggestion on those Stack Overflow pages is to terminate the instance and start again - is this the TKL recommendation too?

Keyturns wrote:

FYI, there was also a miner.tgz file in my /root dir. These are the two suspicious files in my root directory:

-rw-r--r-- 1 root root 1418598 Jan 13  2017 miner.tgz
drwxr-xr-x 3 root root    4096 Jun 27  2014 yam-yvg1900-M7v-linux64-generic

OnePressTech wrote:

Hi Keyturns,

Caveat: I don't work for the Turnkey Linux organisation...I am just a TKLX VM user like you.

When it comes to security and security breaches a TKLX VM is simply a packaged Debian release with some security features enabled (like auto-patch and component password reset on first boot). The TKLX VM is no more or less secure than any other Debian VM.

Regarding how you got hacked...have you checked the WordPress vulnerability databases to ensure you are not using a vulnerable plug-in (in Dec 2017 a number of plug-ins changed ownership and the new owner put in a back door)?

If I were you I would start with a fresh VM, lock it down, and migrate WordPress data over manually.

I hope you get it all sorted.

Cheers,

Tim (Managing Director - OnePressTech)

Jeremy Davis wrote:

That really sucks! :( Apologies I'm a little slow responding (I've been on holidays). Thanks Tim for dropping in and sharing your suggestions! :)

A quick bit of googling suggests that Yam miner is a legitimate CPU cryptocurrency mining client. And the yam.ko (kernel module) is a (completely unrelated) legitimate kernel module (for use with Ham radio, as the path suggests). It is provided by the default Debian kernel package.

With all that in mind, it seems likely that your server was hacked and a CPU miner installed. It may have been done manually, but I'm more inclined to imagine it was a bot attack, so it's possible that other software was installed besides the miner. In fact, it's highly likely that the miner was the payload installed by some other malware. Therefore, Tim's advice (move to a new server) is sound.

Having said that, the fact that Yam is in your /root directory suggests that the attacker gained root access. So whilst it's still worth checking WordPress itself as per Tim's suggestions (he knows much more about WP than me), it seems unlikely that WordPress was the vector through which they gained access. A brute force attack seems much more likely, so making sure that a new server has a really good root password is super important. Better still, just use keys to access SSH.

If you use Webmin (and therefore need to set a password) you can still disable SSH password access. Please edit the SSH daemon config file (/etc/ssh/sshd_config) and look for the "PasswordAuthentication" option. If it's anything like the server I have in front of me, it has a "#" at the start of the line; remove that first. Then change the "yes" to a "no". I.e. change:

#   PasswordAuthentication yes

to:

PasswordAuthentication no

Then restart SSH like this:

service ssh restart

Please keep in mind though, that if you do that and lose your key file, you won't be able to log in via SSH! We can't help you recover it!
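
As a small safeguard, you can syntax-check the edited config before restarting (sshd -t is part of standard OpenSSH; it prints nothing when the file parses cleanly):

sshd -t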

When cleaning up the current server, you could try tools such as ClamAV (apt-get install clamav), Sophos AV (proprietary, but free as in beer), chkrootkit (apt-get install chkrootkit) and rkhunter (apt-get install rkhunter). But personally, I find it hard to ever completely trust a machine that has been compromised. It may still be worth running them on your server before you migrate though, to reduce the chances of any nasties following you...
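
As a minimal sketch of such a scan run (the paths handed to clamscan are examples only; point it at whatever you suspect):

apt-get install clamav chkrootkit rkhunter
freshclam                              # refresh ClamAV signatures (may already run as a daemon)
clamscan -r --infected /root /var/www  # recursive scan, only report infected files
chkrootkit                             # check for known rootkit signatures
rkhunter --update
rkhunter --check --sk                  # --sk skips the interactive keypress prompts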

Personally, what I'd be inclined to do would be to clean up your current server as best you can, then migrate the data to a new machine as Tim suggests.

If you were to go the route suggested by Tim, you could still use TKLBAM to migrate the data to a new server; just don't do a full restore! To do that, you'll want a current backup of your existing server. If you don't already have a recent full backup, please create one (after you've at least cleaned up WordPress itself) like this:

tklbam-backup --full-backup 1D
Then on a new server, dump the backup to a directory. Assuming that the server is already linked to your Hub account:
mkdir /tklbam-dump
tklbam-restore BACKUP_ID --raw-download=/tklbam-dump
Then all of your backed up files will be in /tklbam-dump. You can then manually restore the files to their proper locations if you wish, although it is also possible to use TKLBAM to restore some things selectively if you want.

E.g. to reinstall all the Debian packages that were installed on your old system (so long as you don't restore /etc/apt then it will only be able to re-install legitimate Debian or TurnKey packages):

tklbam-restore --skip-files --skip-database /tklbam-dump

E.g. to restore the WordPress DB (assuming it's still called 'wordpress'):

tklbam-restore --limits=mysql:wordpress /tklbam-dump

E.g. to restore the WordPress directory:

tklbam-restore --limits=/var/www/wordpress /tklbam-dump

You could also use that method to restore other directories, such as /etc but as they may contain traces of malicious software that was installed, please be careful. I would recommend that you just manually migrate directories that you need, rather than doing any blanket restore.

E.g. to restore Apache config:

tklbam-restore --limits=/etc/apache2 /tklbam-dump

Hopefully that helps you get back up and running on a clean server ASAP.

One other thing that may be worthy of inclusion on your new server (and we're considering including it in v15.0 too) is Tiger (apt-get install tiger). It's more of an audit tool than a specific anti-malware tool, but it can be very useful for keeping track of files that change. If you have it running regularly on a cron job, you can use the logs to work out what changed and when.
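
As a sketch of the idea only (Debian's tiger package ships its own cron setup, so treat this file name and schedule as illustrative):

# /etc/cron.d/tiger-audit (hypothetical file)
# run a full Tiger audit at 03:00 nightly; reports land in /var/log/tiger
0 3 * * * root /usr/sbin/tiger -q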

There is also another security checking script that I'm aware of in Debian, it's called checksecurity (apt-get install checksecurity). I personally don't know much about it, but it may be worth a look?

If you find anything that you think we should be doing better for the initial set up, please get in touch. General hardening suggestions can be posted here, or on our GitHub Issue tracker. Hopefully you won't, but if you do find something significant, please contact us privately, via support@turnkeylinux.org.

Keyturns wrote:

Thanks Jeremy and OnePressTech for your comments. Just an update here as I dig through logs to understand what happened. The first thing I did was change passwords for the server and WordPress accounts and monitor the system closely. Since I killed and deleted the miner processes and software, the server appears to be running as expected.

Reviewing /var/log/auth.log showed someone did log in via SSH from an IP that was never used by me (true that passwords can never be too strong). I installed fail2ban to throttle this, but I am still getting tons of failed SSH login attempts in auth.log, so that's annoying.

Checking other logs and WordPress did not show anything suspicious to me, so it seems the hacker got in via SSH, installed his miner stuff, ran it, and then left (the 'and then left' part I don't know for sure; next I will look at the options you guys have presented). I plan to go to a fresh VM instance but am still poking around to understand and learn as much as I can here.
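
For anyone else digging through auth.log, something like this pulls out the relevant entries (assuming the standard OpenSSH log format):

grep 'Accepted' /var/log/auth.log            # successful logins: method, user, source IP
grep -c 'Failed password' /var/log/auth.log  # count of failed password attempts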

Jeremy - I use PEM and PPK files to SSH into my AWS TKL WordPress instance. If I follow your suggestion and set this in my /etc/ssh/sshd_config:

PasswordAuthentication no

will I still be able to ssh into the server?

I'd like to block all SSH login attempts that do not use a PEM or PPK file; is this possible with the above? This in tandem with fail2ban seems like a good way to really throttle back this attack vector.
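
For reference, a minimal fail2ban override to tighten the SSH jail might look like this (a sketch only; the values are examples, and the jail is named [ssh] in the fail2ban 0.8.x shipped with Jessie but [sshd] in newer releases):

# /etc/fail2ban/jail.local - overrides jail.conf
[ssh]
enabled  = yes
maxretry = 3      # ban after 3 failed attempts...
findtime = 600    # ...within a 10 minute window
bantime  = 86400  # ban lasts 24 hours

service fail2ban restart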

OnePressTech wrote:

Hi Keyturns,

If you were hacked via SSH, that is something Jeremy and the TKLX community would like to know more about if you turn up any additional details. Thanks for sharing :-)

FYI - I always put a host firewall block against the non-80/443 ports, locking them down to my IP address. Fail2Ban is a good secondary defense but not a good primary one, since hackers cycle through pools of hacked IP addresses. You can't block them all on an individual basis; you need to block all IP addresses except for your own. It's a bit of a pain in the ass, since ADSL users like me get their IP address cycled on a daily basis, but it's a good tool in the protection game.

Cheers,

Tim (Managing Director - OnePressTech)

Jeremy Davis wrote:

Again Tim is on the money! :)

I'd be interested in learning more about what happened on your server so that perhaps we can look at stronger mitigation for what you experienced in our next release.

WRT your question on /etc/ssh/sshd_config, the answer is yes! Changing that setting (and restarting SSH) will mean that only keypair login is supported via SSH. I would recommend making sure that you can log in via Webmin (using your password) before you do anything, so that you still have access to your server config if something goes completely pear-shaped (e.g. you accidentally enter something in the SSH config which causes it to fail to start and locks you out).

Once you have proven that SSH key-only login works fine (after making the above change), you can also disable Webmin (if you don't use it and/or you want to lock things down). Incidentally, both Webmin and Webshell run behind stunnel, so disabling that should be as simple as:

service stunnel4 stop
systemctl disable stunnel4.service
You'll get some warnings about runlevels from the second command, but you can safely ignore them. If you don't plan to use Webmin and Webshell at all, then I'd also recommend disabling those services themselves (same as above, but substitute "webmin" and "shellinabox" where it says "stunnel4").

Tim's suggestion to block access to ports that you don't need public access to is very sound. Even if you don't have a static IP but want to raise the bar a bit without causing yourself too much overhead, AWS Security Groups allow you to block IP address ranges. In my experience, your IP address will generally be assigned within a certain block of IPs (allocated to your ISP), so blocking everything bar that block of IPs should limit the people who could potentially hack your site. If you are happy to do it manually, you could allow your current IP address only and just update it via the AWS console every time you want to log in (I assume that's what Tim does).
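
If you prefer to script it, the AWS CLI can manage Security Group rules too. A minimal sketch (the group ID and IP address are placeholders for your own):

# allow SSH only from a single address
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 22 \
    --cidr 203.0.113.45/32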

If your site has a clear geographical user base, you could even consider completely blocking access from certain countries (generally each country has specific IP address ranges assigned). That may be useful if, say, your site was hacked by a Chinese hacker and your user base is relatively local (i.e. block all Chinese IPs, or at least the ISP the attacks originated from). Having said that, these sorts of attacks often come from previously exploited AWS servers, so it may not be of much value.

Finally, another avenue worth exploring is a CDN service such as Cloudflare. They have a free plan, so you can test it out, and for a smaller site it's probably enough. For details on the security advantages of using Cloudflare, please see their site. Please don't take that as an endorsement of Cloudflare above all other CDNs; I assume their competitors (including AWS CloudFront) provide similar security, it's just that I have only had experience with Cloudflare so can only speak to that.

Keyturns wrote:

Tim, on your comment: "I always put a host firewall block against the non 80 / 443 ports locking them down to my IP address" - I take it this can be done in AWS EC2 using Security Groups? Do you know of any AWS Security Group examples for a WordPress server I could review? I am just not sure which ports to keep open so I don't break things; e.g. I don't want to block the MySQL port and break the site.

Jeremy, I'll post more here as I uncover details about the hack. Note that I simply installed the WordPress appliance via the TKL Hub interface, and while I don't recall exactly, I'm sure I just went with the defaults, including the default AWS Security Groups TKL creates. It seems my error here was not defining a strong enough password, so a brute force SSH attack getting in was just a matter of time. Perhaps the default Security Groups created via TKL Hub shouldn't leave ports 22 and 12320-12322 open to the internet, but rather offer an option to initially limit them to the IP of the client using TKL Hub. Also, having fail2ban and some other security and auditing tools pre-installed would be good. I don't mind installing these tools myself, but it would be great for them to just be there, working and ready to go, in all the default TKL appliances.

And a question about the latest security matter: can you comment on how Meltdown and Spectre impact TKL appliances? Will my appliance get updated by the TKL automatic security updates when a patch is available?

OnePressTech wrote:

Public-facing access should be reserved for ports 80 and 443. All other ports should be locked to your IP address.
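
As a sketch, a minimal inbound rule set for a single-server TKL WordPress instance looks something like this (the /32 address is a placeholder for your own IP):

80/tcp            (HTTP)                      0.0.0.0/0
443/tcp           (HTTPS)                     0.0.0.0/0
22/tcp            (SSH)                       203.0.113.45/32
12320-12322/tcp   (Webshell/Webmin/Adminer)   203.0.113.45/32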

Regarding the MySQL port: if you have a single-server installation on AWS, there is no external access to MySQL since all interactions are local. If you have your SQL server on its own instance shared across multiple processing instances, it is still not an issue, since you would configure the instances to only allow access to each other within the VPC, using internal addresses.

Cheers,

Tim (Managing Director - OnePressTech)

Jeremy Davis wrote:

You beat me to it mate! Must have been while I was writing my mini essay! :)
Jeremy Davis wrote:

Yes, Tim is referring to AWS "Security Groups" (aka AWS firewall).

Assuming that you are using a default TKL config, you will only need public access to ports 80 and 443 (HTTP and HTTPS). MySQL is bound to localhost; whilst it listens on port 3306, it only listens on the loopback interface, so blocking external access will make no difference.
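
You can double-check that binding on the server itself with something like this (ss ships with iproute2 on Jessie; netstat works too):

ss -tln | grep 3306   # the local address should read 127.0.0.1:3306, i.e. loopback only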

Thanks for the insight and feedback. We are already considering setting up fail2ban for the v15.0 release, but thanks for the additional suggestions. Including some other tools is probably worthy of consideration too.

The problem with blocking access to those admin ports by default is that you won't be able to use your server at all if you can't log in! Although perhaps defaulting to blocking password SSH access might be worth considering (seeing as most people use keys these days anyway)? Or at least increasing the required password complexity and encouraging users to use keys whenever possible. Although, as you say, perhaps we could allow some easy configuration of access to those ports.

As to Meltdown and Spectre, as you are possibly aware, they are essentially hardware bugs, primarily affecting Intel (and some ARM chips). Apparently AMD chips are only potentially vulnerable to CVE-2017-5753 (one part of Spectre).

AFAIK, there is no current fix available for Spectre (pretty much every Intel chip on every OS remains affected). Meltdown is marginally better news, as a kernel update to address it is slated for Debian Jessie (the basis of v14.x) "as soon as possible"; currently only Debian Stretch has a kernel update for Meltdown. FWIW, here are the 3 CVEs:

Spectre:
https://security-tracker.debian.org/tracker/CVE-2017-5715
https://security-tracker.debian.org/tracker/CVE-2017-5753
Meltdown:
https://security-tracker.debian.org/tracker/CVE-2017-5754

Considering AWS's reliance on Intel chips, and their usage of Xen (which makes guests vulnerable too, regardless of whether they are patched or not), I imagine they would be pulling their hair out! Apparently Red Hat released a kernel update to address Meltdown yesterday, so Amazon would be madly patching their machines as we speak. Although, as I say, I'm fairly sure there is no Spectre fix yet, so I'm not really clear how much risk remains there.

When I first heard about them, I planned to write up a blog post. However, seeing as there isn't a patch yet, alarming people when there is no available resolution doesn't make a lot of sense.

I have spoken with a Debian insider we have access to, and whilst he doesn't know much personally, he was quick to reassure me that the Security Team takes these sorts of things very seriously, and he thought it likely they would be working on the issue as we speak.

Auto security updates will install the patched kernel when it's released. However, you will need to reboot manually for the patched kernel to be used.

FWIW, from what I've been reading, there is no known exploit in the wild, so I don't think it's urgent yet. Obviously, though, the sooner it's patched the better. Assuming we get a patched kernel by next week, please expect a blog post ASAP.

Jeremy Davis wrote:

I've just posted a blog post on Meltdown/Spectre. I have tried to provide some basic info plus some resources for further reading. It also covers the steps AWS have taken to mitigate the issue, plus how you can check that you have a patched kernel (and reboot if you aren't yet running it).
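
In the meantime, a quick way to compare the running kernel against what's installed (standard Debian commands):

uname -r                             # kernel version currently running
dpkg -l 'linux-image-*' | grep ^ii   # kernel packages installed
# if an installed package is newer than the running kernel, reboot to load it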

I hope that helps and provides enough info for you. Please do not hesitate to post back if you have further concerns or questions.
