OnePressTech
Hi Jeremy,

Hope all is well. It's been a while since I have booked a support query. I just completed an AWS EC2 TKLX14.1 Jessie server root disk resize (from 10GB to 20GB). That all seemed to go as planned, and df -h shows the correct disk size.
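For anyone following along, the resize can be sanity-checked with something like the commands below. This is a sketch only; /dev/xvda1 is an assumed device name (typical for older EC2 instances, not confirmed from the post), and an ext4 root filesystem is assumed:

```shell
# Confirm the filesystem size as the OS sees it after the EBS volume
# was grown (should report ~20G if the resize completed).
df -h /

# If df still reports the old 10G size, the partition grew but the
# filesystem didn't; grow an ext4 filesystem online with:
# resize2fs /dev/xvda1
# (check the actual device name with `lsblk` first)
```

Since df already shows the correct size here, the filesystem itself is fine and only the MOTD display is stale.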

When I SSH into the server, though, the initial MOTD display shows an incorrect "Usage of" figure for the disk. Both the disk size and the usage amount are understated.

I took a quick look at the MOTD script but it seems to be a bit of a rat's nest of sub-calls.

Any quick fix pointers?


Jeremy Davis

Yeah, as I recall, the MOTD setup was a real mess back then! IIRC, Debian Jessie (what v14.x was based on) was the first release that provided a dynamic MOTD, but we had already implemented it ourselves (robbed from Ubuntu) in v13.x. So there was some overlap, which made it a bit messy. FWIW, we now implement it as per the Debian default method in v16.x.

Unfortunately though, v14.x was a long time ago and I didn't really get deeply involved in core dev until v15.0 (and Alon did much of the initial transition from v14.x to v15.x). So I don't recall what might be required to ensure that the MOTD is dynamically generated. And I no longer have a v14.x server handy to check.

Having said that, it appears that the v14.x MOTD wasn't actually updating as it should have been (see MOTD issue), which seems similar to your report. To work around that, we just created a MOTD cron job. FWIW, once we got to v15.x, that resulted in a double MOTD, which we resolved by removing the cron job again...
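A cron workaround along those lines could look like the fragment below. This is a hypothetical reconstruction for illustration, not the exact job TurnKey shipped; the script directory and schedule are assumptions (Ubuntu-style dynamic MOTD scripts conventionally live under /etc/update-motd.d):

```shell
# /etc/cron.d/update-motd -- hypothetical workaround, not the exact TurnKey fix.
# Regenerate the dynamic MOTD every 10 minutes so disk usage stays current.
*/10 * * * * root run-parts --lsbsysinit /etc/update-motd.d > /run/motd.dynamic
```

As Jeremy notes, a job like this needs to be removed again after upgrading to a release that regenerates the MOTD itself, or you end up with a double MOTD.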

Hopefully that helps?!

Additionally, Google tells me that when logging in, it should first display /etc/motd, followed by /run/motd.dynamic (FYI, /var/run should be a symlink to /run). /etc/motd is static and, as the name suggests, /run/motd.dynamic should be dynamic. That's certainly how it is now, but I'm not sure if that's how it was previously?!
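On a current Debian system you can check how this is wired up yourself. The commands below are a quick inspection sketch; the exact PAM lines vary by release and haven't been verified against v14.x:

```shell
# Show which MOTD files pam_motd displays at SSH login
# (the file may be absent on minimal/container systems, hence the guard)
grep pam_motd /etc/pam.d/sshd || echo "no sshd PAM config found"

# Confirm /var/run resolves to /run
readlink -f /var/run
```

If the grep shows a line referencing /run/motd.dynamic, that file is the one being regenerated (or, in this case, not regenerated) at login.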

OnePressTech

Much appreciated as always. I will leave the current MOTD in its incorrect state, then. I will be upgrading that server to 16.1 over the next month, so I will leave this issue buried in the past. Thanks for the quick response and the insightful comments. You answered my question: there is no quick, simple fix.


Tim (Managing Director - OnePressTech)

Jeremy Davis

The cron job will work around the issue, but it will need to be removed later. So if you're planning on updating soon, it might not be worth it.
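For completeness, the later cleanup is a one-liner. The filename here is hypothetical; adjust it to wherever the cron job was actually installed:

```shell
# Remove the MOTD cron workaround after upgrading, to avoid a double MOTD
# (/etc/cron.d/update-motd is an assumed path, not a confirmed TurnKey one)
rm -f /etc/cron.d/update-motd
```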
