Amazon EC2: How to expand the size of an existing root volume

Notes:

  • As of 2019, this is the recommended path if you wish to increase the size of your root volume. Best practice is to stop your server for the initial snapshot, but strictly speaking it's not required.
  • Alternatively, please see the legacy documentation on migrating data to a new instance (with a larger volume). It should still be relevant.
  • This page is only relevant for EBS backed AWS EC2 instances. If your instance has "instance storage"*, consider moving files to the instance storage instead.

* - "Instance storage" is only available on older AWS instance sizes. I.e. older versions of TurnKey and/or older AWS instances sizes. If you have a v14.2+ TurnKey server it will almost certainly be EBS backed. Versions as far back as v13.0 will likely be EBS backed too, but double check within the AWS Console to be sure, or please post to the forums stating the TurnKey version and AWS Instance size/type.

The problem: the root filesystem is running out of space

Here's an example to illustrate the problem. Alon launches a Micro server via the TurnKey Hub. After a couple of months Alon notices that space on his 10GB root filesystem has almost run out.

# df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            486M     0  486M   0% /dev
tmpfs           100M  4.1M   96M   5% /run
/dev/xvda2       9.8G  9.8G   19M   100% /
tmpfs           498M     0  498M   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           498M     0  498M   0% /sys/fs/cgroup

The problem is that the server's root filesystem is only 10GB, and it's now almost full.

So how does Alon increase the storage size?
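Before reaching for a bigger volume, it can be worth a quick check of where the space has actually gone, in case there's something easy to clean up (old logs, forgotten downloads, etc). A rough one-liner, restricted to the root filesystem:

# du -xh --max-depth=1 / 2>/dev/null | sort -h | tail

If nothing obvious turns up (or you simply need more room), read on.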

Increase the root volume size using AWS Console

Expected time to complete: 40 mins to an hour.

These instructions assume that your server is running from the TurnKey Hub, but they also require usage of the AWS Console. If you aren't using the Hub (or TurnKey for that matter) they should still be generally relevant, although you may wish to consult the AWS docs for more detail strictly related to the AWS Console.

So long as you can afford a small amount of server downtime, arguably the best way to resolve this is to enlarge the root volume via the AWS Console. If you can't afford any downtime, this can be done live, but I would argue that a few minutes of downtime is worth the peace of mind.

Getting ready

The first thing to do is to create a backup. You can use TKLBAM for that purpose, although in this case I would recommend taking a "snapshot" (as well, or instead). If you are extremely short on space, TKLBAM may not be an option anyway. A snapshot will allow you to get back to exactly where you are right now more quickly and with less hassle (no need to start a new server, etc).

Whilst you can take snapshots of running servers, for backup purposes Amazon recommends taking snapshots of stopped servers. This is because there is a (very slim) chance of data corruption when taking a snapshot of a running server. So unless a few minutes of downtime is going to be a problem (and even then), it seems like a no-brainer to follow that advice.

If you're stopping the server, go to the Hub's Servers page, select your server so that the full info rolls down, then click Stop.

Once you're ready to take a snapshot, still on the Hub's Servers page, look for the text link which says "x snapshots" (where 'x' is the number of snapshots you have already taken - it will be '0' if you haven't taken any before). Click that and, on the page that opens, click the "Create snapshot" button. Initially it will display "0%" and it will take a minute or two. If you don't see any progress, try refreshing the page. Once it's complete, you should see a green circle.
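As an aside, if you have the AWS CLI configured, an EBS snapshot can also be created (and monitored) from the command line. A rough sketch, using the volume ID we track down in the AWS Console below (yours will differ):

$ aws ec2 create-snapshot --volume-id vol-02110c92a873ad915 \
    --description "pre-resize backup of TurnKey root volume"
$ aws ec2 describe-snapshots --filters Name=volume-id,Values=vol-02110c92a873ad915 \
    --query 'Snapshots[].[SnapshotId,State,Progress]'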

Finding your TurnKey server in AWS Console

Back on the Hub's Servers page (roll down your server info again if need be), look for where it says Region and note that. Now, in a new browser tab/window, open your AWS Console to the EC2 - Instances page. In the AWS Console, on the top bar towards the top right (between where it shows your username and "Support"), click the drop-down and ensure that the region matches the one your Hub server is in (if it does already, skip that step).

If you only have one stopped server/instance (or are otherwise sure of exactly which server you need to work with), then you can skip this next step. If you have two or more instances, though, to be completely sure that you are working on the right one, I recommend doing a "search".

To do that, go back to the Hub's Servers page and copy the EC2 instance ID (in my case it's "i-0c04e369a3d987199"). Then paste it into the search bar towards the top of the AWS Console tab/page (by default the search bar will contain the text "Filter by tags and attributes or search by keyword") and hit enter.
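For what it's worth, the same lookup can be done with the AWS CLI. A sketch, assuming the instance ID from the Hub (and the right region configured):

$ aws ec2 describe-instances --instance-ids i-0c04e369a3d987199 \
    --query 'Reservations[].Instances[].BlockDeviceMappings[].{Device:DeviceName,Volume:Ebs.VolumeId}'

That lists each attached device and its EBS volume ID, which is the same information we're about to dig out of the console.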

Increasing the volume size

Only one instance should now remain showing in your AWS Console. If you don't get any result, double check that you have selected the correct region and that you are signed into the correct AWS account (if you use multiple accounts). Further information about your server should now be visible in the lower pane. Scroll through this until you see "Root device"; it will likely be "/dev/xvda".

Click the device (e.g. /dev/xvda in my case) and a small black pop-over should appear with some info about the EBS volume. Click on the EBS ID (in my case "vol-02110c92a873ad915") and you should be redirected to the AWS Console - Volumes page, with only the desired volume showing.
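Again, purely as an aside, the AWS CLI can confirm the volume's current size (in GiB) before you change anything:

$ aws ec2 describe-volumes --volume-ids vol-02110c92a873ad915 \
    --query 'Volumes[].[VolumeId,Size,State]'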

Right click on your volume and select "Modify Volume". A pop-over should open. Look for where it says "Size". Mine is currently "10" (GiB). Increase that number to the desired volume size (in GiB). I'm going to make mine 20GiB. Then click the "Modify" button. On the next screen click "Yes". You should be greeted with a green message: "Modify Volume Request Succeeded". That should happen almost instantly, but you may need to refresh the AWS Console to see your volume display with the desired size (e.g. 20 GiB in my case). If it doesn't, try refreshing again.
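If you prefer the command line, the same change can be made with the AWS CLI. A hedged sketch (the size is in GiB; the volume will sit in an "optimizing" state for a while afterwards, which is normal and doesn't block the following steps):

$ aws ec2 modify-volume --volume-id vol-02110c92a873ad915 --size 20
$ aws ec2 describe-volumes-modifications --volume-ids vol-02110c92a873ad915 \
    --query 'VolumesModifications[].[VolumeId,ModificationState,Progress]'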

Back to the Hub: restart your server (if stopped) and confirm the larger volume

That's all we need the AWS Console for, so you can now log out and close that tab. If you stopped your server, return to the Hub's Servers page. On your server, click "Start" to bring it back to life. As usual, it will take a little while to become available. For most servers, this should only take a couple of minutes (5 at the most).

Once your server is running (or straight away if it wasn't stopped), log in via SSH. If you were to run the df command again, you would notice that nothing has changed. That is because, so far, we've only increased the size of the underlying volume; the partition and filesystem on it are still 10GB. If you wish, you can check that the volume is indeed bigger, with gdisk:

# gdisk -l /dev/xvda
GPT fdisk (gdisk) version 1.0.1

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with protective MBR; using GPT.
Disk /dev/xvda: 41943040 sectors, 20.0 GiB
Logical sector size: 512 bytes
Disk identifier (GUID): 8A333A04-DB02-4249-870C-EC5665D405BF
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 20971486
Partitions will be aligned on 2048-sector boundaries
Total free space is 4029 sectors (2.0 MiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048            6143   2.0 MiB     EF02  grub
   2            6144        20969471   10.0 GiB    8300  rootfs

Note the "Disk /dev/xvda: 41943040 sectors, 20.0 GiB".

Fix the partition table and resize the partition

Next we need to fix the partition table (GPT), then we can extend the partition to use the new free space. To extend the partition (and fix the GPT), we'll use parted (it should be installed by default on all v15.x servers). Start it with the 'parted' command.

# parted
GNU Parted 3.2
Using /dev/xvda
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) 

First up, double check the existing info (again) using the 'print' command. It should bring up a warning that not all the space is being used, with an option to fix it.

(parted) print                                                            
Warning: Not all of the space available to /dev/xvda appears to be used, you can fix the GPT to use all of the
    space (an extra 20971520 blocks) or continue with the current setting? 
Fix/Ignore?

Allow it to do the fix (by typing 'f' followed by enter). It should now display similar partition info to the gdisk check we ran earlier. Although note that parted displays gigabytes (1,000 x 1,000 x 1,000 bytes) as opposed to gibibytes (GiB; 1,024 x 1,024 x 1,024 bytes), so don't be alarmed if the numbers don't quite match. In my case, it's displaying 21.5GB (which is 20GiB).

Fix/Ignore? f
Model: Xen Virtual Block Device (xvd)
Disk /dev/xvda: 21.5GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags: 

Number  Start   End     Size    File system  Name    Flags
 1      1049kB  3146kB  2097kB               grub    bios_grub
 2      3146kB  10.7GB  10.7GB  ext4         rootfs

(parted)
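To see that parted and gdisk really are reporting the same disk size, just in different units:

20 GiB = 20 x 1024 x 1024 x 1024 bytes = 21,474,836,480 bytes
21,474,836,480 bytes / (1000 x 1000 x 1000) ≈ 21.5 GB (as parted reports it)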

To resize the partition, use the 'resizepart' command. On a default setup, you will want to resize partition 2, but double check your output (from 'print') to make sure yours matches. Note that I ignored the warning about modifying a partition in use. Also note that I used all the free space (i.e. went to the final size of the disk: 21.5GB).

(parted) resizepart                                              
Partition number? 2                                                       
Warning: Partition /dev/xvda2 is being used. Are you sure you want to continue?
Yes/No? y                                                                 
End?  [10.7GB]? 21.5GB                                                    
(parted)          

Double check that it all worked as expected with 'print'. If something doesn't look right, you can run the 'resizepart' command again. But mine looks fine.

(parted) print                                                            
Model: Xen Virtual Block Device (xvd)
Disk /dev/xvda: 21.5GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags: 

Number  Start   End     Size    File system  Name    Flags
 1      1049kB  3146kB  2097kB               grub    bios_grub
 2      3146kB  21.5GB  21.5GB  ext4         rootfs

(parted)

Nearly done; extend the filesystem and finish

Even now, if you were to exit parted and recheck the output of df, you still would not see the additional space. That is because the final step is to extend the filesystem to fill the partition. Some versions of parted can also do that (IIRC the command is 'resizefs'), but the one I have doesn't seem to have that ability, so exit out of parted ('quit'). You can safely ignore the info about /etc/fstab.

(parted) quit
Information: You may need to update /etc/fstab.

Then run 'resize2fs' on the partition:

# resize2fs  /dev/xvda2

If we check df again, we'll see that the free space is now available.

# df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            486M     0  486M   0% /dev
tmpfs           100M  4.1M   96M   5% /run
/dev/xvda2       20G  9.8G   10.2G   48% /
tmpfs           498M     0  498M   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           498M     0  498M   0% /sys/fs/cgroup

YAY! All done! :)
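As an aside, if you find yourself doing this regularly, the fix-the-GPT, grow-the-partition, grow-the-filesystem dance can also be done non-interactively with growpart (from the cloud-guest-utils package) followed by resize2fs. A rough sketch, assuming the same default layout (ext4 root on partition 2 of /dev/xvda); growpart should take care of fixing the GPT as part of the resize:

# apt-get update && apt-get install -y cloud-guest-utils
# growpart /dev/xvda 2
# resize2fs /dev/xvda2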

Final cleanup

Personally, I like to give it a few days to settle in before I remove the snapshot. That's just in case something isn't quite right, but doesn't show up immediately. Having said that, I've never needed it before. Still, I'd rather give AWS a few more cents for the peace of mind. Once you are feeling confident, you can delete the snapshot if you wish. Do that by finding the snapshots in the Hub as per the earlier step, and this time, instead of creating one, delete it.
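If you took your snapshot via the AWS CLI earlier (rather than the Hub), you can list and delete it the same way. A sketch; the snapshot ID below is a made-up placeholder, so substitute the real one from the listing:

$ aws ec2 describe-snapshots --owner-ids self \
    --filters Name=volume-id,Values=vol-02110c92a873ad915 \
    --query 'Snapshots[].[SnapshotId,StartTime,State]'
$ aws ec2 delete-snapshot --snapshot-id snap-0123456789abcdef0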