I've spotted several posts from people with problems or questions about long-running backups, but short of signing my client up and running tests (since they're more of a "build it in-house" group, that would be an uphill battle) ... I was hoping that someone, somewhere, had statistics on typical performance. Assume it's all in Amazon ... say it's a TB or two ... any statistics?  Is there some threshold effect (viz. fast for GiBs, slow for 100s of GiB, then fast again for TiBs)?


Inquiring minds and all that.

Thank you in advance.


Jeremy Davis

There are so many factors that there is no such thing as a "typical" backup/restore time...!

Ultimately it will depend on many different factors, e.g. the power of your server (mostly I/O and CPU), the size of your backup, your server's upload/download bandwidth and latency at the time you run the backup/restore, Amazon S3's upload/download bandwidth and latency at that same time, etc.

If you are running your server locally then I would imagine the main bottleneck will be your local bandwidth (upload for backups; download for restores). If you are using the Hub (or you have super fast download) then it will probably be AWS download/upload throttling (TBH I have no idea how much and/or when they do that...). For large backups, I/O will have a noticeable effect when copying data around, and CPU will affect compression and decompression speeds.
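If you just want a rough lower bound rather than "typical" statistics, you can do the arithmetic yourself: assume upload bandwidth is the only bottleneck (it usually dominates for locally hosted servers) and divide backup size by link speed. This is a hypothetical back-of-envelope sketch, not measured numbers from any real backup:

```python
# Back-of-envelope estimate of backup transfer time, assuming the
# network link is the only bottleneck (ignores I/O, CPU/compression,
# and any AWS-side throttling). Example figures are hypothetical.

def transfer_hours(size_gib: float, mbit_per_s: float) -> float:
    """Hours needed to move size_gib GiB over a mbit_per_s link."""
    size_bits = size_gib * 1024**3 * 8        # GiB -> bits
    seconds = size_bits / (mbit_per_s * 1e6)  # Mbit/s -> bits/s
    return seconds / 3600

# e.g. a 1 TiB (1024 GiB) backup over a 20 Mbit/s uplink:
print(f"{transfer_hours(1024, 20):.1f} hours")  # ~122 hours, i.e. about 5 days
```

Note that compression shrinks what actually goes over the wire, and incremental backups only send changed data, so real runs after the first full backup are usually much shorter than this worst case.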
