For a while now, I have been using a Virtual Private Server to host my personal websites and screenshot uploads. While this has worked well for me, the cost of such a server just to store and serve mostly-static content has come to seem like overkill. Companies like Webair offer amazingly reasonable VPS pricing, but the fact of the matter is that when you’re not taking advantage of what you’re paying for, you’re doing nothing more than throwing money away.
I briefly considered migrating to shared hosting, but prior experiences with “web giants” such as 1and1 quickly deterred me. So what was I to do? I needed a new, flexible solution. This is where the Amazon S3 service steps into play.
Part of Amazon Web Services, Amazon S3 stands for “Simple Storage Service,” and the service definitely lives up to its name. S3 lets you create “buckets” in which to store your data. After uploading data via the friendly web-based interface or the API, you can configure the content to be publicly accessible from the bucket URL.
So, for example, I have a bucket configured for my screenshots. This bucket is called “grabs.mydomain.tld” and has a URL of “grabs.mydomain.tld.s3.amazonaws.com.” To make this simpler, I have a CNAME record that points “grabs.mydomain.tld” to the Amazon URL.
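In BIND zone-file terms, that record looks something like this (the hostname is the hypothetical example from above; adjust for your own domain and bucket):

```
; Point the friendly hostname at the matching S3 bucket endpoint
grabs.mydomain.tld.    IN    CNAME    grabs.mydomain.tld.s3.amazonaws.com.
```

Note that this trick only works when the bucket is named exactly after the hostname you want to use, since S3 uses the Host header to find the right bucket.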
While many will quickly discredit Amazon S3 for not supporting uploads via FTP or SSH, I have found their API simple enough to implement personally. Amazon even provides scripts such as “s3curl” that let you upload to S3 using the “curl” command found in most Linux distributions.
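To give a feel for how simple the API really is, here is a minimal sketch in Python of how the classic S3 REST API authenticates a request: you assemble a canonical “string to sign” from the request details and sign it with your secret key using HMAC-SHA1. This is essentially what “s3curl” does under the hood. The credential, bucket, and file names below are hypothetical, for illustration only.

```python
import base64
import hashlib
import hmac

def sign_s3_request(secret_key, verb, bucket, key, date,
                    content_type="", content_md5=""):
    # Assemble the canonical "string to sign" used by the classic
    # S3 REST API (AWS Signature Version 2).
    string_to_sign = "\n".join([
        verb,              # HTTP method, e.g. "PUT"
        content_md5,       # Content-MD5 header (may be empty)
        content_type,      # Content-Type header (may be empty)
        date,              # HTTP Date header, RFC 1123 format
        "/%s/%s" % (bucket, key),  # canonicalized resource
    ])
    # Sign with HMAC-SHA1 and base64-encode the digest; the result goes
    # into the "Authorization: AWS <access_key>:<signature>" header.
    digest = hmac.new(secret_key.encode("utf-8"),
                      string_to_sign.encode("utf-8"),
                      hashlib.sha1).digest()
    return base64.b64encode(digest).decode("ascii")

# Hypothetical values for illustration only:
signature = sign_s3_request("MY-SECRET-KEY", "PUT",
                            "grabs.mydomain.tld", "screenshot.png",
                            "Tue, 27 Mar 2007 19:36:42 +0000",
                            content_type="image/png")
print(signature)
```

With that signature in an Authorization header, an ordinary curl command can PUT the file straight to the bucket URL.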
Further, there are third-party applications such as CloudBerry, BucketExplorer, and S3 Browser that provide simple yet effective interfaces for managing the S3 service. What makes many of these applications convenient is that they work much like graphical FTP clients, meaning an end-user could transition relatively easily.
The one disadvantage to S3 is that it does not allow you to specify a “root object,” such as “index.html,” to be the default file served when someone views your S3 bucket’s URL (or CNAME).
This is easily resolved with the Amazon CloudFront content distribution service, which does let you specify a “root object.” Better yet, CloudFront acts as a CDN, caching a bucket’s contents across servers in various regions of the world to better distribute the load, allowing for insanely fast loading times.
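For a rough idea of what that looks like: a CloudFront distribution is described by an XML configuration document, and the root object is just one element in it. The sketch below is illustrative only; the exact schema version and required elements may differ, so check the CloudFront documentation before using it.

```xml
<DistributionConfig xmlns="http://cloudfront.amazonaws.com/doc/2010-11-01/">
   <Origin>grabs.mydomain.tld.s3.amazonaws.com</Origin>
   <Enabled>true</Enabled>
   <!-- The file served when a visitor requests the distribution's
        bare URL, which a plain S3 bucket cannot do on its own. -->
   <DefaultRootObject>index.html</DefaultRootObject>
</DistributionConfig>
```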
Additionally, S3 cannot execute “dynamic” content such as PHP files, meaning you are only able to serve static content. For this reason, you may still need to hold onto traditional hosting in order to run PHP scripts (for WordPress, etc.), as well as services such as MySQL.
However, this does not mean you cannot take advantage of S3 for large, bandwidth-heavy files. The Amazon S3 Plugin for WordPress, for example, stores your WordPress images and uploads on S3, freeing up resources on your web server. Because of this, you can still scale back your server specs and migrate some of your workload to S3 in order to save overall.
S3 stands for “Simple Storage Service,” and it is exactly that: no more, no less. For storing static content, it’s hard to beat the pricing behind S3. Because I only store static HTML files and images, I have no need for highly-advanced web hosting. Thus, Amazon S3 works very well for me and allows me to save a boatload of money on my personal hosting needs.