What Happens When My Cloud Goes Down?
“There is no cloud; there is only someone else’s computer.”
Whether this is technically true or not, the fact remains that when you do any sort of cloud computing, you’re interacting with some physical server that exists somewhere in the real world.
It can be easy to forget this... until catastrophe strikes.
In March of 2021, catastrophe struck in the form of a fire at a Strasbourg data center campus belonging to OVHcloud, the largest hosting provider in Europe. About 100 firefighters rushed to the scene and battled to contain the blaze, but despite their best efforts, significant damage had already been done.
The fire’s effects were devastating and far-reaching. One of the four data centers was destroyed, and another sustained extensive smoke damage. 3.6 million websites went down, including ones owned by the French government. Customers were contacted and told to activate their disaster plans, but not everyone had a plan.
The onus was on the customers to prepare for the worst, whether by backing up their own data elsewhere or by paying for a higher tier of service that included backups. Even for those who had prepared, the fire meant significant downtime and expense as teams scrambled to rebuild the lost servers, scrub clean the smoke-damaged ones, and get everyone back online as soon as possible.
This was an expensive lesson in the dangers of centralized storage and a potent reminder that the cloud is not magic and that “nothing stops the fundamental laws of physics and chaos.” As you might guess, fires aren’t the only danger of storing all of your data in one physical location. So what are these dangers, and how can we mitigate them?
Dangers of Centralized Storage
In recent years, it has become clear that the main issue with traditional cloud storage is that it is too centralized. A small handful of companies host the vast majority of data in their data centers. Things can (and do) go wrong, and it can be expensive or complicated to execute a backup plan when that happens.
Human Error
According to the 2021 Uptime Institute Annual Outage Analysis, about 63% of data center outages are due to human error. This was the case in February of 2017 when an outage at Amazon S3 took out a large portion of the internet for over four hours.
The cause? A typo.
While debugging, the S3 team entered a command incorrectly and took down more servers than intended. Human error was similarly implicated in a 2019 outage at Google that affected both its apps and Google Cloud. A configuration change intended for a small group of servers was accidentally applied to a much larger number. Although Google’s engineers detected the error almost immediately, it took a significant amount of time to correct.
Hacking
The majority of the cloud is administered by a small group of large companies, which makes them obvious targets for attacks like the 2019 DDoS attack on Amazon AWS. That particular attack lasted eight hours and targeted Amazon’s DNS service, causing a ripple effect that took down other AWS services. AWS Shield was unable to fully mitigate the attack, and thousands of customers were affected.
DDoS attacks have become more frequent in recent years, and that trend is projected to continue. They’re getting more complex and larger in scale, making them harder to deter. And they’re becoming cheaper and easier to implement using insecure IoT devices.
Weather Events
In 2012, electrical storms took out AWS servers in the US-East-1 region. A backup generator was activated but failed to deliver consistent power. The outage affected Amazon services such as Elastic Compute Cloud (EC2) and ElastiCache and impacted Netflix and Pinterest, among others.
2016 saw a similar event hit Sydney. Again, despite the fail-safes in place, several pieces of equipment failed to function as needed. The power remained off for a little over an hour, and by 1 AM, the majority of customers were back online.
The effects of weather events are definitely worth considering when it comes to choosing a cloud provider. Large weather patterns can easily encompass huge swaths of land and affect multiple server locations. Cloud providers have backup power sources, but even those backups can fail.
What Is Your Disaster Plan?
Regardless of whether you are storing data on a private server or in the cloud, you should have a disaster plan in place. Don’t assume that your cloud provider has backups. Find out whether backups are included and, if so, whether they are stored in a different region. A backup doesn’t help you if it can be compromised along with your original files.
Make sure you are following the 3-2-1 rule: keep three copies of your data on two different types of media, with one copy stored offsite. In the case of cloud storage, you may need to do some digging to figure out whether your provider fulfills the 3-2-1 rule for you. In many cases, storing a copy offsite is a premium service and can be quite costly.
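If you’d rather not rely on your provider, you can automate the offsite leg of the 3-2-1 rule yourself. The sketch below shows one minimal way to do it in Python with boto3, uploading the same file to two independent S3-compatible providers; the bucket names, endpoint, and credentials are placeholders, not real services.

```python
# Minimal 3-2-1 sketch with boto3 (pip install boto3). Copy 1 is the local
# file itself; copies 2 and 3 go to two independent providers, one of which
# acts as the offsite copy. All names and endpoints below are placeholders.
import boto3

LOCAL_FILE = "critical-data.db"  # copy 1: local disk (medium #1)

# Copy 2: your primary cloud provider (medium #2), using the default
# credential chain (environment variables, ~/.aws/credentials, etc.).
primary = boto3.client("s3")
primary.upload_file(LOCAL_FILE, "my-primary-bucket", LOCAL_FILE)

# Copy 3: an independent S3-compatible provider in another region (offsite).
offsite = boto3.client(
    "s3",
    endpoint_url="https://s3.offsite-provider.example",  # placeholder
    aws_access_key_id="OFFSITE_ACCESS_KEY",              # placeholder
    aws_secret_access_key="OFFSITE_SECRET_KEY",          # placeholder
)
offsite.upload_file(LOCAL_FILE, "my-offsite-bucket", LOCAL_FILE)
```

Run on a schedule (cron, for example), this keeps the offsite copy independent of your primary provider, so a single outage or fire can’t take out every copy at once.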
How Filebase Can Help
The moral of the story: data housed on a single company’s servers in a single location is vulnerable to natural disasters, malicious attacks, and human error. These dangers can be avoided by switching from a traditional cloud provider to a decentralized solution.
With no single point of failure, decentralized storage is inherently more resilient. Human error still happens, but in a distributed system one person’s mistake can’t cascade through the entire system. There’s no single data center for hackers to target. And with your data spread out geographically, there is no risk of a single weather event causing an outage.
Filebase is an object storage platform that uses decentralized networks, rather than centralized servers, for its storage tier. With Filebase, you can rest easy knowing that your data is geo-redundantly stored with 3x redundancy, thanks to native erasure coding: your files are split up, encrypted, and distributed globally using blockchain technology. If any server on Filebase’s network goes down, your data is automatically repaired and moved to a new host with no interruption in service.
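Because Filebase exposes an S3-compatible API, existing S3 tooling works with little more than an endpoint change. The sketch below assumes Filebase’s S3-compatible gateway at s3.filebase.com (check the current docs for the exact endpoint) and uses placeholder credentials and bucket names.

```python
# Minimal sketch of uploading to Filebase through its S3-compatible API.
# The endpoint is assumed from Filebase's documentation; the credentials
# and bucket name are placeholders from your own Filebase dashboard.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.filebase.com",
    aws_access_key_id="FILEBASE_ACCESS_KEY",      # placeholder
    aws_secret_access_key="FILEBASE_SECRET_KEY",  # placeholder
)

# A single upload call; geo-redundant distribution across the decentralized
# network happens behind the scenes, so no client-side replication logic
# is needed.
s3.upload_file("critical-data.db", "my-filebase-bucket", "critical-data.db")
```

Notice that this is the same client code as a single-region S3 upload: the redundancy lives in the platform, not in your scripts.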
Filebase is geo-redundant by default for all customers, including those on the base subscription, which costs only $5.99 for 1TB of storage and 1TB of bandwidth. To replicate this level of redundancy on AWS, you would need to pay for three buckets in three different regions. Add the cost of moving your data around, and your bill quickly climbs to roughly $150 per TB per month.
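To see where a figure like that comes from, here is a rough back-of-the-envelope calculation. The per-GB rates below are approximate AWS list prices used purely for illustration; they vary by region and change over time.

```python
# Back-of-the-envelope cost of keeping 1 TB in three AWS regions.
# All per-GB prices are approximate list prices, assumed here only for
# illustration; check current AWS pricing before relying on them.
TB_GB = 1024

storage_per_gb = 0.023   # S3 Standard storage, $/GB-month (assumed)
transfer_per_gb = 0.02   # inter-region replication transfer, $/GB (assumed)
egress_per_gb = 0.09     # internet egress, $/GB (assumed)

storage = 3 * TB_GB * storage_per_gb      # 1 TB stored in 3 regions: ~$70.66/mo
replicate = 2 * TB_GB * transfer_per_gb   # copying to 2 extra regions: ~$40.96
egress = TB_GB * egress_per_gb            # 1 TB of outbound bandwidth: ~$92.16

print(f"storage:     ${storage:.2f}/month")
print(f"replication: ${replicate:.2f} per full copy")
print(f"egress:      ${egress:.2f} per TB served")
print(f"monthly ballpark: ${storage + egress:.2f} plus replication traffic")
```

Even with conservative assumptions, storage in three regions plus a single terabyte of egress lands in the $150+ range, before request fees and replication traffic are counted.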
Data center outages are just one example of the problems with traditional cloud storage. Centralized storage carries the inherent danger of a single point of failure: if one thing goes wrong at AWS, for instance, it can take out a significant portion of the internet. Filebase offers a viable alternative by being natively geo-redundant, removing the danger of putting all of your eggs in one basket (or bucket, as it were).
Learn more, and sign up to receive 5GB for free, at https://filebase.com.