An organization's data is one of their most important assets as it helps to make better strategic decisions.
— Simon Jelley, VP, Veritas
Whether you are working from an office or from home (as many of us now are), how you manage access to your data and protect it should be one of your top considerations. Donor information, personal data on those you serve or work with, or even information on your own staff is all of value to hackers. If you lose access to or control of your data, it can mean tens of thousands to potentially millions of dollars in damages to your organization. But hackers aren't the only concern here. You can also lose data because of hardware failures, user error, or foul play like software viruses and ransomware.
Establishing a fundamentally sound data protection and recovery strategy to mitigate or prevent the impact of these events is essential for the success of your nonprofit organization. Below, you'll find a framework to help you think about how to approach data protection and data recovery, plus mistakes to avoid as you look to tighten your data management strategy. The core ideas behind this framework can and should be applied to on-premises data management as well as data that is stored and maintained in the cloud.
Data Protection for On-Premises Systems
The "3-2-1 rule" is a data industry standard that can help you design the ideal data protection strategy for your nonprofit's data and systems that are stored on servers and in systems at your organization's offices.
Keep Three Copies of Your Data
The best practice is to create an initial backup, usually to disk media for speed, then to duplicate that backup to at least two other locations.
Keep Copies on Two Types of Media
The types of media you can use will vary based on the data protection solution. It is common to use disk and tape, but other media types can also fulfill this purpose, such as removable USB drives, cloud storage, even CDs.
Keep One Copy Offsite, Off-Network
Examples of this approach include:
- Tape media that has been sent to an offsite storage location
- RDX-type removable media that you store off-network in a secure local location
- Archival cloud storage such as Amazon Glacier or Azure Archive
Ideally, the copy should be geographically separated from your primary data to prevent a localized disaster (such as a fire) from destroying all your data. However, even an off-network local copy provides an additional layer of protection in the event your entire environment is impacted by a malware attack like ransomware.
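The 3-2-1 rule lends itself to a simple checklist. As an illustration only, here is a minimal Python sketch that audits a hypothetical backup inventory against the rule; the `BackupCopy` structure and field names are assumptions for the example, not part of any real backup product:

```python
from dataclasses import dataclass

@dataclass
class BackupCopy:
    location: str      # e.g., "primary-server", "nas", "offsite-vault"
    media: str         # e.g., "disk", "tape", "cloud"
    offsite: bool      # stored away from the primary site
    off_network: bool  # unreachable from the production network

def satisfies_3_2_1(copies: list[BackupCopy]) -> bool:
    """Check an inventory against the 3-2-1 rule: at least three copies,
    on at least two media types, with at least one isolated copy."""
    enough_copies = len(copies) >= 3
    enough_media = len({c.media for c in copies}) >= 2
    has_isolated_copy = any(c.offsite or c.off_network for c in copies)
    return enough_copies and enough_media and has_isolated_copy
```

A setup with a primary disk backup, a second disk copy on a NAS, and tape sent to an offsite vault would pass this check; drop the vault copy and it fails on both the copy count and the isolation requirement.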
RTO and RPO
Two terms are commonly used when discussing data protection strategies:
- Recovery time objective (RTO) — The acceptable amount of time that a system can be unavailable before it starts to impact the organization.
- Recovery point objective (RPO) — The maximum amount of transactional data that can be lost due to a system failure.
RTO and RPO objectives can vary from system to system, which can impact decisions on how you protect that data.
Part of the data protection strategy impacted by RTO/RPO is backup frequency. Many organizations run a full backup on Monday and then incremental or differential backups the remainder of the week. While this strategy is common, it might not fit the requirements of your organization and may need to be adjusted. For example, there may be a payment processing server that needs to be backed up more frequently than once a day, as its tolerance for data loss (RPO) is lower. You may also have critical constituent data at your nonprofit that would severely hinder your operations if it was lost for even a day or two.
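To make the RPO/frequency relationship concrete, consider a quick back-of-the-envelope check: if a system fails just before its next scheduled backup, the worst-case data loss equals the largest gap between successive backups. This hypothetical Python sketch (the function names are illustrative, not from any backup tool) compares that worst case against a target RPO:

```python
from datetime import datetime, timedelta

def worst_case_data_loss(backup_times: list[datetime], now: datetime) -> timedelta:
    """Largest gap between successive backups (including the gap from the
    most recent backup to now) — the worst-case window of lost data."""
    points = sorted(backup_times) + [now]
    return max(later - earlier for earlier, later in zip(points, points[1:]))

def meets_rpo(backup_times: list[datetime], now: datetime, rpo: timedelta) -> bool:
    """True if the backup schedule keeps worst-case loss within the RPO."""
    return worst_case_data_loss(backup_times, now) <= rpo
```

With daily backups, the worst case is a full day of lost transactions, so a payment system with a 4-hour RPO would fail this check until backups run at least every 4 hours.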
Your organization also needs to consider how long to keep backup data. Some organizations have regulatory requirements that define their retention. Others need to discuss with leadership to determine what types of data need to be kept for extended periods and what can be expired more rapidly to mitigate storage growth.
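One way to capture those leadership decisions is a tiered retention policy: longer windows for regulated or critical data, shorter windows for everything else. The data types and retention periods below are purely hypothetical placeholders for illustration:

```python
from datetime import datetime

# Hypothetical policy agreed with leadership: retention in days by data type.
RETENTION_DAYS = {
    "financial": 7 * 365,  # e.g., regulatory requirement
    "donor": 3 * 365,      # e.g., critical constituent data
    "general": 90,         # everything else, expired to limit storage growth
}

def is_expired(data_type: str, backup_date: datetime, now: datetime) -> bool:
    """True if a backup of the given type has aged past its retention window."""
    days = RETENTION_DAYS.get(data_type, RETENTION_DAYS["general"])
    return (now - backup_date).days > days
```

Writing the policy down this explicitly, even informally, makes it easier to review with leadership and to spot data that is being kept longer (or expired sooner) than intended.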
Data Protection in the Cloud
Most of the information on data protection listed above focuses on protecting on-premises systems and data. However, the principles can still apply to cloud-based data such as Microsoft 365 or G Suite.
Many organizations assume that cloud service providers protect their data, but in reality the cloud provider is typically responsible only for the availability of the application or infrastructure. Organizations are therefore almost always responsible for their own data protection. There are a few ways of protecting SaaS workload data. The most common is a cloud-to-cloud backup.
Cloud-to-cloud backup still somewhat follows the 3-2-1 principle, since a new copy of data is created and placed somewhere separate or off-network from the original data. Other solutions can send SaaS backup data back on-premises, assuming storage is available, or to another public cloud such as AWS or Azure. Choosing which backup method to use for your cloud-hosted data depends on how much data you have, how critical it is to daily operations, and what level of security is needed to protect it.
Another important part of your data protection strategy is ensuring that you have a plan for recovery. Here are some things to consider when planning.
- Can you restore to dissimilar hardware if necessary (bare-metal disaster recovery)?
- Can you restore individual applications as needed? Which are most important?
- Can you recover entire virtual machines in case of a critical failure?
- Can you granularly restore items — individual files and folders or application-specific items such as individual emails?
Once your organization has determined the restore features required for your needs, it is time to put them to the test. Testing your restore capability assures you that your backups are ready for recovery and allows you to identify any hiccups in the process.
For example, important donor information may be stored in a database. To ensure that you can recover that data, it may be wise to create a new instance of the database and use your backup solution to restore to that new instance. If the operation is successful, you know you will be able to recover. If not, you know some troubleshooting is needed so that the same problem does not surface during a real data loss event.
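For file-based data, a restore test can be partly automated by comparing checksums of the restored copies against the originals. The following is a small sketch of that idea, assuming you have restored a backup into a scratch directory; the function names are illustrative, not from any particular backup product:

```python
import hashlib
from pathlib import Path

def checksum(path: Path) -> str:
    """SHA-256 of a file, read in chunks to handle large files."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_restore(original_dir: Path, restored_dir: Path) -> list[str]:
    """Return relative paths of files that are missing from, or differ in,
    the restored directory — an empty list means the test restore matched."""
    problems = []
    for original in original_dir.rglob("*"):
        if original.is_file():
            rel = original.relative_to(original_dir)
            restored = restored_dir / rel
            if not restored.is_file() or checksum(original) != checksum(restored):
                problems.append(str(rel))
    return problems
```

Running a check like this after each test restore turns "I think the backup worked" into a concrete pass/fail result you can record.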
Data Recovery in the Cloud
Again, much of the information presented here applies mainly to on-premises data, but testing recovery for backups of cloud workloads is just as important. Luckily, the restore options of most cloud-to-cloud backup solutions are flexible. Options may include direct download, creating shareable links, or restoring to the original location (e.g., Microsoft OneDrive).
Common Mistakes to Avoid
There are several typical pitfalls that both for-profit and nonprofit organizations run into from time to time. Here are a few things to keep in mind to avoid them.
- Avoid using the same system and storage for both production and data protection. If the device fails, recovery may be difficult or impossible.
- Do not store off-site media in poor conditions — the trunk of a car is technically off-site, but one hot day can entirely compromise the media and cause data loss.
- Be careful when setting up backup image retention (how long old backups are kept). It is possible to overwrite existing backups accidentally. This commonly occurs when disk media approaches full capacity or tapes are incorrectly flagged as overwritable.
Creating a comprehensive data protection strategy can seem challenging at first, but the framework provided here is a good first step toward building a strategy that fits the requirements of your organization. Also be sure to check out TechSoup's Nonprofit Disaster Planning and Recovery resources to learn more ways to prepare for the worst at your organization. With a solid plan in place, a disaster of any kind won't make you skip a beat in the important work you perform in your communities each day.