Top 20 Moments in the History of Backup and Disaster Recovery

History

13.7 billion years BC – The universe begins as a singularity; those who believe in the “big bang” theory suggest the disaster is on-going…

3.8 billion years BC – The start of life on Earth. The first cell is thought to have arisen from self-replicating RNA, which later developed into DNA. DNA is a store of biological data: the genetic information that allows all modern living things to function, grow and reproduce. Put another way, you are the backup of your parents. Say hi to the therapist for me.

65 million years BC – Dinosaurs, not backed up.



48 BC – The burning of the Library of Alexandria. Among other entries in any “Top 10 Lost Books of All Time,” the second book of Aristotle’s Poetics went up in smoke, and humanity began to realize the fatal flaw in its cunning backup plan: paper is actually quite flammable.

1347 AD – The first known insurance contract is signed in Genoa, Italy. This was great for those buying and selling goods and owning property, but information is difficult to value: most people would rather have their data back than receive compensation for its loss.

1436 AD – Johannes Gutenberg, a former goldsmith, created the first printing press in Germany. He used his knowledge of metalwork to fashion letters out of an alloy, pressing these against ink and then paper to create a copy. This made the printing of multiple copies considerably faster, a great step forward in data resilience.

1539 AD – Image-based backup, born. Henry VIII, King of England, was trying to decide whom to marry next, so he sent the artist Hans Holbein to make reliable copies of what the European princesses on his shortlist looked like. Based on these images, Henry made his choice and proposed to Anne of Cleves, only to discover she looked nothing like he expected. Corrupt data/bad copy.



1964 AD – Mass-market computing begins: the Programma 101 is unveiled to the public at the New York World’s Fair. One of these computers was used on Apollo 11, and it was pretty much… a calculator. “One small step…” (at a time!)

1972 AD – Mainframe computers deliver applications and data at high speed to hundreds of users, while built-in hardware redundancy ensures exceptional RPOs and RTOs. The ancient Sumerians would have just loved this.

1990 AD – Arcserve 1.0 released by Cheyenne Software. The age of distributed computing is in full swing, and it is all about backing up to those little rectangular things called “tapes.”

1998 AD – VMware founded in Palo Alto, California. Although the concept of a hypervisor originated in the 1960s, it was VMware that introduced hardware virtualization to the mass market. Virtualization would go on to revolutionize backup and disaster recovery.


2006 AD – XOsoft’s WANsync technology is integrated into Arcserve. For the first time, mid-market users can perform both backup and full system failover from one solution.

2008 AD – Microsoft releases its competing product to VMware: Hyper-V. If you weren’t virtualized before, you are now. Specific software for virtual backup exists, but there is little integration with physical servers, tape backups, or cross-platform Microsoft/Linux environments.



2016 AD – You are here.

Please register to see a live demo of Arcserve UDP here.

Or download a free copy of Arcserve UDP here.

Would you like to discuss how to get the best pricing for Arcserve, or do you have any specific questions about the technology?

Drop me a mail with your contact details and I can help: louis.cadier@arcserve.com

Arcserve RHA vs Zerto

Choosing the right replication solution for your organisation can be tricky, if not daunting. Arcserve RHA or Zerto? That is the question! In this post we highlight the key features and benefits of both products to make the decision-making process easier for you.


In short, Zerto is an agentless hypervisor-level VM replication solution, whereas Arcserve RHA is an agent-based real-time replication and high-availability solution.

Architecture 

Zerto requires a ZVM (Zerto Virtual Manager) on each site, a ZVA (Zerto Virtual Appliance) on each virtual host and a ZCM (Zerto Cloud Manager) to manage multi-tenancy. This can range from a minimum of 4-5 servers upwards, depending on the number of hosts.

Arcserve RHA requires a minimum of one control service engine with the RHA manager, or two servers if redundancy is required for scenario management.

Overview

Zerto: 

  • Once the sizeable installation is complete, Zerto allows simple DR VPGs (Virtual Protection Groups) to be created. This doesn’t involve much configuration beyond pointing to the secondary-site ZVM and some re-IP configuration for the DR site.
  • SLAs can be created to ensure VPGs meet their RPOs.
  • Zerto replicates VM files at a block level to the DR-site spool and only creates and powers on a VM on the DR hypervisor once failover is initiated, regardless of whether it’s a test or a real scenario. Therefore Zerto cannot guarantee an RTO.
  • Zerto does not test application or data consistency within the guest OS of the protected VM.
  • When a scenario involves a transaction-intensive database environment, an agent must be loaded onto the specific VMs. Guest quiescing is then initiated from Zerto to commit transaction logs (log truncation) so that the databases stay consistent on the DR side, as it’s an active-passive solution. SQL is only rebuilt at the point of failure from the latest replicated blocks, and you would need to consider factors like transaction size and long-running transactions that cannot be cleared from the log until they have completed. If a failover were to occur mid-transaction, the DR VM built on the other side would have inconsistent databases in a recoverable state.
  • Zerto has a journal history function where blocks can be rolled back to a certain point in time, but you would need to find a specific point at which all transactions had completed. Alternatively, you can load a Zerto agent that will quiesce the guest OS, consuming VM resources and IO. Zerto recommends running it no more often than every 4 hours, inflating your RPO to 4 hours.
  • Zerto’s failover process requires the user to log into the site ZVM and press a failover button. There is no HA or automatic failover option.

Arcserve RHA:

  • Arcserve RHA requires a control service engine to be loaded onto the server managing the HA or DR replication scenarios. A control engine agent is loaded onto each protected server.
  • The scenario setup does require more user involvement. Arcserve allows for DR file and application replication, where failovers are manual, and for full-system or application HA, where failover can be manual or automated with integrated DNS changes to the local DNS server.
  • In a replication scenario, a live DR VM or server is required to receive replicated files, and that server must be set up by the user. With application HA, the application must be installed and configured on that server as well. This does require more setup and running compute resources on the DR site, but it has added benefits.
  • Arcserve RHA allows for replication or HA across physical environments, virtual environments, or a mixture of both (P2V, V2P, V2V and P2P), configured to fail over automatically or through user intervention. This provides RPOs and RTOs measured in seconds.
  • Arcserve has another feature called ‘Assured Recovery’ that allows for automated HA and DR testing and, unlike Zerto, tests the application in the guest OS as well as the data consistency with the master server.
  • Other features include data rewind (similar to journal history), where you can rewind changes made to an application or to OS files, unlike Zerto, which works at the hypervisor block level only.
  • Synchronization between servers can be done at a block or file level.
  • The entire solution integrates into Arcserve UDP (the backup suite) and is managed through one console with one license.
  • To avoid the large setup of building DR VMs for your scenarios, you can use the backup feature of UDP to replicate source servers and export them as a VSB (Virtual Standby); the VSB can then be used in an HA or replication scenario.
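As an illustration of the block-level synchronization mentioned above, the general technique is to hash fixed-size blocks on both sides and re-send only the blocks whose hashes differ. This is a minimal Python sketch of that idea, not Arcserve’s actual implementation; the 4 KB block size and the function names are my own assumptions.

```python
import hashlib

BLOCK_SIZE = 4096  # illustrative fixed block size (4 KB)

def block_hashes(data):
    """Hash each fixed-size block so two replicas can be compared cheaply."""
    return [hashlib.sha256(data[i:i + BLOCK_SIZE]).hexdigest()
            for i in range(0, len(data), BLOCK_SIZE)]

def changed_blocks(master, replica):
    """Indices of blocks that differ and therefore must be re-sent."""
    m, r = block_hashes(master), block_hashes(replica)
    return [i for i, h in enumerate(m) if i >= len(r) or r[i] != h]

# One changed byte in an 8 KB volume flags a single 4 KB block, not the whole file.
master = bytearray(b"a" * 8192)
replica = bytes(master)
master[5000] = ord("b")
print(changed_blocks(bytes(master), replica))  # [1]
```

File-level synchronization, by contrast, compares and transfers whole files: simpler, but it moves far more data when only a few bytes change.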

Quick Feature Comparison

[Image: Arcserve RHA vs Zerto feature comparison table]

The way we look at backups – the simplest method

 


Flat File / File Level Backup Versus Full System State/Image Backup

There are many ways in this day and age to back up your critical data, but you must consider what percentage of your business depends on your IT infrastructure and whether your backup method meets your business requirements. In the past, the cost of backup software, hardware and target destinations such as tape and disk meant you were limited to protecting only your critical servers, and the backup method chosen was often the cheapest round-trip solution. Even now, many companies back up only critical servers, and some only the critical data on those servers, through file-level or flat-file backups. Changes in technology and the growth of the cloud have made DR strategies affordable, leading many organizations to plan them. This introduces the RPO (recovery point objective) and RTO (recovery time objective) as key measures in any DR strategy.
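To make RPO and RTO concrete: your worst-case RPO is bounded by the time since your last recovery point, and your RTO is roughly the sum of every step needed to bring the service back. A minimal sketch with hypothetical timings (the dates and step durations below are invented for illustration):

```python
from datetime import datetime, timedelta

def rpo_at(failure_time, last_backup_time):
    """Worst-case data-loss window: time elapsed since the last recovery point."""
    return failure_time - last_backup_time

def rto_estimate(step_durations):
    """Rough RTO: the sum of every manual recovery step."""
    return sum(step_durations, timedelta())

# Hypothetical example: nightly 02:00 backups, failure at midday.
last_backup = datetime(2016, 3, 1, 2, 0)
failure = datetime(2016, 3, 1, 12, 30)
print(rpo_at(failure, last_backup))  # 10:30:00 of potentially lost data

# OS rebuild + application install + patching + flat-file restore
steps = [timedelta(hours=2), timedelta(hours=1),
         timedelta(hours=3), timedelta(hours=2)]
print(rto_estimate(steps))  # 8:00:00 before the service is back
```

The point of the comparison that follows is that the backup method you choose largely determines which of those recovery steps you can skip.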

What defines a critical server? Here is a simple explanation of ‘mission critical’.

Mission critical refers to any factor of a system (equipment, process, procedure, software, etc.) whose failure will result in the failure of business operations.

Before I go into comparing the two backup methods, what must first be understood is that backups are important, so make sure you choose a solution that works for the needs of your environment. More important still, in my opinion, is the restore capability: restoring your data is even more crucial than backing it up. You need to ensure that the solution you use can easily restore critical data when needed.

What you need to define is whether you need to be able to restore a critical business service, or just critical data when it is lost. If you need to restore a service, then you should also plan your backup strategy around your current or future disaster recovery strategy.

Good backup infrastructure is the foundation for any disaster recovery plan.

Choose a backup strategy that supports your disaster recovery requirements for recovering from a disaster.

Flat File & File Level Backup Methods

File systems store files and other objects only as a stream of bytes, and have little or no information about the data stored in the files. Such file systems also provide only a single way of organizing the files, namely via directories and file names.

A flat file is a file containing records that have no structured interrelationship. The term is frequently used to describe a text document from which all word-processing or other structural characters or mark-up have been removed, or a binary file such as an .MDF, .CTK or .BIN file.

When referring to flat-file and file-level backups, this generally entails a scheduled job that backs up specific individual files: either flat files, in the case of a database server’s DB files, or files and folders on a file server or an application server with a critical file store.

The main benefit of this is backing up a small amount of data on a large volume rather than the entire volume. For example, consider a backup consisting of 10 gigabytes of critical database files residing on a 120-gigabyte hard disk; the other 100-odd gigabytes could contain the OS and other non-crucial data. Another advantage is that file-level backups allow for individual file restores, which are useful when you have just a few files to restore and do not need to restore all the data.
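At its simplest, a file-level backup job is just a scheduled copy of the selected critical files to a target destination. The sketch below illustrates the idea in Python; the paths in the comment are hypothetical, and a real product adds scheduling, cataloguing and retention on top of this.

```python
import shutil
from pathlib import Path

def file_level_backup(sources, target_dir):
    """Copy only the selected critical files to the target, preserving timestamps."""
    target = Path(target_dir)
    target.mkdir(parents=True, exist_ok=True)
    copied = []
    for src in map(Path, sources):
        dest = target / src.name
        shutil.copy2(src, dest)  # copy2 keeps modification times and metadata
        copied.append(dest)
    return copied

# Hypothetical example: protect the 10 GB of database files,
# ignoring the other 100-odd GB of OS and non-crucial data.
# file_level_backup(["D:/data/sales.mdf"], "E:/backups/nightly")
```

Note what the sketch does not do: it knows nothing about the application that owns those files, which is exactly the consistency limitation discussed below.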

However, suppose IT engineers are installing service packs for production software such as Exchange, SQL Server, or other business-critical software, or installing new hardware or software, and you suffer a full system-state failure on your business-critical servers. With only file-level/flat-file backups in place, your course of action will be (and this will add up to your RTO):

  1. OS rebuild
  2. Application install
  3. Patching (in some cases you can’t restore system databases unless the patch level matches the build at the time they were backed up)
  4. Then you can begin to restore individual flat files to your new server

So file-level backups have their limitations. In some cases, operating system files or locked files will not be backed up, and the data discovered in the volumes is not structured, meaning the backup will not be application-consistent. This backup type is therefore not suitable for restores in the event of a disaster, nor for granular restores within applications such as databases, email, etc.

Building a disaster recovery plan on this method is not fail-proof, and it’s not a flexible solution. It’s very manual and involves a lot of maintenance and failover testing. Unless you incorporate Arcserve Replication and High Availability into your file-based strategy, you don’t have continuity (I will post more on RHA in upcoming posts).

Full System State/Image Backup

Image/system-state backups, agent-based or agentless, on virtual or physical machines, allow you to select an entire drive, partition, or machine and back up everything selected. This backup then covers your system data, the crucial platform for your critical company service or data. Examples of these situations are when your hard drive dies, your Windows guest will not boot or is corrupt, or your servers are stolen (“This is Africa”). Or, as mentioned earlier, a technical team’s update rollout goes horribly wrong, or another major disaster requires a complete rollback of the entire machine. The advantage of image-based backups is that all of the information can be collected in a single pass, providing an updated bare-metal restore (BMR) capability with each backup.

You can restore to dissimilar hardware. Image backups also allow file-level restores from the backup image. You can recover servers remotely across wide area networks (WANs) or local area networks (LANs), and save backup images to a variety of media. You can convert images of your physical servers into virtual instances, and convert virtual images back to physical instances.

Having multiple ways to recover your data allows you to create a suitable backup strategy for your environment and grow it into a disaster recovery plan.

You can also keep pre-exported recovery points, which allow for minimal downtime in an event. These can be leveraged for maintenance, platform migrations, dev testing and upgrades without having to rebuild servers from the bare bones up. That way, if something goes wrong with a change, you can bring the machine back to where it was before the change very quickly.

One of the main reasons flat-file and file-level backups are chosen over full-system or image-based backups is target storage availability: most customers don’t have the storage capacity available and can’t afford a larger storage device. Because image-level backups use snapshots, all of the data, including deleted files and empty disk blocks, is backed up. The answer to reducing the amount of data stored is data deduplication. Arcserve UDP can achieve very impressive deduplication and compression ratios, making it comparable to a file/flat backup strategy in target data consumption. For example, in one deployment where I backed up full images of 179 servers totalling 7 TB of raw data, roughly 200 GB was held on disk after deduplication and compression.
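The deduplication behind ratios like that can be illustrated in a few lines: split the backup stream into chunks, hash each chunk, store every unique chunk only once, and keep a “recipe” of hashes to rebuild the stream. This is a conceptual sketch of the general technique, not UDP’s actual engine; chunk contents and sizes are invented for illustration.

```python
import hashlib

def deduplicate(chunks):
    """Store each unique chunk once; return the store plus a recipe of hashes."""
    store, recipe = {}, []
    for chunk in chunks:
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)  # identical chunks are stored only once
        recipe.append(digest)
    return store, recipe

def restore(store, recipe):
    """Rebuild the original stream from the recipe."""
    return b"".join(store[d] for d in recipe)

# Many servers share most of their OS blocks, so the store stays small.
chunks = [b"shared-os-block"] * 100 + [b"unique-db-block"]
store, recipe = deduplicate(chunks)
print(len(chunks), "chunks stored as", len(store))  # 101 chunks stored as 2
```

Compression is then applied on top of the unique chunks, which is how terabytes of largely similar server images can shrink to a few hundred gigabytes on disk.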

Conclusion

File-level backups basically come down to selecting the files and folders you want to back up, and choosing where those backups should go.

Image/system-state backups allow you to select an entire drive, partition, or machine, and back up everything selected.

In my opinion, image-based backups offer more features than file-based ones. In addition, with UDP all you need is one backup of your server, and you can replicate, export, perform BMR, and more.