Arcserve UDP Windows Remote BMR with WDS

 

With the new release of UDP 6 comes instant Linux BMR (Bare Metal Restore), which allows you to recover physical hardware remotely and instantly. This feature would also be great for Windows environments, but it is not yet available there.

A great solution for the remote recovery of physical Windows servers is to use Windows Deployment Services (WDS), integrating Arcserve UDP 6 restore capabilities with WDS to allow remote physical restores. It is no longer necessary to have an engineer standing in front of your data centre rack to run system state recoveries on your physical systems.

In this post I explain how I have created such an implementation and tested it!

The prerequisites are a Windows server and a DHCP server (I used virtual servers in my testing, but this applies to physical servers too).

The process is to access your physical server through its remote management interface (e.g. iLO, IPMI, iDRAC or similar) and set the server to network boot, at which point the server will PXE boot into the Arcserve Windows BMR boot image.

I used one server for the implementation: Server 2012 R2, running the Arcserve UDP Console and RPS roles. I added the Windows roles WDS and DHCP.

Installation guidance for WDS and DHCP:

How to install and configure Windows WDS

Installing and configuring DHCP

 

This server had the Arcserve agent installed, so I created BMR ISOs for x86 and x64 compatible with ADK 8.1. You can create both Windows 8 and Windows 7 compatible boot kit ISOs to cover the spread of Server 2008 and 2012 physical servers in your environment.

*One important thing to note: if you run WDS and DHCP on the same server, some WDS properties need to be altered, as both services listen on the same port (UDP 67). WDS must be configured not to listen on the DHCP ports and to advertise itself to PXE clients via DHCP option 60.

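The same change can be made from an elevated command prompt; a minimal sketch, assuming a default WDS configuration (the equivalent checkboxes are on the DHCP tab of the WDS server properties in the MMC):

rem Tell WDS not to listen on the DHCP ports and to advertise itself via DHCP option 60
wdsutil /Set-Server /UseDhcpPorts:No /DhcpOption60:Yes
rem Restart the WDS service so the change takes effect
net stop WDSServer
net start WDSServer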

Once your BMR ISOs are created, browse to their location and mount the ISOs.

Then open the WDS MMC through Server Manager: under “Boot Images”, select Add Boot Image and follow the wizard.

How to add boot images

 

Unfortunately WDS can’t use ISO format boot images and requires .WIM.

Browse the image location to (where ISO: is the drive letter of the mounted ISO):

ISO:\AMD64\SOURCES\BOOT.WIM for X64

ISO:\X86\SOURCES\BOOT.WIM for X86

Name your images, as this name will be displayed on the boot screen.
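
If you prefer to script this step, a rough sketch of the command line equivalent, assuming the x64 BMR ISO is mounted as drive E: (adjust the drive letter and path for your environment):

rem Add the Arcserve x64 BMR boot image to WDS
wdsutil /Add-Image /ImageFile:"E:\AMD64\SOURCES\BOOT.WIM" /ImageType:Boot

The image can be renamed afterwards in the WDS MMC so the boot menu entry is easy to recognise.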

[Screenshot: Add Image wizard]

Once the image has been created and stored, you can begin a network boot: log into the server’s remote management interface and initiate a boot from the network device.

DHCP will assign an IP address and the client will discover the PXE proxy (the WDS server).

Press F12 to Boot into PXE.


You can now see available images to boot from.

[Screenshots: available boot images and the Arcserve boot screen]

After selecting a boot image, you will see the Arcserve Bare Metal Recovery screen.

This is great for large workstation environments and multiple remote sites.

 

How Arcserve fits into your IT strategy as an SMB

Where are you now? Where are you going? Where do you want to be? Same product, same license.


This is how Arcserve fits into your IT strategy as an SMB.

When building your company’s IT infrastructure for the first time, you would take the most minimal approach. For example, virtualization would most likely be out of reach initially, depending on your IT budget. You would probably have a physical Active Directory server (AD) and an application server (APP) or file server (FS), all with internal disk. Mail would be outsourced to a service provider (SP) or you would use Office 365.

Your data loss risk at this point is high: if your FS/APP server were to lose a disk, you would suffer unrecoverable data loss, assuming the server volumes have no RAID set and are isolated.

The initial approach, considering the IT budget of a small to medium company, would be to have a backup server, or a backup role on an existing server: in this case a server running Windows Server 2008/2012 and Arcserve UDP Standard Edition, backing up to a cheap storage device such as a NAS or external disk. 1TB – 2TB of backup storage will allow you to protect an estimated 4.5TB of source data with a rolling 30-day retention, allowing you to restore back to any point within an entire month.

Your restore process would be either file level restores or a bare metal recovery with a USB or ISO boot kit for a full system state recovery of one of the servers.

Your next step would be to consider what would happen if you lost your entire IT infrastructure due to theft, flood, fire, etc. Only your mail at this point would be intact and available. This is where offsite backup comes in: keeping copies of your backups offsite to ensure backup redundancy, either by migrating disk backup points to tape on a weekly basis and storing the tapes offsite, or by replicating to cloud SP storage.

Your restore process would entail repurchasing lost hardware, rebuilding your backup server if necessary, and either delivering tapes to site for the restore process, replicating backup points back to site, or restoring over the WAN from cloud SP storage (whichever is more cost effective and less time consuming). This would be considered a poor RTO (Recovery Time Objective).

A few months or years down the line, the business has grown considerably and there are now double or triple the number of employees; new hardware and applications have been purchased to accommodate the growth. A virtualization approach has now been taken and a few physical hosts or a SAN storage device are in place. The AD, FS, APP and SQL servers have been virtualized and, in addition, the mail environment has been brought in-house and a virtual Exchange environment has been built for more efficiency and to reduce data costs.

The backup server has now been upgraded to an internal RAID 6 volume with 5TB-10TB of backup capacity, and licensing has been upgraded to Arcserve UDP Advanced to cater for Exchange and SQL; this will enable you to protect an estimated 20TB of source data with a rolling 30-day retention, allowing you to restore back to any point within an entire month.

Calculations would show that if your IT infrastructure were to fail or go down, you would lose thousands of Rands every hour. You could restore a VM instantly with Arcserve UDP Instant Restore, but in the event of power failure, theft, flood, etc., you would be looking at an estimated 24 to 72 hours to restore services from offsite copies; this, however, depends on many factors.

Your approach now is to have a disaster recovery (DR) strategy: repurpose the replaced hardware, or purchase new, less expensive hardware, and build a DR cold site at a branch office or co-locate in an SP data centre rack. You would then virtualise that hardware and build a second backup server as a virtual machine (VM). Now you have an offsite target to replicate to. Once replicated, you would export backup points as virtual machines onto the cold site; this is known as ‘Virtual Standby’. Each replication will update the cold site virtual standby machines.

In the event of a disaster at your HQ, you would manually power up the virtual standby VMs and redirect users to a temporary office or grant them remote access to services from the DR site.

Your RTO (Recovery Time Objective) here could be anything from minutes to hours, depending on system boot time and what is required to connect users to services, e.g. VPN, remote RDP, etc.

Your DR cold site could also be cloud compute and storage resources with a cloud SP, where you have a hosted Arcserve UDP server. This is a simple, entry-level approach to DR and is most likely suited to small to medium businesses.

A few more years go by and your business has grown into a large organisation heading for the enterprise space. Your IT infrastructure will have grown significantly, with multiple branch offices all connecting to your company’s services in your server room or even a data centre.

At this point, moving to a data centre or to a local SP cloud platform is the best route to ensure redundancy and resilience across your physical IT infrastructure, e.g. redundant power, redundant cooling, redundant WAN links, etc. This is all to reduce downtime, as the impact of losing critical services would now cost hundreds of thousands of Rands every minute or hour.

However, there are still factors to consider even though the physical infrastructure is redundant; you could still have system outages, such as bad OS patching, data corruption, human error, virus infections, etc.

As a DR strategy is still required, you would start looking at UDP Premium / Premium Plus, for the simple reason that you want backup, DR and high availability for critical applications.

One can then create high availability scenarios with Arcserve that allow for instantaneous failover to a second server so that no service downtime is experienced, while maintaining the DR strategy with a cold site virtual standby or a warm site live replication with an RPO (Recovery Point Objective) of seconds between business critical servers and DR servers.

Based on implementing the above Premium / Premium Plus data continuity solution, your restore options will include file level restore when needed and full system state Instant VM restore to your production site. During a disaster scenario, users can be seamlessly redirected to the replica servers in the HA scenario relationships within the DR warm site, and virtual standbys can be powered up as needed for less critical services/servers in the warm site.

Certain servers will have a higher priority than others; this is why one would combine multiple strategies and features to provide the full solution while staying within budget.

This can all be done with one license and one software vendor, so there is less complexity and it stays simple and easy to use.

Regardless of your size, from SMB to enterprise, we at Arcserve have a solution for you that is more than just backup.

Are you ready to crush the competition with UDP V6?  Let’s go get them!


It is almost here! UDP V6, Project “Tungsten”, will be generally available in February. This new version will further establish UDP as the most modern and leading solution in the market today. With UDP V6, we are squarely focused on going up-market and have added many new capabilities that allow us to leapfrog our competition. This is a very exciting time for Arcserve as we enter our final quarter of the fiscal year. The new version includes many enhancements:

Improved Tape Unification and Ease of Setup with wizards, direct management from the management console and a new unified product installer.

Faster, More Flexible Recovery Options with new enterprise storage array snapshot support to enable high-performance, low-impact snapshots of virtual and physical production servers, a new Instant VM for fast recovery, a new Instant Bare Metal Restore (BMR), and support for VMware vSphere® version 6.

Windows Platform Enhancements with support for Windows 10 and Exchange 2016, new Exchange granular recovery support, a new reboot-less Agent for fast deployment, and RPS File Copy to a public/private cloud for archiving or storage cost reduction.

Many Linux Platform Enhancements, such as support for RHEL and CentOS 7, Oracle Linux (RHEL compatible) and SLES 12; file/folder level recovery of Linux VMs backed up via agentless, host-based backups on vSphere and Hyper-V hosts; source-side backup and replication to RPS; infinite incremental backup; RPS to RPS replication; BMR of UEFI systems; archive to tape from RPS; and “sudo” authentication for the backup source (improves security).

Management Enhancements and Third Party Integration with role-based administration, WAN management, reboot-less Agent deployment, a new Command Line Interface (CLI), and enhanced Agent and Console v2.0 APIs and DB Schema documentation.

The product marketing and product management teams will host training sessions and by attending, you will learn more about the new release, how to explain and position its new features, and we’ll review updated tools to help you sell.

When?

Tuesday, January 19th at 9:30 AM CT/3:30 PM GMT

Register here.

CTRL Z your life

After a busy day of writing emails, copying and pasting into spreadsheets and tweaking objects in this and that presentation; I was finishing up the last of it… tapping away on my laptop at the kitchen table when my right hand suddenly slipped and the mouse went “Saturday Night Fever” on me across the tabletop only to knock a glass of water off the side.

As the glass fell in slow motion, my left hand, still resting on the keyboard, jumped into action and out of pure reflex hit CTRL Z. To paint the picture for those of you who do not use keyboard shortcuts, I tried to stop a real life glass of water from breaking on my kitchen floor by using a computer’s “Undo” command. And… smash.

This immediately provided my Mrs. with a new entry for her long catalogue of ‘silly things Louis has done’, the source material for her best jokes at my expense. It was one morning as I melodramatically writhed in pain following a stubbed little toe that she suggested satirically “Why don’t you just hit CTRL Z?” Funny…I’m told. But it got me thinking about it again and you know what? I need CTRL Z in my life.

This is the stuff science fiction is made of! Oh to imagine what it would be like to live in a virtual world where you can pick the rules, read the dark warnings of William Gibson’s Burning Chrome or enjoy the pop aesthetics of The Matrix. However, as we spend even more time online, our lives routinely uploaded there, perhaps the future is closer than we think.

What commands would you want in your virtual world? I am just a Backup and Disaster Recovery guy so please forgive my lack of imagination for this bucket list of Louis’ Must Have Commands For His Virtual World:

1. Save a Recovery Point from when I was 21 so I can go back and have hair again any time I want.

2. Replicate myself on holiday. After deduplicating and compressing myself so that I could travel on even a modest connection, I would encrypt myself and then, either real time or scheduled, replicate myself to a datacenter in Barbados. NICE!

3. Use Virtual Standby to create a lookalike of me. Not feeling like work today? I would spin up a Virtual Machine copy of myself, fully equipped with all the relevant data and applications, and send the poor chap into work instead.

4. Archive my fashion mistakes to the cloud. This is pretty much all the way from 1995 till present day with only a few exceptions like weddings and one or two fancy dress parties. I would take Granular Restore with that, just in case I am ever feeling nostalgic and want to have a laugh at one or two badly dressed memories without having to remember the whole lot.

5. Make everything much easier to do than it currently is. I’m thinking of absolutely everything here; but specific examples include: baking a decent macaron, DARPA’s math challenge and Morris dancing.

6. Deduplicate plastic bags. If only we could delete all the unnecessary plastic bags in the World! Well this is my virtual world and we just did it! Of course we’ll keep one plastic bag to put in a museum somewhere…

7. SSD my brain. Daft Punk have already had this upgrade.

8. Intuitively know exactly what to do and when to do it. In my virtual world I’m not asking to be smarter, I’m just asking that everything else is simpler.

9. Truncate my logs before I go to bed.

There is probably a far more controversial version of this list available to anyone who uses Adobe Photoshop extensively; but all of the above mentioned Backup and Disaster Recovery capabilities are available at this very moment with Arcserve UDP in both software and appliance options. And for those of us left wanting CTRL Z right now and in the real world, Virtual Reality exists via our smartphones and we still have the power to untag bad photos of ourselves on the likes of Twitter and other social media platforms. Things are looking up – we’re getting there!

Online Backup of Lotus Domino with Arcserve UDP

Since Lotus Domino is not a VSS-aware application, database consistency must be guaranteed during the Arcserve snapshot of the Lotus Domino server.

When using Lotus Domino as your corporate messaging system, database consistency is guaranteed by running custom quiescing scripts (pre-freeze and post-thaw, or a cache flush) stored in C:\Windows on the Domino server during the backup job.

See below: Option 1

Create one batch file. This will drop all connected users and flush the database cache.

Run the following pre-backup script:

Cache-Flush.bat
rem Drop all connected Domino users
"C:\Program Files\IBM\Lotus\Domino\nserver.exe" -c "drop all"

timeout /t 5 /nobreak

rem Flush the Domino database cache
"C:\Program Files\IBM\Lotus\Domino\nserver.exe" -c "dbcache flush"

timeout /t 5 /nobreak

rem Log the time the flush completed
net time \\%computername% >> C:\Arcservebackup.log

Save this as a .bat file.

On the backup plan, add this:

[Screenshot: backup plan]

See below: Option 2

This will stop the Domino services to bring the databases to a consistent state, and the snapshot will then run.

After the snapshot process, the service is started once again.

You can add Option 1 to the pre-freeze script to speed up the process (a short sketch of this follows the scripts below).

Create 2 Batch Files

Run one as a pre-backup script and the other as a post-backup script:
 
pre-freeze.bat
rem Log the start time
net time \\%computername% >> C:\scripts\logs\freeze.log
rem ***************************************
rem Creates an inventory of all running Domino processes
rem (pslist and pskill are from the Sysinternals PsTools suite)
rem ***************************************
rem Start with a fresh process inventory for this run
if exist C:\scripts\logs\pid.lst del C:\scripts\logs\pid.lst
for %%P in (
nadminp naldaemn namgr ncalconn ncatalog nchronos ncollect ncompact nconvert
ndesign ndircat ndrt ndsmgr nevent nfixup nhttp nhttpcgi nimap nimsgcnv
nisesctl niseshlr nldap nlivecs nlnotes nlogin nmtc nnntp nnsadmin nnotesmm
nobject nomsgcnv nosesctl noseshlr notes npop3c npop3 nreport nrouter nreplica
nsapdmn nsmtpmta nsmtp nstatlog nstaddin nstats nsched nservice nserver
ntaskldr ntsvinst nupdate nupdall nwrdaemn nweb nxpcdmn nccmta ncctctl nccmctl
nccttcp nccbctl nccmin nccmout nccdctl nccdin nccdout ngdsscan ngsscan ngstmgr
) do (pslist | findstr /I /C:"%%P" >> C:\scripts\logs\pid.lst)
rem ***************************************
rem Stops the Domino daemon in a controlled fashion
rem (adjust the service name to match your installation)
rem ***************************************
net stop "Lotus Domino Server (LotusDominoData)"
rem ***************************************
rem Wait a fair amount of time for processes to stop
rem (sleep.exe is from the Windows Resource Kit; timeout /t 300 /nobreak also works)
rem ***************************************
sleep 300
rem ***************************************
rem If some Domino processes are hung, kill them all
rem ***************************************
for /f "tokens=2" %%I in (C:\scripts\logs\pid.lst) do pskill %%I
rem Log the finish time
net time \\%computername% >> C:\scripts\logs\freeze.log
post-thaw.bat
rem Restart the Domino service (use the same service name as in pre-freeze.bat)
net start "Lotus Domino Server (LotusDominoData)"
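
To combine Option 1 with Option 2 as mentioned above, a minimal sketch (assuming Cache-Flush.bat from Option 1 is saved to C:\scripts; the path is only an example) is to call it at the top of pre-freeze.bat, before the Domino service is stopped:

rem Flush the Domino cache and drop users before stopping the service (example path)
call C:\scripts\Cache-Flush.bat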

On the backup plan, add this:

[Screenshot: backup plan 2]

Arcserve RHA vs Zerto

Choosing the right replication solution for your organisation can be tricky, if not daunting. Arcserve RHA or Zerto? That is the question! In this post we highlight the key features and benefits of both products in an attempt to make the decision-making process easier for you.


In short, Zerto is an agentless, hypervisor-level VM replication solution, whereas Arcserve RHA is an agent-based real time replication and high availability solution.

Architecture 

Zerto requires a ZVM (Zerto Virtual Manager) at each site, a VRA (Virtual Replication Appliance) on each virtual host and a ZCM (Zerto Cloud Manager) in order to manage multi-tenancy. This can range from a minimum of 4-5 servers upwards, depending on the number of hosts.

Arcserve RHA requires a minimum of one Control Service server with the RHA Manager, or two servers if redundancy is required for scenario management.

Overview

Zerto: 

  • After the larger installation, Zerto allows simple DR VPGs (Virtual Protection Groups) to be created. This doesn’t involve much configuration beyond pointing to the secondary site ZVM and some re-IP configuration for the DR site.
  • SLAs can be created to ensure VPGs meet their RPOs.
  • Zerto replicates VM files at a block level to the DR site spool and only creates and powers on a VM on the DR hypervisor once failover is initiated, regardless of whether it is a test or a real scenario. Therefore Zerto cannot guarantee an RTO.
  • Zerto does not test application or data consistency within the guest OS of the protected VM.
  • When a scenario involves a transactionally intensive database environment, an agent is required on the specific VMs. Guest quiescing is then initiated from Zerto to commit transaction logs (log truncation) so that the databases stay consistent on the DR side, as it is an active-passive solution. The SQL VM is only built at the point of failure with the latest replicated blocks, and you would need to consider factors like transaction size and long-running transactions that cannot be cleared from the log until they have completed. If a failover were to occur during such a transaction, the DR VM built on the other side would have inconsistent databases in a recoverable state.
  • Zerto has a journal history function where blocks can be rolled back to a certain time, but you would need to find a specific point in time where all transactions were completed. Alternatively, you can load a Zerto agent which will quiesce the guest OS, thus utilising VM resources and I/O. Zerto recommends that it be run every 4 hours or more, inflating your RPO to 4 hours.
  • Zerto’s failover process requires the user to log into the site ZVM and press a failover button. There is no HA or automatic failover solution.

Arcserve RHA:

  • Arcserve RHA requires a Control Service to be loaded onto the server managing the HA or DR replication scenarios. An engine agent is loaded onto each protected server.
  • The scenario setup does require the user to be more involved. Arcserve allows for DR file and application replication, where failover is manual, and for full system or application HA, where failover can be manual or automated with integrated DNS changes to the local DNS server.
  • In a replication scenario, a live DR VM or server is required to receive the replicated files, and that server must be set up by the user. With application HA, the application must also be installed and configured on the replica server. This does require more setup and running compute resources on the DR site, but it has its added benefits.
  • Arcserve RHA allows for replication or HA across physical environments, virtual environments or a mixture of both (P2V, V2V, V2P & P2P), and this can be configured to fail over automatically or through user intervention. This provides an RPO and RTO of seconds.
  • Arcserve has another feature called ‘Assured Recovery’ that allows for automated HA and DR testing and, unlike Zerto, tests the application in the guest OS as well as data consistency with the master server.
  • Other features include data rewind (similar to journal history), where you are able to rewind changes made to an application or to OS files, unlike Zerto, which works at the hypervisor block level only.
  • Synchronization between servers can be done at a block or file level.
  • This entire solution can then integrate into Arcserve UDP (the backup suite) and is managed through one console with one license.
  • To avoid the large setup effort of building the DR VMs for your scenarios, you can use the backup feature of UDP to replicate source servers and export them as a VSB (Virtual Standby); the VSB can then be used in an HA or replication scenario.

Quick Feature Comparison

[Image: Arcserve vs Zerto feature comparison]

Deflate Your Bloated Backups With Arcserve Infinite Incremental Strategy

Incremental 1

Let’s start by explaining the image above. The blue illustration shows the most efficient way of backing up: one full backup followed by daily infinite incremental backups. The red illustration shows an older strategy (still used by many vendors’ backup solutions): a weekly full backup with incremental chains. As you can see, the required storage footprint is more than doubled in the red illustration. An infinite incremental backup strategy is becoming increasingly popular as organisations’ strategies and policies keep changing with new technologies.

To define “Incremental-Forever” aka infinite incremental backups:

The most basic form of incremental backup consists of identifying, recording and preserving only those files that have changed since the last backup. Since the daily change rate is typically low, incremental backups are much smaller and quicker than full backups. For instance, following a full backup on Friday, a Monday backup will contain only those files that changed since Friday; a Tuesday backup contains only those files that changed since Monday, and so on. In addition, the restore process is optimized, as only the latest versions of the blocks that belong to a restored backup are restored. Since the same area on the production disk is recovered only once, the same block is not written to multiple times. Therefore, one full backup followed by many backup increments acts as your retention, but with a lower overall I/O impact on your storage.
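
To put rough, purely illustrative numbers on the difference shown in the image above (hypothetical figures, before any deduplication or compression): for a 500GB server with about 5GB of daily change and a 30-day retention, incremental forever needs roughly 500GB + (29 x 5GB) = 645GB, whereas a weekly full strategy holds four or five full backups plus their incrementals, roughly (4 x 500GB) + (26 x 5GB) = 2,130GB. That is where the more-than-doubled footprint in the red illustration comes from.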

In addition to this, Arcserve allows for multi-level incremental schedules in one plan. This means you are able to add separate weekly, monthly and yearly schedules to the same job, consisting of incremental or full backups. Unlike the common backup software shown in the red illustration above, Arcserve’s infinite incremental backup allows a synthetic operation to create a new full backup whose size is limited to that of an incremental file instead of the complete size of a full backup file.

So you would see something similar to the below with Arcserve’s infinite incremental backup strategy:

Incremental 2

Infinite incremental forever backups may sound crazy; however, organizations with very long retention requirements should consider this philosophy.

Complications

After a bit of searching on the web for general concerns with infinite or incremental-forever backups, and from my experience with multiple organizations and setups, the main concern is this: should any one of the copies created fail, including the first (full) backup, restoration from the chain will be incomplete or impossible, and the longer you go without a new full backup, the more risk you take. What if corruption happens along the way and you lose an increment? You would lose the integrity of the chain beyond the last working backup.

Looking at the image below, Backup #1 is dependent on the full backup, Backup #2 is dependent on Backup #1, and so on. If corruption occurred on Backup #2, the last restorable point would be Backup #1.

Incremental 3

 

With Arcserve this is not the case. Arcserve has a completely different method of holding the restore points: each increment is essentially a pointer file rather than a reference to a fixed set of data blocks at the destination. Arcserve also allows a Verify backup to be run to check the reference points and rebuild the chain if needed. This protects your infinite retention chain and preserves data. A Verify backup can be run manually or scheduled as part of a multi-level incremental schedule. The image below shows how Arcserve holds this data compared to the other backup vendors shown above.

Incremental 4

I ran a lab test on this: from the middle of a protected server’s chain, I deleted an incremental pointer from the Arcserve backup destination. I then immediately tried to restore from the next point forward and received errors. I was able to restore to a point before the deleted point, but nothing after it. I then ran a manual Verify backup, after which I was able to restore from all points in the chain except the deleted one. Arcserve can thus repair corruption or loss in the chain, and statements such as “should any one of the copies created fail, including the first (full) backup, the possibility of restoration will be incomplete” no longer hold when using Arcserve Unified Data Protection.

Why infinite incremental with Arcserve UDP?

  • It reduces backup windows from hours to minutes for many applications, while providing faster recovery of your data.
  • In a virtual environment, further data reduction can be observed, with incremental changes being fed by CBT (Changed Block Tracking) at a lower block level.
  • It can support 24 x 7 backup strategy, reducing your RPO drastically.
  • It reduces costs by consolidating backup devices & backup storage across your infrastructure.
  • It reduces the media costs incurred from offloading to tape cartridges, which previous backup strategies needed in order to fulfil retention requirements with bloated backups (D2D2T). When infinite incremental is your strategy going forward, retention can be kept on disk and replicated offsite to disk for peace of mind. This reduces your RTO.
  • It enables full onsite and offsite data retention, compliant with industry standards and corporate data policies.
  • It reduces the amount of data that goes across the local area network (LAN) and what is replicated across your wide area network (WAN).
  • It reduces data growth, as all incremental backups contain only the block level changes since your previous (incremental) backup.
  • Source-side global deduplication on incremental backups makes backups even more efficient and shortens windows: no comparison with the backup target is needed, since only changed blocks are identified at the source.

Why should Arcserve be your infinite incremental backup vendor?

Because of the impressive data reduction ratios and cost savings that are achievable. Below is a breakdown of what you can achieve with Arcserve infinite incremental bundled with deduplication and compression (even more impressive in a virtual environment using Changed Block Tracking, where further data reduction can be observed).

A traditional infinite incremental backup before compression and deduplication, and no CBT:

Incremental 5

A Virtual CBT infinite incremental backup before compression & deduplication:

Incremental 6

A de-duplicated infinite incremental backup before compression:

Incremental 7

A compressed, de-duplicated infinite incremental:

Incremental 8 

It’s easy to see who the leader is in backup data reduction in the market, right? Don’t build your infrastructure around your backup software. Let Arcserve reduce your cost, backup windows, RTO, RPOs and maintenance.

 

 

 

 

Crazy. I Love Crazy.

 

[Photo: the Arcserve team with their award at VMworld 2015]

The picture for this post is the Arcserve team holding the award that they just won at VMworld 2015. Two things stand out for me:

1. IT IS A FANTASTIC ACHIEVEMENT. Arcserve is just one year out from its old CA parent. In that time, they basically had to rebuild their business process back end: new ERP, new CRM, new everything. At the same time, they enhanced the product to such an extent that it has won this award against some pretty well-entrenched VMware-aligned solutions.

We are achieving ridiculously good results; deduplication and compression of 12TB down to 199GB is one example. This performance, and a richness of features that lets a single solution go from simple backup right through to application high availability at a price point that makes other solutions weep, is seriously compelling for service providers, mid-market and enterprise customers.

2. THEY ARE CRAZY. These guys do things differently. They add fun into the equation and they make you want to deal with them. Just take this picture of them holding their award: they are doing it in front of their largest competitor’s stand and posting it all over social media. At VMworld Europe last year, they had a “silent club” where people put headphones on in a dark booth with a proper DJ and strobe lighting and danced away. And a few months ago, they drove around the UK with a massive green inflatable elephant, pumped it up in their partners’ offices and spoke about the “elephant in the room” and how they can help partners make more money and build better businesses.

You may have read my post “When Crazy Becomes The New Normal”; if not, go read it. Arcserve is Crazy. We love Crazy. We support Crazy. We love Arcserve. You should try some Crazy!

The way we look at backups – the simplest method

 


Flat File / File Level Backup Versus Full System State/Image Backup

There are a lot of methods in this day and age to back up your critical data, but what must be taken into consideration is the percentage of your business that depends on your IT infrastructure, and whether your backup method meets your business requirements. In the past, due to the cost of backup software/hardware and target destinations such as tape and disk, you were limited to protecting only your critical servers, and the backup method chosen was often the cheapest round-trip solution. Even now, many companies back up only critical servers, and some only the critical data on those servers through file level or flat file backups. Changes in technology and the growth of cloud technology have led many organizations to plan DR strategies, as these are now affordable. This introduces the RTO and RPO as key factors in any DR strategy.

What defines a critical server? Here is a simple explanation of ‘mission critical’.

Mission critical refers to any factor of a system (equipment, process, procedure, software, etc.) whose failure will result in the failure of business operations.

Before I go into comparing the two backup methods, what must first be understood is that backups are important, so make sure you choose a solution that will work for the needs of your environment. But even more important, in my opinion, is the restore capability. Restoring your data is even more crucial than having it backed up. You need to ensure that the solution you use can easily restore critical data when needed in your environment.

What you need to define is whether you need to be able to restore a critical business service, or just critical data, when it is lost. If you need to restore a service, then you should also consider planning your backup strategy around your current or future disaster recovery strategy.

Good backup infrastructure is the foundation for any disaster recovery plan.

Choose a backup strategy that supports your requirements for recovering from a disaster.

Flat File & File Level Backup Methods

File systems store files and other objects only as a stream of bytes, and have little or no information about the data stored in the files. Such file systems also provide only a single way of organizing the files, namely via directories and file names.

A flat file is a file containing records that have no structured interrelationship. The term is frequently used to describe a text document from which all word processing or other structure characters or mark-up have been removed, or a raw data file such as an .MDF, .CTK or .BIN file.

When referring to flat file and file level backups, this generally entails a scheduled job that backs up specific individual files (either flat files, in the case of a database server’s DB files, or files and folders on a file server or an application server with a critical file store).

The main benefit of this is backing up a small amount of data on a large volume rather than the entire volume. For example, consider a backup consisting of 10 gigabytes of critical database files that reside on a 120 gigabyte hard disk; the other 100-odd gigabytes could contain the OS and other non-crucial data. Another advantage is that file-level backups allow for individual file restores, which are useful when you have just a few files to restore and do not need to restore all the data just to get those individual files back.

However, consider an IT engineer installing service packs for production software such as Exchange, SQL, or other software that is critical to your business, or the installation of new hardware/software, and you then have a full system state failure on your business critical servers. With only file level/flat file backups in place, your course of action will be (and this adds up to your RTO):

  1. OS rebuild
  2. Application install
  3. Patching (in some cases you cannot restore system databases unless the patch level is of the same build as when they were backed up)
  4. Then you can begin to restore individual flat files to your new server

So file-level backups have their limitations. In some cases, operating system files or locked files will not be backed up, and the data discovered in the volumes is not structured, which means the backup will not be application consistent. Therefore, this backup type is not suitable either for restores in the event of a disaster or for granular restores within applications such as databases, email, etc.

Building a disaster recovery plan on this method is not fail-proof, and it is not a flexible solution. It is very manual and will involve a lot of maintenance and failover testing. Unless you incorporate Arcserve Replication and High Availability into your file based strategy, you do not have continuity (I will post more on RHA in upcoming posts).

Full System State/Image Backup

Image/system state backups, in the form of agent-based or agentless backups of virtual or physical machines, allow you to select an entire drive, partition, or entire machine and back up the whole of that selection. This backup then covers the system data: the platform crucial to your critical company services and data. Examples of these situations are when your hard drive dies, your Windows guest will not boot up or is corrupt, or your servers are stolen (“This is Africa”). Or, as mentioned earlier, a technical team’s update rollout goes horribly wrong, or another major disaster requires a complete rollback of the entire machine. The advantage of image-based backups is that all of the information can be collected in a single pass, providing an updated bare metal restore (BMR) capability with each backup.

You can restore back to dissimilar hardware. Image backups allow for file-level restores from the backup image. You are able to recover servers remotely across wide area networks (WANs) or local area networks (LANs), and backup images can be saved to a variety of different media. You are able to convert an image of a physical server to a virtual instance, and to convert your virtual images back to physical instances.

Having multiple ways to recover your data allows you to create a suitable backup strategy for your environment and grow it into a disaster recovery plan.

You are able to have pre-exported recovery points, which allow for minimal downtime in the event of a failure. This can be leveraged for maintenance, platform migrations, dev testing and upgrades without having to rebuild servers from the bare bones up. That way, if something goes wrong with a change, you can bring the machine back to where it was before the change very quickly.

One of the main reasons that flat and file backups are chosen over full system or image based backups is target storage availability: most customers don’t have the storage capacity available and can’t afford to put down a larger storage device. Because image-level backups use snapshots, all of the data, including deleted files and empty disk blocks, is backed up. The answer to reducing the amount of data stored is data deduplication. Arcserve UDP can achieve very impressive deduplication and compression ratios, making it comparable to a file/flat backup strategy in target data consumption. For example, where I backed up full image backups of 179 servers totalling 7TB of raw data, it amounted to roughly 200GB held on disk after deduplication and compression.
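
As a rough sanity check on those figures: 7TB is roughly 7,168GB, so reducing 7,168GB of raw image data to about 200GB on disk works out to a data reduction ratio in the region of 35:1.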

Conclusion

File level backups basically come down to you selecting some files and folders that you want to back up and then where you want those file level backups to go.

Image/system state backups allow you to select an entire drive, partition, or entire machine, and back up everything in that selection.

In my opinion, there are more features in image based backups than in file based ones. In addition, with UDP all you need is one backup of your server, and from it you can replicate, export, perform a BMR, and so on.