TACTICS FOR DISASTER RECOVERY

Business continuity is a concern for many organisations, particularly SMEs, which don’t necessarily have the resources to respond quickly and effectively. 73% of businesses have experienced some type of operational interruption in the past five years, so businesses must ensure they have the correct measures in place to counteract these forms of disruption. Strategies must be well thought out, combining proactive tactics such as cyber-security with frequent system backups for efficient disaster recovery (DR).

Paul Blore, Managing Director at Netmetix, explores the DR strategies available to organisations.

Traditional DR processes

Historically, on-premise backup systems that use removable media such as tapes or disk drives to store backup data have been used to try to ensure continuity in the event of a disaster. However, this approach requires manual action and designated employees, which can lead to human error and failed or incomplete backups. Moreover, removable media is typically a consumable that needs replacing regularly; a considerable cost that is not ideal for SMEs.

Conventional DR works by duplicating all critical systems at a different location, ready to take over if disaster strikes at the primary site. Although a working solution, it is expensive, and many businesses have concerns over the budget required for ‘what if’ technology that may never be needed.

Cloud-based DR

Cloud technology has received a flood of attention, and in the case of DR it can drastically reduce storage costs whilst making entire system backups much more cost-effective and straightforward. All of the leading cloud providers now offer backup as a core service of their cloud offerings, and clients can generally select whichever backup schedule and retention policy they wish to use.
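To make this concrete, the sketch below shows how a backup schedule and retention policy might be expressed. It is a minimal illustration in Python, using made-up field names rather than any particular provider’s API:

```python
from dataclasses import dataclass
from datetime import time

# Illustrative sketch only: these field names are assumptions,
# not any particular cloud provider's backup API.
@dataclass
class BackupPolicy:
    frequency: str        # "daily" or "hourly"
    start_time: time      # when the backup window opens
    retention_days: int   # how long each recovery point is kept
    geo_redundant: bool   # copy backups to a second region

# A typical SME policy: nightly backups, kept for 30 days, copied off-region.
nightly = BackupPolicy(
    frequency="daily",
    start_time=time(hour=1, minute=30),
    retention_days=30,
    geo_redundant=True,
)

def recovery_points_retained(policy: BackupPolicy) -> int:
    """Rough count of the restore points available under a policy."""
    per_day = 24 if policy.frequency == "hourly" else 1
    return per_day * policy.retention_days

print(recovery_points_retained(nightly))  # 30
```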

This addresses the DR aspect as well: major cloud service providers employ large-scale resilience and redundancy to keep their systems operational, and in the unlikely event that an entire data centre goes down, client systems can operate from a second data centre. The very best systems can also provide a full DR service for on-premise systems by replicating on-premise data into the cloud in near real-time. Then, if disaster strikes, the systems can automatically allocate computing resources (CPUs, RAM and so on) and “spin up” virtual servers to take over seamlessly until normal service is resumed on-site. Once the disaster has passed, the cloud systems will “fail back” to the on-premise systems and synchronise all data that changed during the disaster window.
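The sequence described above – replicate, fail over, then fail back and synchronise – can be sketched as follows. This is a conceptual illustration only; the classes and method names are hypothetical placeholders, not a real vendor’s SDK:

```python
# Conceptual sketch of the replicate / fail over / fail back cycle described
# above. The classes and methods are hypothetical placeholders, not a real
# vendor's SDK.

class OnPremSite:
    def __init__(self):
        self.healthy = True
        self.data = {"orders": 1, "invoices": 2}

class CloudDR:
    def __init__(self):
        self.replica = {}
        self.disaster_window_changes = {}

    def replicate(self, blocks):
        self.replica.update(blocks)          # near real-time replication

    def spin_up(self, vcpus, ram_gb):
        print(f"Provisioning {vcpus} vCPUs / {ram_gb}GB RAM from the replica")

    def record_change(self, key, value):
        self.disaster_window_changes[key] = value

def dr_cycle(site: OnPremSite, cloud: CloudDR):
    # 1. Normal operation: changes are replicated into the cloud.
    cloud.replicate(site.data)

    # 2. Disaster strikes: allocate compute and take over from the replica.
    site.healthy = False
    cloud.spin_up(vcpus=8, ram_gb=32)
    cloud.record_change("orders", 99)        # work done while running in the cloud

    # 3. Fail back: the site is restored, so synchronise the changes made
    #    during the disaster window and hand control back on-premise.
    site.healthy = True
    site.data.update(cloud.disaster_window_changes)
    print("Failed back; on-premise data is now", site.data)

dr_cycle(OnPremSite(), CloudDR())
```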

Operating on usage-based costings, this type of system is ideal: the secondary or replicated IT infrastructure lies in wait until it’s required, and businesses need only pay for it when, or if, they need it – perfect for SMEs with minimal budget. This means that when it comes to defining a DR strategy, businesses now have far more options available, with genuine DR systems now a cost-effective possibility for SMEs.
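A back-of-the-envelope comparison illustrates why the usage-based model appeals to smaller businesses. Every figure below is invented purely for illustration, not a real price:

```python
# Back-of-the-envelope comparison; every figure is invented for illustration.
always_on_standby = 1_200 * 12           # duplicated hardware, billed every month
replication_fee   = 150 * 12             # cloud replication, billed every month
disaster_days     = 5                    # compute is only paid for during a disaster
failover_compute  = 80 * disaster_days   # daily cost of the spun-up virtual servers

print("Traditional standby per year:", always_on_standby)                    # 14400
print("Usage-based cloud DR per year:", replication_fee + failover_compute)  # 2200
```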

Future gazing

With so many businesses relying on digital technology to function day to day, business continuity should be a key priority for organisations, and will remain so for the foreseeable future. After all, a business will cease to function at full capacity if a disaster strikes and the necessary procedures are not in place, and as a direct result it will suffer a significant increase in downtime and expenditure and a decrease in potential profits.

It’s now easier than ever to migrate to the cloud and take advantage of the inbuilt backup and DR options available. With the rate of cyber attacks on businesses of all sizes increasing significantly, no company is immune from the threat of hacking, human error or natural disasters, and there is no longer any excuse not to have these systems and procedures in place.

Cloud computing is gaining massive momentum amongst organisations of every type and size, with an ever-expanding range of services becoming available.

Many organisations’ first foray into cloud-based computing will have been with a private cloud service provider where they would effectively rent dedicated hardware in a computer rack in the service provider’s data centre. These facilities would be able to provide basic computing services such as virtual servers, storage and backups. However, the big public cloud services like Microsoft Azure, Amazon AWS and Google Cloud are now offering a depth and breadth of computing services that are putting enormous pressure on the private cloud companies.

Cloud users no longer need to be constrained by the limits of rented hardware or the geographic locations of the private cloud provider.

In response, a number of myths are being peddled in an attempt to cast doubt on the suitability of public cloud computing for certain requirements, so here we will address some of those myths.

Myth No. 1 – Public Cloud is less stable and secure

The big public cloud service providers, including Microsoft, Amazon and Google, have invested mind-boggling sums of money into their global platforms and provide services to every conceivable type of organisation around the world, including military, government, health and legal. It is inconceivable that their systems would be less stable or secure than a private cloud system.
They also invest enormous resources into achieving and maintaining a vast array of security certifications from governments and industry bodies around the world. For example, Microsoft employs an elite team of security “hackers”, known as the Red Team, to simulate real-world breaches, conduct continuous security monitoring and practise security incident response across Microsoft Azure and Office 365. The Red Team takes on the role of sophisticated adversaries, allowing Microsoft to validate and improve security, strengthen defences and drive greater effectiveness across its entire cloud security programme. Very few, if any, of the private cloud providers can claim anything even remotely comparable.

However, it is also incumbent on each client to protect their own cloud environment. Virtual firewall appliances from the world’s leading security vendors, such as WatchGuard and Barracuda, can be deployed in public cloud environments to protect services and data from a wide range of sophisticated malicious attacks, with comprehensive threat detection and response technologies including AI-based anti-virus services that detect and remove zero-day malware.

Physical security is just as important as digital security, and all the big public cloud service providers employ comprehensive and sophisticated measures to ensure that client systems and data are protected. The data centres will typically be wholly owned and managed by the vendor, not simply rented rack space in a third-party facility. They will have secure perimeters of steel and concrete, with CCTV and security professionals. Access will be tightly controlled, with only those who have a very specific need to enter allowed in, and only for as long as they need to be there. Microsoft also use full-body screening of everyone entering or leaving their data centres to ensure that no unauthorised device or data enters or leaves the facility.

Myth No. 2 – Public Cloud is expensive

If a public cloud-based infrastructure were designed and provisioned in the same way as an on-premise or even a private cloud infrastructure, it may well end up costing more. But to do so would completely miss many of the compelling benefits of a public cloud environment.

In a private cloud solution, we are still fundamentally working with hardware; all that has been done is to relocate it to a remote location. Therefore, we need to take account of resilience and, where feasible, avoid potential single points of failure, all of which adds cost and complexity. If DR is also required, we will need to replicate the hardware, software and data in a separate facility, adding significantly further to the cost and complexity.

Whilst there is still obviously hardware in a public cloud data centre, the user or system administrator isn’t aware of it; all of the services are virtual. Therefore, we don’t need to design in resilience by duplicating hardware, because it comes as standard with the service. For example, in Azure, at the most basic level of storage, any data held on a virtual “disk” is replicated a minimum of three times within the same facility, so even if an underlying physical disk failed, anyone using the disk wouldn’t be impacted because the data would continue to be available from one of the other copies. Systems and data can also be replicated to separate locations, in either the same region or a different part of the world, at minimal cost and complexity.
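The principle behind that built-in resilience can be shown in a few lines. The sketch below illustrates three-copy replication in the abstract; it is not how Azure implements locally redundant storage internally:

```python
# Conceptual illustration of three-copy replication. This shows the principle,
# not how Azure's locally redundant storage is actually implemented.
REPLICA_COUNT = 3

def write_block(replicas, key, value):
    """Write every block to all replicas before acknowledging the write."""
    for replica in replicas:
        replica[key] = value

def read_block(replicas, key):
    """Serve the read from any replica that still holds the data."""
    for replica in replicas:
        if key in replica:
            return replica[key]
    raise KeyError(key)

replicas = [dict() for _ in range(REPLICA_COUNT)]
write_block(replicas, "vm-disk-block-42", b"payroll data")

replicas[0].clear()  # simulate the failure of one underlying physical disk
print(read_block(replicas, "vm-disk-block-42"))  # still readable: b'payroll data'
```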

In a private cloud, resources and therefore costs are dictated by the physical devices that have been provisioned for a specific client, whereas in a public cloud environment resources are completely elastic, so they can be scaled up or down to suit real-time demand. If a business works normal office hours, for example, there is no point having a server with 24 vCPU cores and 128GB of RAM running at minimal utilisation outside those hours, so resources, and therefore costs, can be scaled down to match the reduced demand. If a peak in workload requires additional resources, they can be provisioned in hours or even minutes in a public cloud, whereas a private cloud would require the deployment of physical hardware, which could take weeks or even months to complete. And once deployed, it will typically be a commitment for a minimum contract term.
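A simple schedule-based scaling routine illustrates the idea. Everything below is a hypothetical sketch; the sizes and the scale_vm() call stand in for whatever the chosen provider’s tooling actually offers:

```python
from datetime import datetime

# Hypothetical schedule-based scaling sketch. The sizes and the scale_vm()
# call are placeholders for whatever the chosen provider's tooling offers.
WORK_START, WORK_END = 8, 18  # normal working hours, 08:00 to 18:00

def target_size(now: datetime) -> dict:
    """Pick a server size to match the expected demand at this time."""
    if now.weekday() < 5 and WORK_START <= now.hour < WORK_END:
        return {"vcpus": 24, "ram_gb": 128}   # full capacity during the working day
    return {"vcpus": 4, "ram_gb": 16}         # scaled down overnight and at weekends

def scale_vm(vm_name: str, size: dict) -> None:
    # Placeholder for a real provider API call that resizes a virtual machine.
    print(f"Resizing {vm_name} to {size['vcpus']} vCPUs / {size['ram_gb']}GB RAM")

scale_vm("app-server-01", target_size(datetime.now()))
```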

Comprehensive analytics and reporting tools are provided by the service providers to allow systems administrators to assess, manage and optimise the levels of resources deployed.

The range of services available on the big public cloud platforms is growing at an incredible rate, so whatever services an organisation requires, it is highly unlikely that a full deployment could not be achieved.

A public cloud isn’t simply a cloud-based alternative to a traditional on-premise infrastructure. It is a fundamentally different way of provisioning IT infrastructure with its own requirements for specialists and expertise. It therefore follows that if the maximum benefits and efficiencies are to be achieved, appropriate skills need to be sought and employed. This would generally mean working with a partner with a proven track record of designing, deploying and managing public cloud systems.

Myth No. 3 – Moving to Public Cloud is complex

Most people wouldn’t choose to move house on their own. Instead, they would use specialists who have the necessary skills and resources to ensure a satisfactory outcome. The same goes for migrating into the cloud.

Mistakes can be costly, but that goes for pretty much anything in business these days, so it pays to find the right company to work with. The choice can be daunting and, if asked, many IT companies may well claim expertise even if they don’t actually have it. Microsoft use certifications called Competencies to identify those Partners with expertise in specific areas. Partners with expertise in a specific field are awarded a Silver Competency, whilst those at the top of their field earn a Gold Competency. Partners with a wider range of experience and expertise will be able to boast multiple Gold Competencies, and those that specifically relate to Azure are the Gold Cloud Platform, Gold Cloud Productivity and Gold Datacenter certifications.

Working with a Partner with one or more of these Gold Competencies will help ensure a successful outcome to a public cloud migration.