Author: CloudRight Services Group

Key factors and challenges in cloud repatriation

“Unclouding” or “cloud repatriation” is the process of reverse-migrating application workloads and data from the public cloud to a private cloud located within an on-premises data center or at a colocation provider. Businesses may elect to move one or more, and in some cases all, of their applications and data out of the public cloud. Typically, this work is conducted by technical staff within the organization; in some cases, however, third-party channel providers are hired to perform the reverse cloud migration. In either case, the staff will work with the cloud provider to move the applications and data to the customer’s chosen private cloud.

Costs, control, and security driving cloud repatriation

According to a recent IDG report, cost and security were the top two reasons organizations relocated their application workloads and data away from the public cloud. Historically, customers have had significant difficulty forecasting the costs of maintaining their presence in the public cloud. Top-tier providers such as AWS and Azure frequently change their pricing structures; AWS alone has made dozens of such changes in the last 12 years. A further complication is egress fees, which are typically charged per gigabyte of data transferred out, so costs scale with the amount of data leaving the provider and are hard to predict in advance.
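To see why egress costs are hard to predict, consider a rough estimator. This is a minimal sketch; the tiered rates below are illustrative placeholders, not any provider's actual price list.

```python
# Illustrative tiered egress pricing: (band size in GB, $/GB).
# These numbers are placeholder assumptions, not a real rate card.
TIERS = [
    (10_000, 0.09),        # first 10 TB per month
    (40_000, 0.085),       # next 40 TB per month
    (float("inf"), 0.07),  # everything beyond that
]

def egress_cost(gb: float) -> float:
    """Walk the tiers, charging each band of data at its own rate."""
    cost, remaining = 0.0, gb
    for band_size, rate in TIERS:
        if remaining <= 0:
            break
        billed = min(remaining, band_size)
        cost += billed * rate
        remaining -= billed
    return round(cost, 2)
```

Because the effective per-gigabyte rate shifts as volume crosses each tier boundary, two months with similar workloads can produce noticeably different bills.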

A significant number of organizations are finding that it is less expensive to manage their own servers compared with the charges for public cloud. In recent years, the cost of infrastructure, including servers, network, and storage, has been decreasing steadily. This has led businesses to reassess their hosting strategies.

Another reason companies are choosing to uncloud is lack of control. Cloud providers offer customers the ability to manage resource utilization, but customers have limited control over where their application workloads are hosted or even where their data is stored. Additionally, some customers have been surprised to discover that they may not own some types of data, such as metadata, taxonomies, social “likes” or folder structures. This has been a cause for concern for many businesses.  

Additionally, many companies in the IDG report cited performance issues, such as latency, as well as project failures caused by insufficient capability and functionality available through their cloud providers.

Finally, there’s a common misconception among customers that the information they place into the cloud is protected and secured by the cloud provider. This is not the case. It’s up to businesses to implement security controls, whether the information resides in the public cloud or in an on-premises solution. For example, in 2017, a government contractor uploaded highly classified information to a cloud provider. The information was not secured and was freely accessible to the public, highlighting why organizations must maintain a policy that ensures their data is secure.



What are the alternatives and challenges to cloud repatriation?

An alternative to cloud repatriation is refactoring application workloads so that they perform more efficiently in the cloud. For example, applications may be designed so that they easily migrate across a hybrid environment, which consists of a mixture of both public and private cloud infrastructure. One option is to turn to containerized applications or microservices. Applications that have been redesigned to take advantage of the cloud are good candidates for public cloud hosting. However, legacy applications that lack the ability to leverage the cloud’s services could quickly become costly.

According to the Cloud Standards Customer Council, an advocacy group for cloud end users, an exit clause should be part of every cloud service agreement. Moving data between cloud providers can be a time-consuming and expensive undertaking. Depending on the cloud provider, data may have to be converted to a usable format before it can be moved.

For example, when moving application workloads from the public cloud to an on-premises VMware solution, VMware Converter is needed to build the VMDKs required to recreate the VM. This can take a long time and requires an extended outage of the VM. Azure and Hyper-V follow a different methodology: both use VHDs for hard disks, so it’s only a matter of downloading the VHDs from Azure and importing them into a new VM.

There are several tools that may be used to migrate cloud VMs back on premises, including disaster recovery and replication software. Their advantage is that they eliminate a few manual conversion steps, such as disk copies and format conversions.

How to plan for migration

Planning and coordination between technical and business teams are essential. Migration priorities must be established first. It’s just as important to properly assess the existing environment through discovery. This includes creating a comprehensive inventory of applications, databases, websites, workloads, and other services that are in scope for migration. Applications, data, and web services should be categorized by type, function, and priority.
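The inventory-and-prioritize step above can be sketched as a simple data structure and sort. This is a minimal illustration; the field names and categories are assumptions, not a prescribed schema.

```python
from dataclasses import dataclass

@dataclass
class Workload:
    """One inventoried item, categorized by type, function, and priority."""
    name: str
    kind: str        # e.g. "application", "database", "website"
    function: str    # business function the item serves
    priority: int    # 1 = migrate first
    in_scope: bool = True  # False = identified but excluded from migration

def migration_order(inventory: list[Workload]) -> list[str]:
    """Return names of in-scope workloads, highest priority first."""
    eligible = [w for w in inventory if w.in_scope]
    return [w.name for w in sorted(eligible, key=lambda w: w.priority)]
```

Even a toy model like this makes the two decisions explicit: which workloads are excluded from migration, and in what order the rest move.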

There are discovery tools and processes that inventory an environment, and a complete, accurate inventory is the foundation of the migration plan. It can be used to identify migration priorities, workloads that should not be migrated, and resource requirements at the destination. A successful migration depends on following a proven discovery methodology that uncovers all of your IT infrastructure, assets, and dependencies.