Data Mobility and Freedom Maximize Data's Business Value

Companies that fail to manage their unstructured file and object data often end up with unstructured data silos, leaving valuable intelligence on the table.

Carl D'Halluin

November 24, 2020


In today's data-driven world, business intelligence has become king. If you can't extract value from your data, irrespective of its source or contents, you start each day behind the competition. Organizations often find themselves in this situation because their technology stack cannot surface those insights. More specifically, many companies end up with data locked into storage silos, hosted on everything from traditional storage systems to fully outsourced cloud providers. To extract maximum value, IT leaders must provide data mobility: the ability to move data wherever and whenever business needs dictate, so it can be accessed and acted on for business benefit.

One solution is for data center managers to partner with multiple vendors, leveraging all combinations of hybrid strategies to implement solutions that can provide the flexibility required to use the valuable stored data effectively. In doing so, organizations can achieve rapid and effective adaptability and scalability—and there's no reason the valuable information contained within those datasets shouldn't have the same mobility, flexibility, and accessibility.

Yet the reality is that data storage systems do not always offer seamless data access and extraction, especially in a world where incompatibility between different vendors' systems is rife. In such circumstances, companies that fail to understand and manage their unstructured file and object data often end up with ever-growing unstructured data silos, leaving valuable intelligence on the table.

It’s important to understand why this situation is so common.

Why data value remains a challenge

IT executives are bombarded with statements such as "data is the new oil," which highlight the potential bottom-line business benefits of data mining. They're also simultaneously inundated with horror stories about liabilities and fines for companies that cannot readily produce legal or financial records for compliance purposes.

Data archiving has its own set of challenges: IT departments often struggle to handle the volume of requests for archived or backed-up data, as well as requests to restore old versions of accidentally deleted files. Moreover, many companies are not equipped to classify and categorize their data and do not implement a proper migration/archival/deletion lifecycle, so they experience unbounded growth in unstructured data. Turning to the cloud once solved this predicament, but it is no longer a panacea now that customers are storing and retrieving petabytes of data.
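To make the lifecycle idea concrete, here is a minimal sketch of the kind of age-based tiering policy described above. The tier names and thresholds are hypothetical illustrations, not a standard or any vendor's product behavior:

```python
import time
from pathlib import Path

# Hypothetical age thresholds (in days) for each lifecycle tier.
TIERS = [
    ("hot", 30),        # touched within a month: keep on primary storage
    ("warm", 365),      # up to a year old: candidate for cheaper storage
    ("cold", 7 * 365),  # up to seven years old: archive tier
]

def lifecycle_tier(path: Path, now=None) -> str:
    """Return a lifecycle tier for a file based on its modification age."""
    now = time.time() if now is None else now
    age_days = (now - path.stat().st_mtime) / 86400
    for tier, max_age in TIERS:
        if age_days <= max_age:
            return tier
    # Older than every threshold: flag for deletion review.
    return "expired"

def classify_tree(root: Path) -> dict:
    """Group every file under root by lifecycle tier."""
    groups = {}
    for p in root.rglob("*"):
        if p.is_file():
            groups.setdefault(lifecycle_tier(p), []).append(p)
    return groups
```

In practice, a real policy would also weigh access patterns, data classification, and regulatory retention periods, not modification time alone.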

In addition to the realities of scale, today's storage administrators must deal with multiple vendors, and the variety of storage protocols poses a challenge to data protection, data security, and data governance. These realities often lead IT leaders to take similar actions: never delete any data, keep all file and object data where it is, and keep growing the infrastructure and its storage costs. This exacerbates the problems caused by data silos, and no amount of infrastructure investment will remedy the underlying lack of mobility.

Data freedom: the building block of data mobility

Organizations that experience any of these issues should not feel alone: these are common challenges across industries. Businesses are being ‘held hostage’ by their data and its lack of mobility. But it doesn’t have to be this way.

Data migration is a key building block of data mobility, and where the movement of millions of files was once impractical, today it is achievable. In practical terms, organizations want the right data in the right place at the right time, with the flexibility to reorganize and move it around whenever they see fit. They also want flexible protection for different types of data. And, of course, they need proper archiving, data immutability through WORM, and compliance with regulatory and legal obligations.

In the real world, data mobility between storage or cloud vendors is challenging and complex. You cannot, for example, make systems read-only for days or weeks. You cannot copy all metadata efficiently due to the heterogeneity of the vendors and various corresponding protocols. You cannot simply point your clients to new targets.
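The metadata problem is easy to underestimate. As a hedged illustration (the key names below are invented, not any protocol's specification), the sketch flattens POSIX file metadata into the string key/value form that object stores typically accept as user-defined metadata, which makes visible what maps cleanly and what does not:

```python
import stat
from pathlib import Path

def posix_metadata_as_object_tags(path: Path) -> dict:
    """Flatten basic POSIX metadata into string key/value pairs, the shape
    object stores generally accept as user-defined metadata.

    Timestamps, mode, and ownership IDs survive as strings. Richer
    attributes such as NFSv4/SMB ACLs and extended attributes have no
    direct equivalent: they must be serialized separately by the
    migration tool, or they are simply lost in transit.
    """
    st = path.stat()
    return {
        "src-mode": oct(stat.S_IMODE(st.st_mode)),
        "src-uid": str(st.st_uid),
        "src-gid": str(st.st_gid),
        "src-mtime": str(int(st.st_mtime)),
        "src-atime": str(int(st.st_atime)),
    }
```

Going the other direction (object to file) has the mirror-image problem: custom object metadata and versioning history have no natural home in a POSIX namespace.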

So, what’s the answer? Effective data migration sits at the heart of overcoming these challenges.

Completing an effective data migration

Creating an effective migration strategy requires a migration partner that understands how vendors build their systems, their strengths and weaknesses, and the issues associated with inter-vendor compatibility. Organizations that have this information available are in a stronger position to categorize their data, define a data lifecycle, and move their data to the best location to meet business needs.

Before beginning a migration, companies should carefully consider the proper data migration software to discover, plan, execute, and report on the migration. Once that critical step is completed, an effective migration process falls into place because all the best practices that make a migration project a success are taken care of or aided by the software: analysis and planning, execution, and verification and support.
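The verification phase, in particular, lends itself to a concrete sketch. The snippet below is a minimal, hypothetical illustration of content verification (not any vendor's product): it walks a source tree and confirms that every file exists on the target with an identical SHA-256 checksum.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so large files don't fill memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def verify_migration(source: Path, target: Path) -> list:
    """Compare every file under source with its counterpart under target.

    Returns relative paths that are missing from the target or whose
    contents differ. An empty list means the copy verified cleanly.
    """
    problems = []
    for src in source.rglob("*"):
        if not src.is_file():
            continue
        rel = src.relative_to(source)
        dst = target / rel
        if not dst.is_file():
            problems.append(f"missing: {rel}")
        elif sha256_of(src) != sha256_of(dst):
            problems.append(f"mismatch: {rel}")
    return problems
```

A production tool would also verify metadata, handle files changing mid-migration, and parallelize the hashing, but the principle is the same: prove, rather than assume, that the target matches the source.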

We live in a hybrid world where combining multiple vendors with both on-premises and cloud infrastructure has become a ubiquitous feature of IT strategy. As a result, businesses want to take back control of their unstructured data, because mobility is key to delivering the insights they so badly need.

Carl D’Halluin is CTO of Datadobi.


About the Author(s)

Carl D'Halluin

Carl D’Halluin is CTO of Datadobi. He has been building cloud and storage software for 20 years. He has made notable contributions in protecting and manipulating unstructured data, building highly scalable and secure storage systems, and enabling metadata-driven insights and automation. Each is a cornerstone of the Datadobi business and technology. D’Halluin owns many patents in this domain. He was instrumental in the growth and acquisition of storage companies Amplidata and Q-layer. Carl also worked at EMC Centera, where he architected the world’s first commercial object storage system. Carl has a double Masters Degree in Electrical Engineering (KU Leuven, Belgium) and in Mathematics (UC Berkeley, California).
