Getting value from your data shouldn’t be this hard
The potential impact of the ongoing worldwide data explosion continues to excite the imagination. A 2018 report estimated that every second of every day, every person produces 1.7 MB of data on average, and annual data creation has more than doubled since then and is projected to more than double again by 2025. A report from McKinsey Global Institute estimates that skillful uses of big data could generate an additional $3 trillion in economic activity, enabling applications as diverse as self-driving cars, personalized health care, and traceable food supply chains.
But all this new data is also creating confusion about how to find it, use it, manage it, and share it legally, securely, and efficiently. Where did a certain dataset come from? Who owns what? Who's allowed to see certain things? Where does it reside? Can it be shared? Can it be sold? Can people see how it was used?
As data's applications grow and become more ubiquitous, the producers, consumers, owners, and stewards of data are finding that they don't have a playbook to follow. Consumers want to connect to data they trust so they can make the best possible decisions. Producers need tools to share their data safely with those who need it. But technology platforms fall short, and there is no real common source of truth to connect both sides.
How do we find data? When should we move it?
In a perfect world, data would flow freely, like a utility accessible to all. It could be packaged up and sold like raw materials. It could be viewed easily, without complications, by anyone authorized to see it. Its origins and movements could be tracked, removing any concerns about nefarious uses somewhere along the line.
Today's world, of course, does not operate this way. The massive data explosion has created a long list of issues and opportunities that make it tricky to share chunks of information.
With data being created nearly everywhere within and outside of an organization, the first challenge is identifying what is being gathered and how to organize it so it can be found.
A lack of transparency and sovereignty over stored and processed data and infrastructure opens up trust issues. Today, moving data to centralized locations from multiple technology stacks is expensive and inefficient. The absence of open metadata standards and widely accessible application programming interfaces can make it hard to access and consume data. The presence of sector-specific data ontologies can make it hard for people outside the sector to benefit from new sources of data. Multiple stakeholders and difficulty accessing existing data services can make it hard to share without a governance model.
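To make the role of metadata concrete, here is a minimal sketch of the kind of catalog record that lets a consumer answer the earlier questions (origin, ownership, who is allowed to see what) before any data moves. The DatasetRecord class, its fields, and the example values are illustrative assumptions, not drawn from Gaia-X or any particular metadata standard.

```python
# Illustrative sketch of a dataset catalog entry; the class, fields, and
# example values are hypothetical, not part of Gaia-X or any specific standard.
from dataclasses import dataclass, field
from typing import List


@dataclass
class DatasetRecord:
    dataset_id: str                      # stable identifier consumers can reference
    owner: str                           # who owns and is accountable for the data
    origin: str                          # where the data was produced
    location: str                        # where it currently resides
    license: str                         # terms under which it may be shared or sold
    allowed_consumers: List[str] = field(default_factory=list)  # who may see it

    def is_visible_to(self, party: str) -> bool:
        """Answer 'can this party see the dataset?' from metadata alone."""
        return party in self.allowed_consumers


# Example: a consumer checks visibility without ever touching the data itself.
record = DatasetRecord(
    dataset_id="vehicle-telemetry-2024",
    owner="example-supplier",
    origin="factory sensors, plant A",
    location="s3://example-bucket/telemetry/",
    license="partner-use-only",
    allowed_consumers=["example-oem", "example-tier1-partner"],
)
print(record.is_visible_to("example-oem"))  # True
```

A shared record format of this kind is what lets producers and consumers agree on ownership and access terms without first exchanging the data itself.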
Europe is taking the lead
Despite the issues, data-sharing projects are being undertaken on a grand scale. One that's backed by the European Union and a nonprofit group is creating an interoperable data exchange called Gaia-X, where businesses can share data under the protection of strict European data privacy laws. The exchange is envisioned as a vessel to share data across industries and a repository for information about data services around artificial intelligence (AI), analytics, and the internet of things.
Hewlett Packard Enterprise recently announced a solution framework to support the participation of companies, service providers, and public organizations in Gaia-X. The dataspaces platform, currently in development and built on open standards and cloud-native technologies, democratizes access to data, data analytics, and AI for domain experts and everyday users alike. It provides a place where domain experts can more easily identify trustworthy datasets and securely perform analytics on operational data, without always requiring the costly movement of data to centralized locations.
By using this framework to integrate complex data sources across IT landscapes, enterprises will be able to provide data transparency at scale, so everyone, whether a data scientist or not, knows what data they have, how to access it, and how to use it in real time.
Data-sharing initiatives are also at the top of enterprises' agendas. One important priority is vetting the data that's being used to train internal AI and machine learning models. AI and machine learning are already widely used in enterprises and industry to drive ongoing improvements in everything from product development to recruiting to manufacturing. And we're just getting started. IDC projects the global AI market will grow from $328 billion in 2021 to $554 billion in 2025.
To unlock AI's true potential, governments and enterprises need to better understand the collective provenance of all the data driving these models. How do AI models make their decisions? Are they biased? Are they trustworthy? Have untrustworthy individuals been able to access or change the data an enterprise has trained its models on? Connecting data producers to data consumers more transparently and efficiently can help answer some of these questions.
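One concrete way to address the last of those concerns, tampering with training data, is to record a cryptographic fingerprint of each approved dataset version and verify it before training. Below is a minimal sketch assuming files on local disk; the helper functions are illustrative, and a production pipeline would track fingerprints alongside the rest of its data lineage.

```python
# Minimal sketch: fingerprint a training dataset so later tampering is detectable.
# The helper names are illustrative; a real pipeline would store the recorded
# digest in a catalog or ledger and check it before every training run.
import hashlib
from pathlib import Path


def fingerprint_dataset(directory: str) -> str:
    """Return a SHA-256 digest over all files in a dataset directory."""
    digest = hashlib.sha256()
    for path in sorted(Path(directory).rglob("*")):
        if path.is_file():
            digest.update(path.name.encode())   # include file names
            digest.update(path.read_bytes())    # include file contents
    return digest.hexdigest()


def verify_dataset(directory: str, expected: str) -> bool:
    """Check that the data about to be used for training still matches the recorded digest."""
    return fingerprint_dataset(directory) == expected


# Example usage: record the digest when the dataset is approved,
# then refuse to train if the data has changed since.
# approved = fingerprint_dataset("data/training-set-v3")
# assert verify_dataset("data/training-set-v3", approved), "training data changed"
```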
Building data maturity
Enterprises aren't going to figure out how to unlock all of their data overnight. But they can prepare themselves to take advantage of technologies and management concepts that help create a data-sharing mentality. They can make sure they are developing the maturity to consume or share data strategically and effectively rather than doing it on an ad hoc basis.
Data producers can prepare for wider distribution of data by taking a series of steps. They need to understand where their data is and understand how they're collecting it. Then, they need to make sure the people who consume the data have the ability to access the right sets of data at the right times. That's the starting point.
Then comes the harder part. If a data producer has consumers, whether inside or outside the organization, those consumers have to be connected to the data. That's both an organizational and a technological challenge. Many organizations want governance over how data is shared with other organizations. The democratization of data, or at least the ability to find it across organizations, is an organizational maturity issue. How do they handle that?
Companies that contribute to the auto industry actively share data with vendors, partners, and subcontractors. It takes a lot of parts, and a lot of coordination, to assemble a car. Partners readily share information on everything from engines to tires to web-enabled repair channels. Automotive dataspaces can serve upwards of 10,000 vendors. But data sharing in other industries can be more insular. Some large companies might not want to share sensitive information even within their own network of business units.
Creating a data mentality
Companies on either side of the consumer-producer continuum can advance their data-sharing mentality by asking themselves these strategic questions:
- If enterprises are building AI and machine learning solutions, where are the teams getting their data? How are they connecting to that data? And how do they track that history to ensure trustworthiness and provenance of data?
- If data has value to others, what is the monetization path the team is taking today to expand on that value, and how will it be governed?
- If a company is already exchanging or monetizing data, can it authorize a broader set of services on multiple platforms-on premises and in the cloud?
- For organizations that need to share data with vendors, how is the coordination of those vendors to the same datasets and updates getting done today?
- Do producers want to replicate their data or force people to bring models to them? Datasets might be so large that they can't be replicated. Should a company host software developers on its platform, where its data is, and move the models in and out? (A minimal sketch of this pattern follows the list.)
- How can workers in a department that consumes data influence the practices of the upstream data producers within their organization?
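To illustrate the last question about replication, here is a minimal sketch of the "bring the models to the data" option. The run_at_data_host helper is a hypothetical stand-in for whatever compute a data platform exposes; the essential idea is that the large dataset stays where it is and only results travel back to the consumer.

```python
# Minimal sketch of the "move the model to the data" pattern. The
# run_at_data_host helper is hypothetical; the point is that the consumer
# ships a function (standing in for a model) to where the data lives and
# receives only results, never a copy of the raw dataset.
from typing import Callable, Dict, List


def run_at_data_host(dataset: List[dict], job: Callable[[List[dict]], Dict]) -> Dict:
    """Execute a consumer-supplied job next to the data and return only its output."""
    return job(dataset)


def average_sensor_reading(rows: List[dict]) -> Dict:
    """The consumer's 'model': a simple aggregate standing in for inference."""
    readings = [row["reading"] for row in rows]
    return {"count": len(readings), "mean": sum(readings) / len(readings)}


# Example usage on the producer's side: the dataset stays put,
# and the consumer receives only the aggregate result.
dataset = [{"reading": 0.7}, {"reading": 0.9}, {"reading": 0.8}]
print(run_at_data_host(dataset, average_sensor_reading))
```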
The data revolution is creating business opportunities, along with plenty of confusion about how to search for, collect, manage, and gain insights from that data in a strategic way. Data producers and data consumers are becoming more disconnected from each other. HPE is building a platform that supports both on-premises and public cloud environments, using open source as the foundation and solutions like HPE Ezmeral Software Platform to provide the common ground both sides need to make the data revolution work for them.
Read the original article on Enterprise.nxt.
This content was produced by Hewlett Packard Enterprise. It was not written by MIT Technology Review's editorial staff.