
How to ensure quality in the era of Big Data

Patrik Liu Tran, Contributor

Patrik Liu Tran is the co-founder and CEO of Validio, an automated real-time data validation and quality monitoring platform. He holds a Ph.D. in Business Administration (as well as an M.Sc. and B.Sc.) from the Stockholm School of Economics, and a civil engineering degree in Engineering Physics with an M.Sc. in AI and machine learning from KTH Royal Institute of Technology. Patrik is also the chairman of Stockholm AI.

A little over a decade has passed since The Economist warned us that we would soon be drowning in data. The modern data stack has emerged as a proposed life-jacket for this data flood - spearheaded by Silicon Valley startups such as Snowflake, Databricks and Confluent.

Today, any entrepreneur can sign up for BigQuery or Snowflake and have a data solution that can scale with their business in a matter of hours. The emergence of cheap, flexible and scalable data storage solutions was largely a response to changing needs spurred by the massive explosion of data.

Currently, the world produces 2.5 quintillion bytes of data daily (there are 18 zeros in a quintillion). The explosion of data continues in the roaring '20s, both in terms of generation and storage - the amount of stored data is expected to continue to double at least every four years. However, one integral part of the modern data infrastructure still lacks solutions suitable for the Big Data era and its challenges: data quality monitoring and data validation.
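
To make this concrete, below is a minimal sketch of the kind of checks a data validation step might run on each incoming batch. It is illustrative only, written in Python with pandas; the column names (order_id, amount, created_at) and the freshness threshold are hypothetical assumptions, not something prescribed by any particular tool.

import pandas as pd

def validate_batch(df: pd.DataFrame) -> list[str]:
    """Run a few illustrative data quality checks on an incoming batch."""
    issues = []

    # Completeness: the key field should not contain nulls
    if df["order_id"].isnull().any():
        issues.append("order_id contains null values")

    # Uniqueness: a primary-key column should not contain duplicates
    if df["order_id"].duplicated().any():
        issues.append("order_id contains duplicate values")

    # Validity: amounts are expected to be non-negative
    if (df["amount"] < 0).any():
        issues.append("amount contains negative values")

    # Freshness: the newest record should be at most one hour old (assumed threshold)
    newest = pd.to_datetime(df["created_at"], utc=True).max()
    if newest < pd.Timestamp.now(tz="UTC") - pd.Timedelta(hours=1):
        issues.append("batch looks stale: no records in the last hour")

    return issues

# Example usage on a deliberately flawed batch
batch = pd.DataFrame({
    "order_id": [1, 2, 2],
    "amount": [19.99, -5.00, 42.50],
    "created_at": ["2021-06-01T10:00:00Z"] * 3,
})
for issue in validate_batch(batch):
    print("Data quality issue:", issue)

In production, tools in this space run checks like these continuously against streaming and warehouse data rather than as ad hoc scripts, but the underlying questions (completeness, uniqueness, validity, freshness) are the same.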

Let me go through how we got here and the challenges ahead for data quality.

The value vs. volume dilemma of Big Data

In 2005, Tim O'Reilly published his groundbreaking article "What is Web 2.0?", truly setting off the Big Data race. The same year, Roger Mougalas from O'Reilly introduced the term "Big Data" in its modern context - referring to a large set of data that is virtually impossible to manage and process using traditional BI tools.

Back in 2005, one of the biggest challenges with data was managing large volumes of it, as data infrastructure tooling was expensive and inflexible, and the cloud market was still in its infancy (AWS didn't publicly launch until 2006). The other was speed: As Tristan Handy from Fishtown Analytics (the company behind dbt) notes, before Redshift launched in 2012, performing relatively straightforward analyses could be incredibly time-consuming even with medium-sized data sets. An entire data tooling ecosystem has since been created to mitigate these two problems.


The emergence of the modern data stack (example logos and categories). Image Credits: Validio

Scaling relational databases and data warehouse appliances used to be a real challenge. Only 10 years ago, a company that wanted to understand customer behavior had to buy and rack servers before its engineers and data scientists could work on generating insights. Data and its surrounding infrastructure were expensive, so only the biggest companies could afford large-scale data ingestion and storage.

The challenge before us is to ensure that the large volumes of Big Data are of sufficiently high quality before they're used.

Then came a (Red)shift. In October 2012, AWS presented the first viable solution to the scale challenge with Redshift - a cloud-native, massively parallel processing (MPP) database that anyone could use for a monthly price of a pair of sneakers ($100) - about 1,000x cheaper than the previous "local-server" setup. With a price drop of this magnitude, the floodgates opened and every company, big or small, could now store and process massive amounts of data and unlock new opportunities.

As Jamin Ball from Altimeter Capital summarizes, Redshift was a big deal because it was the first cloud-native OLAP warehouse and it reduced the cost of owning an OLAP database by orders of magnitude. The speed of processing analytical queries also increased dramatically. Later on, cloud warehouses (Snowflake pioneered this) separated compute and storage, which, in overly simplified terms, meant customers could scale their storage and compute resources independently.

What did this all mean? An explosion of data collection and storage.
