Monte Carlo raises $25M for its data observability service
This morning Monte Carlo, a startup focused on helping other companies better monitor their data inflows, announced that it has closed a $25 million Series B.
The round, which was co-led by GGV and Redpoint, comes mere months after the company's $15 million Series A in September. Accel led Monte Carlo's Seed and Series A deals and participated in its Series B as well.
The round caught our attention not only for the speed at which it was raised following Monte Carlo's preceding investment, but also because your humble servant had no idea what data observability, the startup's niche, really was.
So we got Monte Carlo co-founder and CEO Barr Moses on a call to explain both her company's space, and how it managed to attract so much more capital so quickly.
Data inflows

Big data was the jam a while back, but it turned out to be merely one piece in the broader data puzzle. We can see evidence of that in recent revenue growth at Databricks, which reached $425 million ARR in 2020 by building an analytics and AI service that sits on top of companies' data.
Monte Carlo is another bet on the data space, sitting a bit earlier in the data lifecycle. Think of it this way: Snowflake can hold all your data, and Databricks can help you analyze it. But what's checking to make sure that data flowing into your repositories is, you know, not bullshit?
Figuring out if data inflows are healthy and not bunk is what Monte Carlo does.
According to Moses, companies now have myriad data sources. That's great in theory, as more data is usually a good thing. But if one or two of those sources go haywire, figuring that out before you collect, store, and analyze the bad information is pretty important.
So Monte Carlo sits upstream from the other data companies that are hot these days, keeping tabs on inbound data sources across a number of parameters to make sure that what's actually arriving in your data lake is legit.
The startup does that, Moses said, by checking data freshness (how recent, or tardy, the data in question is), volume (is there too little, or too much?), schema (the data's structure itself, to see if things have changed in ways that could matter, or break downstream services), distribution (whether data points suddenly jump from, say, the single digits to the millions), and lineage, which can help find breakpoints in data inflows.
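To make those categories concrete, here is a minimal sketch of what such checks might look like in Python. This is purely illustrative: the function names, thresholds, and alerting logic are our own invention, not Monte Carlo's actual implementation, which is not public.

```python
from datetime import datetime, timedelta

# Hypothetical sketches of the check categories Moses describes.
# Thresholds and names are illustrative, not Monte Carlo's API.

def check_freshness(last_arrival, now, max_lag=timedelta(hours=1)):
    """Pass if the newest record is no older than the expected lag."""
    return now - last_arrival <= max_lag

def check_volume(row_count, expected, tolerance=0.5):
    """Pass if the batch's row count is within tolerance of the norm."""
    return abs(row_count - expected) <= tolerance * expected

def check_schema(observed_columns, expected_columns):
    """Pass if no columns were added or dropped that could break downstream jobs."""
    return set(observed_columns) == set(expected_columns)

def check_distribution(values, lo, hi):
    """Pass if all values fall inside the historically observed range."""
    return all(lo <= v <= hi for v in values)

# A toy batch that trips every check:
now = datetime(2021, 2, 9, 12, 0)
alerts = []
if not check_freshness(datetime(2021, 2, 9, 8, 0), now):
    alerts.append("freshness")       # data is four hours stale
if not check_volume(row_count=10, expected=1000):
    alerts.append("volume")          # far fewer rows than usual
if not check_schema(["id", "amount", "ts"], ["id", "amount"]):
    alerts.append("schema")          # an unexpected column appeared
if not check_distribution([3, 5, 2_000_000], lo=0, hi=100):
    alerts.append("distribution")    # a value jumped to the millions

print(alerts)  # → ['freshness', 'volume', 'schema', 'distribution']
```

In a real system these thresholds would be learned from the pipeline's history rather than hard-coded, which is what the next paragraph gets at.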
Hearing that Monte Carlo learns from a company's particular data pipes to figure out what counts as non-standard inflow made me curious how long it takes to get the startup's software up and running. Not long, per Moses: an hour to fire it up in many cases, and a week for it to learn.
A growing sector

Monte Carlo's product is neat enough to warrant our attention by itself. But the fact that it fits neatly inside the growth of the broader data space, and especially of data tooling that isn't directly concerned with storage, makes it all the more worth considering.
And now, with $25 million more, Monte Carlo can expand its current staff of 25 and keep attacking its mid-market and enterprise customer targets. Let's see how quickly it can scale, and how soon we can start squeezing the startup for growth numbers.