Webinar

Feb 22, 2024

Unlocking Untapped Value in Portfolio Updates

Duration: 57 minutes

In this Synaptic webinar, CEO and co-founder Rohit joins Sales Head Christian to discuss how venture and private equity investors can unlock hidden value from their portfolio company updates.

Over the course of an hour, they break down why portfolio data is one of the most underutilized assets in the investing workflow — trapped in decks, emails, and PDFs — and how systematic extraction, standardization, and automation can transform it into a powerful intelligence layer for funds.

The conversation covers three major themes:

  1. The Problem — Why funds struggle to use their own proprietary data effectively.

  2. The Approach — How Synaptic built a human-in-the-loop system that blends AI precision with expert oversight.

  3. The Payoff — The high-value workflows unlocked when portfolio data becomes structured: benchmarking, alerts, fund planning, and continuous insight generation.

What follows is an edited and timestamped transcript of the session, lightly condensed for clarity.

Part I — Setting the Stage

[00:00:11 – 00:02:40] Christian:
Good morning to everyone on the West Coast, good afternoon to those joining from the East Coast, and good evening to our friends tuning in from Europe and the Middle East. We’ll get started in a minute as people continue to join.

It’s been raining in San Francisco for a week now — since the day I arrived. I didn’t bring an umbrella or a rain jacket, so I’m hoping I’m spared on the walk back to the hotel this time.

Alright, I think we can begin. Welcome, everyone, to Synaptic’s second webinar: Unlocking Untapped Value in Portfolio Updates. Thanks for taking the time to join us today.

Before we dive into introductions, here’s what we’ll cover:

  1. The power of portfolio company data

  2. The challenge of tracking it at scale

  3. The ideal approach to extract and standardize it

  4. The high-value workflows and analyses this enables

We want this to be interactive, so please use the Q&A box throughout. We’ll try to take questions live when relevant.

A quick bit of housekeeping:
– The session is being recorded, and we’ll share it with all registered participants afterward.
– If anyone faces technical issues, our team is available to help.
– And again, don’t hesitate to send in your questions—we want this to be a discussion.

By way of introduction: I’m Christian, based in New York, leading Sales at Synaptic. I’m in San Francisco this week with Rohit, so if anyone’s nearby and wants to grab coffee, reach out. Before Synaptic, I spent six years at CB Insights, so I’m deeply familiar with private-market data and the investment ecosystem. I’m excited about what we’ll discuss today.

[00:05:21 – 00:06:52] Rohit:
Thanks, Christian. I’m Rohit, co-founder and CEO at Synaptic. This is my first webinar, so I’m looking forward to seeing how this format goes.

I come from a technology and data background. Before Synaptic, I led tech at Vy Capital — a fund that’s invested in companies like Reddit, SpaceX, and Blockchain.com. So I’ve been in the investor’s seat and understand what a typical day inside a fund feels like: the constant pulls and pushes, the context-switching, the data chaos.

My goal today isn’t to sell anything—it’s to walk you through how we arrived at our solution, what motivated us, and how portfolio data, when used right, can unlock real value.

Synaptic was founded seven years ago with one mission: to help investors make better decisions using data. Our platform brings together best-in-class alternative data sources and applies them to key investing workflows—deal sourcing, due diligence, thesis formation, and portfolio monitoring.

Today, funds like Ribbit Capital, Index Ventures, Dragoneer, General Catalyst, GIC, Morgan Stanley, and SoftBank use Synaptic. Talking to such sharp people keeps me motivated—it’s an ongoing education.

What I want to share today began as a conversation with many of these investors. I used to ask them, “If there’s one thing we could help you unlock, what would it be?”
The overwhelming answer was: “Portfolio company data.”

They all said some version of the same thing: “We have proprietary data from our own portfolio companies—the most valuable data imaginable—but we can’t use it properly.”

That was the light-bulb moment.

Part II — The Portfolio Data Problem

[00:06:52 – 00:09:00] Rohit:
When I started talking to our customers about this, I heard the same frustration repeatedly: “We want to do more with our portfolio company data, but we can’t.”

Everyone agreed that the insights inside these updates are extremely valuable, yet almost no one could access them systematically. Portfolio data was seen as messy, inconsistent, and trapped inside decks, emails, and spreadsheets.

Investors told me, “Rohit, you talk about proprietary data and alpha, but the most proprietary dataset we have—our portfolio company updates—is practically unusable.”

And when I thought about it, I realized how true that was. This data is literally insider information. Yet most funds can’t harness it effectively.

So we began asking: Why?

The more we looked, the clearer it became that across every stage of an investor’s job, portfolio data could be incredibly powerful—if only it could be extracted and standardized.

[00:09:00 – 00:11:01] Rohit:
Think about it: the best VCs are learning machines. They learn from every deal, every company, every datapoint. And within portfolio updates lies a goldmine of learning.

One of our customers shared a clever use case. They were analyzing their own portfolio updates to see which AI startups were getting traction—by tracking where their portfolio companies were spending money. If ten of their companies started paying for the same AI tool, that was a strong signal.

Then there’s fund planning—less glamorous, but crucial. You need to plan follow-on investments, manage reserves, decide between A, B, or C for the next check, and plan liquidity events. To do all that well, you need to understand your portfolio deeply.

Reporting is another time sink. Everyone loves communicating with LPs, but not the data wrangling behind it. Pulling and cleaning portfolio numbers from decks, MIS reports, and updates consumes hours that add no analytical value.

In all these cases, structured portfolio data could transform the process—if only it were accessible and standardized.

[00:11:01 – 00:12:07] Rohit:
So I asked what might have seemed like a simple question: “If this data is so valuable, why aren’t you already using it?”

They looked at me like I’d missed something obvious. “Because it’s hard,” they said.

And they’re right. Even a mid-sized fund might have 50–100 portfolio companies. Each sends at least four updates a year—often more. That’s hundreds, even thousands, of files to parse annually: board decks, MIS reports, data packs, and one-off email updates.

These updates arrive in every imaginable format—PowerPoints, PDFs, emails, spreadsheets, even Slack messages. Add in face-to-face meeting notes, and you’re swimming in unstructured information.

[00:12:07 – 00:13:15] Rohit:
To make matters worse, startups don’t report like mature public companies. They often don’t have a CFO, and even when they do, the CFO’s job isn’t standardization—it’s survival and growth. Metrics vary wildly. GAAP doesn’t capture the nuances of startup growth.

Christian and I have now spoken with over 50 funds. Not a single one has said, “We love our system for tracking portfolio data.” Everyone uses some version of a patchwork process—manual, partial, and painful.

[00:13:15 – 00:13:50] Christian:
That’s exactly what we’ve seen too. Every conversation we’ve had ends the same way: no one loves their system. Everyone knows it’s inefficient but can’t find something better.

Let’s actually check that with the audience. We’re running a poll: How do you currently capture data from your portfolio companies? Choose all that apply.

(Results trickle in.)

[00:14:09 – 00:15:02] Christian:
It’s fascinating. One investor told me recently that every quarter, multiple team members spend between 40 and 80 hours chasing down portfolio data for board decks or LP updates. That’s dozens of hours per person, per quarter—just for data collection.

[00:15:02 – 00:15:29] Rohit:
I remember that story—you told me, and I was shocked too. It’s crazy how much time gets burned on something that should be automated or at least streamlined.

[00:15:29 – 00:17:23] Christian:
Exactly. Let’s pull up the poll results. Unsurprisingly, most respondents say they still capture data manually—through spreadsheets or email updates.

Here’s the follow-up question: How satisfied are you with your current system?

[00:17:23 – 00:17:46] Rohit:
It’ll be awkward if everyone says they’re “very satisfied.” That would mean we can shut down the webinar and start job hunting.

[00:17:46 – 00:18:43] Christian:
(laughs) True. Let’s see the responses coming in. Looks like the vast majority are not satisfied. Which, honestly, matches everything we’ve heard in conversations with funds.

[00:18:43 – 00:19:10] Rohit:
Exactly. No one’s happy with how portfolio data is tracked. And that’s the real opportunity—to fix something that everyone struggles with but no one’s solved well yet.

Part III — Building a Solution

[00:19:10 – 00:20:36] Rohit:
Once we recognized how valuable and underused portfolio data was, we decided to build a system that could actually solve the problem.

Our thinking was: this shouldn’t be that hard. We’d build a quick prototype, plug it into Synaptic’s existing platform, and be done in three or four months.

Of course, it didn’t take three or four months. It took eighteen.

That’s how long it took to design something that genuinely worked—something funds could rely on to parse, extract, and standardize portfolio data at scale.

Along the way, we realized there were two distinct approaches to the problem.

[00:20:36 – 00:23:14] Rohit:
The first approach—the most common one—is to send standardized templates or forms to portfolio companies and ask founders to fill them out.

This is the model behind tools like ILPA templates, Visible.vc, or some of the new players like Standard Metrics. The benefit is obvious: the data comes in standardized because founders do the work of inputting it.

But that’s also the biggest flaw. As a founder myself, I can tell you: we miss these updates all the time. Not because we don’t care, but because we’re busy. Compliance rates are spotty.

And when founders do fill these forms, they share only 5–10 carefully massaged metrics—employee count, runway, burn rate, maybe MRR. You miss all the nuance: the operational data, the product metrics, the real signals buried in the details.

So this approach works to an extent, but it gives you a shallow view of the portfolio.

[00:23:14 – 00:25:20] Rohit:
The second approach is to do it internally: have analysts or interns manually extract data from every update and enter it into spreadsheets.

This gives you control and depth—you can capture every metric you care about. But it’s expensive, time-consuming, and inconsistent. When interns change every six months, you lose continuity.

Some funds try outsourcing it to offshore data teams, but that brings its own management overhead and still limits how much detail they can capture.

In reality, most firms only manage to do this for a handful of key companies. For the rest, they fall back to generic summaries.

[00:25:20 – 00:26:29] Rohit:
As a programmer, I naturally lean toward automation. But as we studied this problem more closely, I realized something: the deeper value lies in context.

To get that, you can’t just rely on automation. You need human judgment. Investors who really care about developing an edge will want to use all portfolio updates, not a handful of sanitized metrics.

So we set out to build a system that could automate what’s repeatable but still include expert oversight where context matters most.

[00:26:29 – 00:28:04] Rohit:
The first thing I did was shadow people actually doing this work. Literally sitting behind them as they parsed decks and updates.

I watched their process, took notes, and then tried doing it myself—extracting and cleaning data manually from board decks. After a while, I noticed every task fell into three clear steps:

  1. Extraction – identifying and pulling the relevant numbers and metrics.

  2. Validation – checking that data against multiple reports to ensure it made sense.

  3. Standardization – mapping those metrics to a fund’s internal taxonomy (“net burn” vs “cash burn,” etc.).

Those three steps became the backbone of our design.
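The three steps above can be sketched as a minimal pipeline. This is illustrative only, assuming a parsed update arrives as a simple dict; the alias table, function names, and sample data are hypothetical, not Synaptic's actual implementation:

```python
# Illustrative sketch of the extract -> validate -> standardize steps.
# The ALIASES table and sample update are hypothetical examples.

# Map the many names founders use onto one internal taxonomy.
ALIASES = {
    "net burn": "net_burn",
    "cash burn": "net_burn",       # same concept, different label
    "mrr": "mrr",
    "monthly recurring revenue": "mrr",
}

def extract(update: dict) -> dict:
    """Step 1: pull raw metric/value pairs out of a parsed update."""
    return {k.strip().lower(): v for k, v in update.items()}

def validate(metrics: dict) -> dict:
    """Step 2: sanity-check values; flag anything a human should review."""
    checked = {}
    for name, value in metrics.items():
        ok = isinstance(value, (int, float))
        checked[name] = {"value": value, "needs_review": not ok}
    return checked

def standardize(metrics: dict) -> dict:
    """Step 3: map raw names onto the fund's canonical taxonomy."""
    out = {}
    for name, rec in metrics.items():
        canonical = ALIASES.get(name)
        if canonical is None:
            rec["needs_review"] = True   # unknown metric -> expert in the loop
            canonical = name
        out[canonical] = rec
    return out

update = {"Cash Burn": -350_000, "MRR": 120_000}
result = standardize(validate(extract(update)))
print(result["net_burn"]["value"])   # -350000
```

Note how anything the alias table cannot resolve is routed to human review rather than guessed at, which mirrors the expert-in-the-loop design discussed later in the session.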

[00:28:04 – 00:29:17] Rohit:
We then asked: can we automate this entirely?

With a heavy heart, the answer was no. Not today.

Despite all the advances in AI, full automation doesn’t work for this problem—at least not if you care about accuracy.

Roughly one-third of key metrics in portfolio updates appear inside graphs rather than tables. AI struggles with context from visuals, especially when labels, units, or scales aren’t explicit.

Even public companies, which have large reporting teams, show inconsistencies. Private startups are much worse.

[00:29:17 – 00:31:00] Rohit:
For example, in one Cloudflare report, it’s never stated that numbers are in thousands—you have to infer it. In another report, Shopify used two different names for the same metric within one deck.

AI alone can’t resolve that kind of ambiguity. Humans can.

Sometimes two metrics share the same name but measure different things. Other times, they have different names but represent the same thing. Disambiguating that requires context, pattern recognition, and experience.

[00:31:00 – 00:32:26] Christian:
So this is where the “metric name mess” comes in. Whether it’s net burn, cash burn, or free cash flow, there’s rarely consistency. Even within a single firm, teams track things differently.

[00:32:26 – 00:34:05] Rohit:
Exactly. That’s why our conclusion was clear: the ideal model is AI with a human in the loop.

Let AI handle the repetitive, mechanical tasks—extracting data from tables, identifying patterns in text—but keep an expert involved to apply higher-order judgment.

AI alone hits about 60–80% accuracy. That’s not enough for investors who need to trust their numbers. And worse, AI doesn’t know when it’s wrong—it’s confidently wrong.

So we built our process around an “expert-in-the-loop” system: automation plus oversight. That’s the only way to ensure accuracy and reliability.

[00:34:05 – 00:34:45] Christian:
That’s a good segue. We’ve talked about the process—now let’s focus on what this unlocks.

What happens when you do have clean, standardized portfolio data?

Part IV — The Payoff

[00:34:45 – 00:36:10] Christian:
Once a fund manages to clean and standardize portfolio data, the real question becomes: what can you do with it?

We ran a quick poll during the session: What would you use a clean portfolio database for?
Most respondents chose “alerts when key metrics change” and “benchmarking.” Those two use cases consistently rise to the top.

The truth is, once investors see what’s possible, the number of applications expands fast. It’s like moving from scattered spreadsheets to an intelligent system that connects everything.

[00:36:10 – 00:39:20] Christian:
Let’s start with the obvious one: portfolio company profiles.

When every metric from every update is structured and searchable, you can track a company’s performance in real time. You instantly know whether it’s hitting plan, where it’s deviating, how it compares with peers, and what’s driving those changes.

Instead of chasing data before a board meeting, you spend that time thinking. You walk in already knowing what to ask.

And that’s the point: this isn’t about replacing judgment—it’s about freeing bandwidth for better judgment.

[00:39:20 – 00:40:24] Christian:
Then there’s benchmarking. With standardized data, you can compare companies across your portfolio—or against the broader market—on any metric you choose.

You can build your own Bessemer-style benchmarks, tailored to your portfolio’s sectors and stages. You can even benchmark private companies against public peers, tracking performance patterns over time.

[00:40:24 – 00:41:00] Rohit:
That’s one of my favorite parts. Once you have all this data, you can literally build your own Bessemer-style benchmarks.

You can see which traits correlate with success in your own portfolio—what growth rates preceded breakout outcomes, what burn ratios were sustainable, and where the inflection points occurred.

[00:41:00 – 00:42:18] Rohit:
Beyond static metrics, you can study trajectories.
How do successful companies evolve year over year?
What do the turnaround stories have in common—the companies that struggled early, then figured something out?

This becomes your fund’s second brain—a growing body of pattern knowledge. It codifies intuition into data.

[00:42:18 – 00:43:12] Rohit:
And then there’s alerting.

Once the data is structured, you can set intelligent triggers. For example:
– Flag companies with less than 18 months of runway.
– Highlight when a startup restates metrics multiple times.
– Detect when any number moves two standard deviations away from its usual trend.

You don’t need to read every update line by line. The system surfaces what matters.
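The triggers above can be expressed as simple rules once the data is structured. A minimal sketch, assuming metrics land as plain numbers; the function signatures and thresholds other than those mentioned in the session are hypothetical:

```python
import statistics

# Illustrative alert rules from the discussion; field names and
# signatures are examples, not a real Synaptic API.

def runway_alert(cash: float, monthly_net_burn: float,
                 floor_months: float = 18) -> bool:
    """Flag companies with less than `floor_months` of runway."""
    if monthly_net_burn <= 0:          # not burning cash -> no runway risk
        return False
    return cash / monthly_net_burn < floor_months

def anomaly_alert(history: list[float], latest: float,
                  n_sigma: float = 2.0) -> bool:
    """Flag a value more than n_sigma standard deviations off its trend."""
    mean = statistics.mean(history)
    sigma = statistics.stdev(history)
    return abs(latest - mean) > n_sigma * sigma

print(runway_alert(cash=1_000_000, monthly_net_burn=80_000))   # True: ~12.5 months
print(anomaly_alert([100, 105, 98, 102], 160))                 # True: far off trend
```

A real system would run rules like these over every update as it is processed, so the long tail of the portfolio gets watched as closely as the headline companies.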

[00:43:12 – 00:45:22] Rohit:
Here’s a concrete example I love.

Imagine plotting year-over-year website traffic growth against revenue growth for every consumer-tech company in your portfolio.

You’ll immediately spot patterns:
– Company A drives massive traffic but converts poorly.
– Company B converts well but needs top-of-funnel growth.

Now you can connect those founders—let them learn from each other.

And as an investor, you can even use these patterns for sourcing. If you know that companies with X% traffic growth tend to reach Y% revenue growth later, you can screen new prospects more intelligently.

That’s real alpha.

[00:45:22 – 00:47:06] Rohit:
Another high-value use case is fund planning.

Every VC does valuation models on a case-by-case basis—when a round or exit comes up, people scramble, pull all-nighters, update models manually.

But if portfolio data is standardized, those models can update automatically. You can maintain always-on valuations. When a follow-on or secondary opportunity appears, you already know the company’s current position.

That saves time—and sometimes, entire nights of sleep.

[00:47:06 – 00:48:20] Rohit:
And finally, reporting.

People don’t hate communicating with LPs. They hate collecting data for those reports.

If the data is already structured, you can focus on crafting the narrative—what’s important, what changed, what’s next—instead of wrestling with Excel sheets.

Communication becomes storytelling again.

[00:48:20 – 00:49:06] Rohit:
There’s one last point I want to emphasize: security.

Portfolio updates often contain highly sensitive company data. Any system you build or buy must treat security as non-negotiable.
SOC 2 compliance is table stakes. But beyond that, you need human-process controls, blast-radius containment, and alignment between your team and any external experts handling data.

You can’t afford to lose trust.

[00:49:06 – 00:51:00] Christian:
Right. We’ve gone through rigorous diligence processes ourselves, and those standards matter.

Before we close, a few questions came in.

One LP asked: “As a fund-of-funds investor, how can I apply this approach?”
The answer is straightforward: the same principles apply. Instead of company updates, you ingest GP updates. You can benchmark funds, analyze exposure overlaps, and see how different GPs value the same company.

[00:51:00 – 00:52:47] Rohit:
Exactly. It’s the same logic—just one level higher. You’re still standardizing metrics, only now they’re fund-level instead of company-level.

And because GP updates often include company updates, you get an additional layer of visibility.

[00:52:47 – 00:54:01] Rohit:
Another question was about inputs: what kind of data can Synaptic handle?

Practically everything. Board decks, emails, Zoom links, even WhatsApp or Slack updates. We’ve built integrations that let investors forward meeting notes or message transcripts, which we process, extract metrics from, and then discard the originals for security.

The richer the data, the better the output.

[00:54:01 – 00:55:13] Christian:
Someone else asked how funds use benchmarking beyond analysis.

One major use case is due diligence: comparing a potential investment to your top-quartile portfolio companies on key metrics like CAC payback or capital efficiency.

It’s not theoretical—you can see where a target sits relative to your winners.

[00:55:13 – 00:56:14] Rohit:
Exactly. And that extends to portfolio support.

When you give founders feedback backed by data—“This is how the best-performing companies in your sector allocate spend; here’s where you differ”—the conversation changes. It’s no longer opinion versus opinion. It’s fact-based, constructive, and actionable.

That’s the real goal: helping founders make better decisions.

[00:56:14 – 00:57:28] Christian:
And that’s a perfect note to end on.

To everyone who joined and stayed through the session—thank you.
We love these conversations because they come straight from our work with investors.

[00:57:28 – 00:57:45] Rohit:
Thanks, everyone. Appreciate your time.