An Empirical Perspective on Startup Valuations
Research landing page for the Inference and Machine Learning Group, my research lab at Radicle, where we apply modern statistical methods to better understand startups and venture capital. In An Empirical Perspective on Startup Valuations, I present a statistical model for estimating an undisclosed post-money valuation from the amount of capital raised and the financing round’s venture capital stage classification (Seed, Series A, B, etc.). We also built an online tool that lets anyone estimate a valuation, available at https://rad.report/data. Read the paper on Medium or download the PDF.
Generating Invariant’s digital assets with data science and machine learning
Invariant is an applied machine learning product studio based in New York City.
I designed the linear gradient background image for the home page using Python’s math module and the PIL imaging library. Invariant’s other visual assets are pretty cool too, including a neural network binary classifier with four hidden layers (below), a logistic regression, an entity recognition sample using spaCy’s natural language processing API, and the plotted results of an unsupervised k-means clustering algorithm. See them all at https://invariantstudios.com. The site itself was built with Django.
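A minimal sketch of how a linear gradient background can be generated with PIL (Pillow). The dimensions and colors here are placeholders, not Invariant’s actual palette:

```python
# Sketch: render a horizontal linear gradient with Pillow.
# WIDTH/HEIGHT and the two RGB endpoints are illustrative placeholders.
from PIL import Image

WIDTH, HEIGHT = 640, 360
start = (20, 30, 90)   # placeholder start color (RGB)
end = (200, 80, 160)   # placeholder end color (RGB)

img = Image.new("RGB", (WIDTH, HEIGHT))
pixels = img.load()
for x in range(WIDTH):
    t = x / (WIDTH - 1)  # interpolation factor: 0.0 at left edge, 1.0 at right
    color = tuple(round(s + (e - s) * t) for s, e in zip(start, end))
    for y in range(HEIGHT):
        pixels[x, y] = color

img.save("gradient.png")
```

Interpolating per column and filling each column’s pixels keeps the inner loop trivial; for large images, building the gradient row once and resizing vertically would be faster.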
Mailchimp Customer Sentiment Word Cloud
One of my final projects on MailChimp's management team involved using Python's NLTK 3.0 toolkit to obtain a high-level overview of customer feedback for the VP of Customer Support. The word cloud is modeled after MailChimp's mascot, Freddie.
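The core of any word cloud is a word-frequency table. The original pipeline used NLTK 3.0; the sketch below reduces the tokenizing and stop-word filtering to the standard library so the idea stands alone (the stop-word list and sample feedback are made up):

```python
# Simplified sketch of the frequency-counting step behind a word cloud.
# Stop words and feedback strings are illustrative, not real data.
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "and", "i", "it", "is", "to", "my", "was"}

def word_frequencies(feedback):
    """Count content words across a list of customer-feedback strings."""
    counts = Counter()
    for text in feedback:
        tokens = re.findall(r"[a-z']+", text.lower())
        counts.update(t for t in tokens if t not in STOPWORDS)
    return counts

feedback = [
    "Support was fast and the team was helpful",
    "Helpful support, fast replies",
]
print(word_frequencies(feedback).most_common(3))
```

The resulting counts can then be fed to any word-cloud renderer, with a mask image (Freddie, in this case) constraining the layout.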
The Investor Cluster Score™
Consider two freshly funded startups, Gamma and Theta. If Gamma is backed by Sequoia Capital, Benchmark, and Accel Partners, and Theta is funded exclusively by Sequoia Capital, then you would intuitively conclude that Gamma must be more attractive in some way, even though Sequoia Capital is in both sets. The addition of Benchmark and Accel Partners to Gamma’s set gives us more signal. It’s hard to describe, but with Gamma we have the qualitative validation of two additional institutions with lots of capital and huge reputations.
In general, we believe that convincing not just one but multiple mammoth VCs to sponsor an idea is incredibly difficult, and that belief gives us more confidence in a startup that is able to do so. Perhaps it’s because doing so implies that the CEO may be a great strategist, or that the startup has stronger-than-usual product-market fit, or strong early customer traction. We’re not sure exactly why, but our prior beliefs make us commonly assess a startup’s potential by the quality of its investors. It’s no guarantee of success, of course — venture capital is a game of probabilities — but it nonetheless instills more confidence in us. So much so that it’s an important component of any press release announcing a funding round.
Since this signal is so commonplace, we figured we should try to measure it, and we have: we’re calling it the Investor Cluster Score™ (ICS). Like most things in Radicle’s data science lab, the ICS originally came about while I was conjuring up additional features for our Startup Anomaly Detection™ algorithm, which estimates the probability that a startup will achieve an exit via an initial public offering or acquisition.
Broadly speaking, the Investor Cluster Score™ is calculated by an algorithm that evaluates the information contained in a few distinct signals, including the size of venture capital firms, their level of investment activity, and their maturity relative to peers. Average outcomes are not included in our feature space because, as mentioned above, venture capital is a game of probabilities. We assume that funds that attract more capital have historically been more successful in some way, and hence more prominent in people’s minds (i.e., stronger signal). We start by computing, for all VCs, micro VCs, accelerators, incubators, angel entities, and individuals with at least 50 prior investments, a VC Prominence Score™.
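The actual VC Prominence Score™ algorithm and its weights are not public; the sketch below only illustrates one plausible way the signals named above (fund size, investment activity, maturity) could be standardized and blended into a single score. All numbers and weights are hypothetical:

```python
# Hypothetical sketch: blend per-fund signals into one prominence score
# by z-scoring each signal column and taking a weighted sum.
# The weights and the fund tuples are invented for illustration.
from statistics import mean, stdev

def z_scores(values):
    """Standardize raw signal values to mean 0, standard deviation 1."""
    mu, sigma = mean(values), stdev(values)
    return [(v - mu) / sigma for v in values]

def prominence_scores(funds, weights=(0.5, 0.3, 0.2)):
    """funds: list of (capital_raised, deal_count, years_active) tuples."""
    cols = list(zip(*funds))                        # one column per signal
    standardized = list(zip(*map(z_scores, cols)))  # back to per-fund rows
    return [sum(w * s for w, s in zip(weights, row)) for row in standardized]

# Three illustrative funds: a mega-fund, a micro VC, a mid-size firm.
funds = [(8000, 400, 45), (150, 60, 8), (1200, 250, 20)]
scores = prominence_scores(funds)
print(scores)  # the fund that leads on every signal scores highest
```

Standardizing first keeps any one signal (e.g., capital under management) from dominating purely because of its units.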
The skewed VC Prominence Score distribution suggests that there are only a few dozen venture capital institutions that produce a really strong market signal — your Sequoias of the world. We think this accurately reflects the ground truth, because there really aren’t many of them. Also notice how a few accelerators, incubators, and individuals have higher prominence scores than many full-fledged VC entities (think Y Combinator, Techstars, Peter Thiel, Yuri Milner, Fabrice Grinda).
We then compute the Investor Cluster Score™ by aggregating the VC Prominence Score™ of each investor that shows up in a startup’s capitalization table. Doing so is a non-trivial engineering task at the intersection of natural language processing and data analysis, but the end result is a vector that captures the characteristics of the capital behind a startup.
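The exact aggregation behind the Investor Cluster Score™ is not spelled out above; this sketch only illustrates the general shape of the step: look up the prominence score of each investor on a startup’s cap table and summarize the cluster. The names are real firms from the Gamma/Theta example earlier, but the scores are made up:

```python
# Hypothetical sketch: aggregate per-investor prominence scores into a
# cluster-level summary. Scores below are invented for illustration.
PROMINENCE = {
    "Sequoia Capital": 9.5,
    "Benchmark": 9.1,
    "Accel Partners": 8.8,
}

def investor_cluster_score(cap_table):
    """Summarize the prominence of a startup's investor set."""
    scores = [PROMINENCE.get(name, 0.0) for name in cap_table]
    return {"sum": sum(scores), "max": max(scores), "n": len(scores)}

gamma = investor_cluster_score(["Sequoia Capital", "Benchmark", "Accel Partners"])
theta = investor_cluster_score(["Sequoia Capital"])
print(gamma["sum"] > theta["sum"])  # Gamma's cluster carries more signal
```

Note how this matches the Gamma/Theta intuition: both sets share the same strongest investor, but Gamma’s aggregate is larger.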
Is it possible that the startup failure rate is so high partly because conventional wisdom tells founders to prepare for 12–18 months between financing events, when in reality they should be preparing for longer, as experienced VCs suggest? We ingested all of Crunchbase to find out, and the answer is yes: the data says entrepreneurs should plan for at least 18–21 months of runway.
Plot: Average kernel density estimate for 5 distinct funding sequences, with vertical lines indicating the mean and median. Statistics: n = 13,916, mean = 20.6, median = 18, s.d. = 14.6.
Kernel density estimates for 5 distinct funding sequences, with vertical lines indicating the average value for each sequence. Sample sizes — Seed to Series A (n=2623), Series A to Series B (n=5558), Series B to Series C (n=3422), Series C to Series D (n=1644), Series D to Series E (n=669). Radicle, 2017.
Inspired by the Herfindahl-Hirschman Index, the Capital Concentration Index™ (CCI) measures the degree to which venture capital dollars are consolidated among competing startups in a sector. The CCI is calculated by taking the sum of the squares of the capital shares for all startups within a sector. In general, the CCI approaches zero when a sector consists of a large number of startups with relatively equal levels of capital, and reaches a maximum of 10,000 when a sector’s total invested capital is consolidated in a single company. The CCI increases both as the number of startups in the sector decreases and as the disparity in capital traction between those startups increases.
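The CCI computation described above can be sketched directly: express each startup’s capital as a percentage share of the sector total, square the shares, and sum. The dollar figures below are illustrative:

```python
# The CCI as defined above: sum of squared percentage capital shares.
# Ranges from near 0 (many equal startups) to 10,000 (a single company).
def capital_concentration_index(capital):
    """capital: list of dollars raised per startup in one sector."""
    total = sum(capital)
    shares = [100 * c / total for c in capital]  # percentage shares
    return sum(s ** 2 for s in shares)

print(capital_concentration_index([250, 250, 250, 250]))  # 4 equal startups -> 2500.0
print(capital_concentration_index([1000]))                # one company -> 10000.0
```

This mirrors the Herfindahl-Hirschman Index exactly, with invested capital standing in for market share.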
Early in 2017 we mapped out the CCI for social networks (Facebook), search engines (Google), e-commerce retailers (Amazon), and ride sharing services (Uber). Including capital injected via IPOs produces the dynamics shown above. While clearly the dominant e-commerce platform, Amazon has historically faced considerable opposition from small and large competitors around the world, and that shows in the competitive CCI levels for the e-commerce retail sector. The sectors defined by Facebook and Google are interesting in that they followed similar capital-concentration paths on shifted timelines. Both social media and search were very competitive sectors until Facebook and Google captured the dominant position, with Google’s IPO at ~8 years and Facebook’s IPO at ~15 years on the sector timeline. Even though Facebook’s IPO valued the company far higher than Google’s did ($104b vs $23b), it was less significant from a CCI point of view (peak capital concentration was lower). Put another way, Facebook contended in a stronger competitive climate.
To better understand coin correlations we deployed an Affinity Propagation algorithm and found three distinct clusters of crypto assets, at the top end of the market capitalization table, that move in tandem.
Why Affinity Propagation? From the beginning of our research outline, we identified some essential desiderata the algorithm needed to satisfy in order to produce scientifically sound results. Overall, we found that Affinity Propagation not only met all our desiderata, but is also just generally a very powerful algorithm in theory and in practice.
Created by Frey and Dueck, Affinity Propagation takes as input measures of similarity between data points and exchanges real-valued messages between data points until high-quality clusters naturally emerge. While exchanging messages, the algorithm identifies exemplars: observations that do a good job of describing a cluster. You can think of exemplars as centroids, except that they are not the average of all objects in a group but rather a real observed data point that describes its closest neighbors. For our purposes, that means each exemplar is a real crypto asset. With the exemplars identified, the algorithm itself determines how many natural clusters exist in the data.
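A minimal illustration of the technique using scikit-learn’s implementation of Frey and Dueck’s algorithm. The input here is synthetic "return series" data, not the crypto prices from the actual study; two groups of fake assets are built to move in tandem, and the algorithm is left to discover the clusters and exemplars on its own:

```python
# Affinity Propagation on synthetic co-moving "return series".
# The data is invented: two groups of 5 assets, each group tracking a
# shared base series plus small noise. No cluster count is specified.
import numpy as np
from sklearn.cluster import AffinityPropagation

rng = np.random.default_rng(0)
base_a = rng.normal(0, 1, 20)   # shared driver for group A
base_b = rng.normal(0, 1, 20)   # shared driver for group B
X = np.vstack([base_a + rng.normal(0, 0.1, 20) for _ in range(5)]
              + [base_b + rng.normal(0, 0.1, 20) for _ in range(5)])

ap = AffinityPropagation(random_state=0).fit(X)
print(ap.cluster_centers_indices_)  # indices of the exemplar rows
print(ap.labels_)                   # cluster assignment per asset
```

Unlike k-means, no `n_clusters` is passed: the number of clusters falls out of the message-passing, and each `cluster_centers_indices_` entry points at a real row of `X`, just as each exemplar in the study is a real crypto asset.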
On the left we present the clusters in time series form, with the exemplar in white and the centroid line of all objects in the cluster in bright yellow. The other constituents in the space are shown in faint yellow. On the right we present the corresponding box plot for each cluster, which effectively provides a look at the same data from a different perspective. The Affinity Propagation algorithm found three exemplars: Ripple, Tether, and DigixDAO.
There are a number of notable insights that can be inferred from the plots above. The first is clear: we found more than one cluster, which implies that no, crypto assets do not all simply follow Bitcoin’s volatile news cycle. That said, there do seem to be natural clusters of coins that move in tandem, and we expect more to emerge as the sample of crypto assets grows.
Overall, this study led us to conclude that a fundamentals framework for assessing crypto assets is an appropriate methodological approach. When evaluating crypto platforms, we look at the problem they’re trying to solve, the size of the opportunity they’re competing for, whether or not their competitive context is ripe for decentralization, and much more. All of our work is available at rad.report/crypto.