When 2+2 Does Not Equal 4

Editor’s note: This post was contributed by Katharine Gregorio. Katharine has over a decade of experience in food and agriculture with Cargill, The Climate Corporation, Monsanto, and the U.S. Government. She is currently an advisor to Finistere Ventures and CropX.

Overview

The AgTech industry is associated with lofty numbers. $3 trillion global agriculture market opportunity. 40% of the global workforce engaged in agriculture. $25 billion invested by venture capitalists and agribusinesses. ~$1 billion purchase of The Climate Corporation (Climate) by Monsanto. Hundreds of thousands of users. Hundreds of millions of acres.

Of all of these data points, arguably the most significant are related to adoption. The reason? These figures demonstrate actual implementation, use, and market penetration.

Yet, these uptake figures are also incredibly opaque. The American mathematician William Thurston famously observed that mathematics “is not about numbers, equations, computations or algorithms: it is about understanding.” In AgTech, however, understanding is challenged by two key factors: metrics and user behavior.

What’s in a Metric?

To understand the numbers in AgTech requires analysis of both the unit of measure and the methodology used to calculate it. Is the unit of measure an acre? A farm? A user? An account? A custom combination of several of these? Further complicating the unit of measure is the methodology used to calculate it. Is the approach cumulative over the company’s existence, annual, or seasonal? Is it absolute or rolling? And so on.

The understanding of metrics in AgTech is difficult primarily because each company both determines which metrics to publish (if any) and self-reports figures. Each company chooses metrics which show its product or service in the best manner. As a result, there are numerous permutations of metrics and methods in existence, and it is highly likely no two AgTech companies report figures in quite the same manner.


For example, on the surface, Climate, FarmLogs and Farmers Business Network (FBN) are all digital agriculture companies that publish acre metrics. But an acre is not defined uniformly across these three companies. Climate publishes mapped acre and “active acre” metrics. FarmLogs typically refers to acre traction by the percentage of farms with 100+ acres using its software. (See an example in this New York Times article.) And the Grand Forks Herald quoted FBN reporting both farmland acre and data acre metrics: FBN had “1.5 million acres of farmland in the system for a total of about 8 million acres worth of processed data.”

Importantly, acres are only one metric with surprising variability in measurement. Similar issues exist for “users” and “farms” metrics and show up across the AgTech landscape. The lack of transparency and standardization across metrics results in more questions than answers about company-level performance.

Try vs. Buy

While metrics present challenges to understanding a single company’s performance, they also contribute to confusion at the industry level. User behavior further complicates understanding.

In agriculture, most operators have ~40 growing seasons in their lifetime. While there is a willingness to experiment to maximize the success of these 40 opportunities, there is also an inherent skepticism of technologies promising silver bullets. Thus, operators display a willingness to try, but not always buy, a product or service.

The reality of trying before buying is common. Farmers may be quick to test a new technology. If they like what they see, they might allow for a pilot with a more defined program and better measurement. If the pilot goes well, they will consider implementation. Typically, it takes a minimum of three years before a farmer is ready to commit to a technology solution across his or her entire operation.


Importantly, willingness to try is not necessarily limited to one product within a technology segment. At any given point, a farmer is likely trying several technologies at once, but only seriously piloting one or two (if that many). Overlap is certain, but the full extent of duplication is unknown and rarely questioned. Anecdotally, my own interactions (while working booths at various farm shows and conducting market research) suggest most farmers are trialing multiple digital agriculture products at any given time. When helping a user navigate a particular app on their phone or tablet, I always saw several competing offerings on each person’s device. And this held true across farmers as well as landlords, retailers and investors in the space.

Chronic Over-reporting

The combination of self-interested, company-specific metrics and trial behavior obscures adoption reality. Understanding overlap requires additional context, which is hard to uncover.

For example, collectively the three aforementioned digital agriculture companies (Climate, FarmLogs and FBN) claim ~160M acres in their systems. (Climate reported 92M acres across its platform, FarmLogs claims 60M acres, and FBN claims 8M acres.) Given there are ~180M corn and soybean acres in production in the US in any given year, and even accounting for FarmLogs and FBN including acres not growing corn or soybeans, it is highly likely each company’s acre metrics overlap with the others’.


Why? Without duplication, these figures suggest 89% of US corn and soybean acres have adopted one of the three digital agriculture solutions. However, digital agriculture is a nascent industry still very early on the Rogers Adoption/Innovation Curve. 89% penetration would place an approximately four-year-old industry at the tail end of that curve, in the “Laggards” stage, which is highly unlikely.
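The arithmetic behind that claim can be sketched in a few lines. This is only an illustration using the approximate, self-reported figures quoted above; the company names and numbers are taken from this post, not from any audited source, and the calculation assumes zero overlap, which is exactly the assumption being questioned:

```python
# Self-reported acre figures quoted in this post (approximate, in millions)
reported_acres = {"Climate": 92, "FarmLogs": 60, "FBN": 8}

# Approximate US corn and soybean acres in production in a given year (millions)
us_corn_soy_acres = 180

# Naive total assumes no acre is counted by more than one company
naive_total = sum(reported_acres.values())
implied_penetration = naive_total / us_corn_soy_acres

print(f"Naive total: {naive_total}M acres")                 # 160M acres
print(f"Implied penetration: {implied_penetration:.0%}")    # 89%
```

Because 89% penetration is implausible for a four-year-old category, the naive no-overlap assumption must be wrong: the same acres are almost certainly being counted by more than one company.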

Yet, without additional details, it is hard to gain further insight into the digital agriculture space: What is each company’s unique traction? How much overlap is there across these three companies and their other competitors? What is the category’s true traction percentage?

The over-reporting of adoption in digital agriculture is just one example of a reality that exists across the hundreds of AgTech companies in dozens of categories operating today.

The over-reporting of adoption across the AgTech landscape is significant for several reasons. First, it raises questions about the business opportunity within AgTech. What is really being used? What are users paying for? Second, it is likely that AgTech category winners will emerge over time. However, without a more transparent picture, it is unclear who is winning and hard to determine who will win. As money continues to be pumped into AgTech, investors should be asking hard questions and doing deep diligence to understand adoption reality. As businesses launch, entrepreneurs should ensure they consider user behavior and competitive overlap to maximize success, even as they showcase vanity metrics.

Summary

Understanding adoption in AgTech is challenged by two factors: metrics and user behavior. Because metrics are self-reported and self-interested, they should be met with healthy skepticism. Because it is normal for users to try a good or service before buying it, this behavior should factor into the assumptions used to calculate adoption and to make competitive comparisons.


Adoption of most technologies does not happen overnight, and this could not be more true for AgTech. While total addressable markets are significant, and long-term opportunities are real, at present, market penetration remains low, despite lofty company reported figures of adoption and usage that, in the absence of context, suggest otherwise. As with any industry, solutions that solve problems and create value for users will win in the long run.
