We raised a $19M Series A for enterprise AI testing

Scott Clark
October 8, 2024

We’ve made a lot of progress over the last year at Distributional. Since raising our seed round, we have validated the enterprise need for confidence in AI applications, grown our team, and started deploying our enterprise testing platform to address this problem in collaboration with a dozen design partners. I couldn’t be more proud of our team. 

To push us even faster on this journey, I’m thrilled to announce our $19M Series A led by Two Sigma Ventures with participation from Andreessen Horowitz, Operator Collective, Oregon Venture Fund, Essence Venture Capital, Alumni Ventures, and dozens of angel investors. We are excited to partner more deeply with Two Sigma Ventures, who have had a front-row seat to our journey and a unique perspective on the AI testing problem, especially for financial and regulated industries. We are using this fresh capital to continue to expand our team, accelerate our roadmap, and scale our enterprise deployments.

Why are we so excited to pour fuel on this fire? During our first year, we’ve had thousands of hours of conversations with over 100 large financial, industrial, and technology enterprises. These conversations have made a few things clear: confidence in AI applications is a critical problem, we have a unique solution through an enterprise-first testing platform, and there is a lot of opportunity to expand on what we’ve built.

Challenge: AI testing is a unique, operationally intensive problem

Testing is the primary way to gain confidence that traditional software applications are behaving as expected. But AI is complex, which makes it difficult to test with traditional approaches. It is non-deterministic, which requires writing statistical tests on distributions of many data properties to quantify behavior. It is non-stationary, which requires continuous, adaptive testing across the AI lifecycle, from development through deployment and production, to catch behavioral change. And it is multi-component, which requires testing all dependencies to pinpoint and resolve potential issues.
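
To make that first point concrete, here is a minimal illustrative sketch (the helper names are hypothetical, not our product’s API): instead of asserting on any single output, it compares the distribution of one derived property, response length, between a baseline run and a candidate run.

```python
# Illustrative sketch only: test a distribution of a derived property rather
# than a single output of a non-deterministic application.
from scipy.stats import ks_2samp


def response_lengths(responses: list[str]) -> list[int]:
    """Derive one testable property (whitespace token count) from raw outputs."""
    return [len(r.split()) for r in responses]


def length_distribution_unchanged(baseline: list[str], candidate: list[str],
                                  alpha: float = 0.01) -> bool:
    """Two-sample Kolmogorov-Smirnov test on the property's distribution.

    Returns True when we cannot reject the hypothesis that both runs draw
    their response lengths from the same distribution.
    """
    _stat, p_value = ks_2samp(response_lengths(baseline), response_lengths(candidate))
    return p_value >= alpha


# Usage: run the same prompt set through yesterday's and today's deployment and
# flag the candidate for review if the property's distribution has shifted.
# if not length_distribution_unchanged(baseline_responses, candidate_responses):
#     alert("response-length distribution shifted between runs")
```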

Because of all of this, testing is a gap in the AI stack today. Developer tools focus on helping teams rapidly prototype AI applications, from eval tools for constructing performance-related benchmarks to workbenches that help get an end-to-end proof of concept together. But these tools don’t give teams a standardized process to gain confidence that their AI app is behaving consistently, especially once the application goes into production. Monitoring tools often focus on higher-level metrics and specific instances of outliers, which gives a limited sense of consistency but little insight into broader application behavior before it affects business metrics. Testing fills the gap between these two solutions, but it also enhances them: the path to more robust evaluation and validation is through better testing in development and deployment, and monitoring becomes more insightful with continuous, adaptive testing in production.

We didn’t invent this approach; statistical testing has been around for centuries. AI teams tend not to fully implement these techniques because they are operationally intensive. It takes a combination of AI, engineering, statistical, product, and platform expertise to build a deep, automated, and standardized approach to this problem, and that combination is rarely the core job of a single team.

Solution: AI testing with depth, automation, and standardization

So what have we done about it? We’ve built the modern platform for enterprise AI testing to address this problem and remove the operational burden on enterprises of building and maintaining their own solutions or cobbling together incomplete ones from other tools. By proactively addressing these testing problems with Distributional, AI teams can deploy with more confidence and catch issues with AI applications before they cause significant damage in production.

We’ve designed our platform with a simple principle in mind: make it easy to get to actionable value and empower customization to increase this value over time. By making this process more efficient, teams are freed up to focus on their mandate of creating value by building better applications, and to resolve issues with confidence when they do arise. This platform has three primary capabilities:

  • Depth: To handle the non-deterministic, non-stationary, and multi-component nature of AI applications, teams need to write statistical tests on distributions of properties of their applications and data. We’ve designed the first purpose-built platform with this approach to testing, enabling AI teams to get visibility into the consistency and performance of all components of their AI applications and to take action with insightful analysis.
  • Automation: The platform gets teams to value quickly by automating the collection of application data, the derivation of testable properties, the creation of statistical tests, the surfacing of insights for analysis, and the recalibration of tests to fit expectations (see the sketch after this list). Users can provide further context or feedback to enhance this process, and can fully customize aspects of it to fit the bespoke testing needs of each application.
  • Standardization: We built our solution to address the needs of enterprises. We provide visibility across the organization into what, when, and how AI applications were tested and how that has changed over time. We provide consistency in how teams approach AI testing, enabling governance and leadership teams to audit, through reporting, how risk is mitigated for each AI application throughout its lifecycle. And we increase the efficiency of AI teams by providing a repeatable testing process for similar applications via shareable templates, configurations, filters, and tags.
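
As a rough illustration of what that automation loop looks like in miniature (the record fields and property definitions below are hypothetical examples, not our platform’s schema), this sketch derives a few testable properties from logged application records and runs one distribution test per property:

```python
# Rough illustration of the automation loop: derive properties from logged
# records, run one distribution test per property, and report what shifted.
import numpy as np
from scipy.stats import ks_2samp

# Each logged application record is assumed (hypothetically) to look like:
# {"prompt": str, "response": str, "latency_ms": float, "retrieved_docs": int}
PROPERTIES = {
    "response_length": lambda rec: len(rec["response"].split()),
    "latency_ms": lambda rec: rec["latency_ms"],
    "retrieved_docs": lambda rec: rec["retrieved_docs"],
}


def run_distribution_tests(baseline: list[dict], candidate: list[dict],
                           alpha: float = 0.01) -> dict:
    """Run one two-sample distribution test per derived property."""
    results = {}
    for name, derive in PROPERTIES.items():
        base_vals = np.asarray([derive(rec) for rec in baseline], dtype=float)
        cand_vals = np.asarray([derive(rec) for rec in candidate], dtype=float)
        _stat, p_value = ks_2samp(base_vals, cand_vals)
        results[name] = {"p_value": float(p_value), "shifted": p_value < alpha}
    return results


# Surfacing insights then reduces to reporting which properties shifted:
# for name, res in run_distribution_tests(baseline_records, candidate_records).items():
#     if res["shifted"]:
#         print(f"{name}: distribution changed (p={res['p_value']:.4f})")
```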

Onward: solving the AI testing problem at scale

I’m incredibly proud of what we’ve built in such a short period of time, but there is a lot more to do. The path to reliable AI starts with having a reliable AI testing platform. We’re excited to use this fresh round of funding to seize this opportunity.

If you’re just as excited to tackle this opportunity, there are a few ways to engage with us:

  • Join the team: We have an experienced team addressing this critical problem with a unique solution. We’re planning to double in size over the coming months and would love for you to join us. Apply on our career page.
  • Get product access: We’re also scaling our customer deployments. If you relate to any of the challenges covered above, reach out to get product access. Our team will get in touch with you to learn more about your needs.
  • Learn more: Read our blog, watch our demos, follow us on LinkedIn or X, and sign up for updates.
