The AI Equation: Creating More Value from AI Applications
Damien Henry, Posted Jan 1 2025
Having worked in machine learning for over 10 years, I've witnessed a dramatic acceleration in the field recently, with AI becoming an increasingly significant part of my work. Through this journey, I've gained valuable insights about creating value with AI that I'll share today.
Let's start with this key insight: despite all the hype surrounding AI, "Intelligence" alone accomplishes nothing. Intelligence requires a physical form—whether biological or digital—and energy to exist. It needs both input and something to act on its output. When evaluating AI products, we can break down their value creation into a simple equation. Intelligence is just one piece of this puzzle, as we'll explore.
The AI equation
Value = Intent * Context * External Knowledge * Intelligence * Connections * Trust
- Value is what the user wants to achieve.
- Intent represents how users communicate their desired outcomes.
- Context includes everything users know that LLMs don't—from personal information to project-specific details.
- External Knowledge encompasses information users need but don't currently have. For critical tasks, users typically need more reliable sources than LLMs or web-based information.
- Intelligence serves as the connector that understands intent, knowledge, and context, translating them into actionable results.
- Connections are the pathways that transform decisions into actions, such as API access or ecosystem integrations.
- Trust is essential at all levels: users must trust the UX to understand their needs and context, trust the knowledge and data sources being used, and trust the system's decision-making ability. This trust becomes particularly vital when AI operates autonomously.
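Because the factors multiply rather than add, a weak or missing factor collapses the whole product, no matter how strong the others are. A toy sketch makes this concrete (every score below is invented purely for illustration):

```python
# Toy illustration of the multiplicative reading of the equation.
# All scores are invented for illustration only.
factors = {
    "intent": 0.9,              # the product captures what the user wants
    "context": 0.8,             # it knows most of the project background
    "external_knowledge": 0.7,  # decent curated sources
    "intelligence": 0.95,       # a state-of-the-art model
    "connections": 0.0,         # ...but it cannot act on anything
    "trust": 0.9,
}

value = 1.0
for score in factors.values():
    value *= score

print(value)  # 0.0: a brilliant model that cannot act creates no value
```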
Creating More Value
When creating a product based on AI, we can use this equation to increase the value we generate by acting on these levers. Let's walk through each parameter and see which ones we can activate.
Intent
Intent represents how users communicate their desired outcomes.
ChatGPT, Claude, and other major players use chatbots as their primary interface. Chatbots are indeed an amazing way to let users express their needs, especially because they offer multi-turn interactions. If the tool doesn't understand the intent initially, users can refine their request through conversation until they feel understood. However, this system has limitations. Sometimes it's more efficient to undo an interaction and start over rather than trying to clarify a misunderstanding with the chatbot. But the biggest limitation lies in language itself.
This creates a significant opportunity for startups looking to differentiate themselves from OpenAI and similar companies. Very few tasks require purely language-based interaction. In practice, depending on your work, you need to interact with written documents, forms, code, slides, images, drawings, sketches, videos, plans, schedules, maps, and more. Currently, the only way to communicate with AI about these elements requires either converting them to text or sending screenshots.
In other words, limiting users to expressing their needs through language (or images) creates a huge bottleneck. Every startup developing a novel product should ask themselves: how can we capture our users' intent and needs at the right level of abstraction for AI to process?
For example, text editors still make users type character by character—which seems ridiculous in the AI era. The open question is: what novel interactions can we create to let users write without typing character by character? It's a fascinating problem that involves capturing the essence of the text to write directly from someone's brain. There's tremendous room for creativity here.
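One hypothetical direction is to let the application, not the user, carry intent to the model: capture what the user wants as structured data and serialize it into a prompt. Everything in this sketch (the EditIntent type and its fields) is invented for illustration, not an established API:

```python
# Hypothetical sketch: capturing editing intent at a higher level of
# abstraction than keystrokes. All names here are invented.
from dataclasses import dataclass, field

@dataclass
class EditIntent:
    goal: str          # e.g. "make this paragraph more formal"
    target: str        # the text or document fragment being acted on
    constraints: list[str] = field(default_factory=list)

def to_prompt(intent: EditIntent) -> str:
    """Serialize the structured intent into a model prompt."""
    constraints = "\n".join(f"- {c}" for c in intent.constraints)
    return f"Task: {intent.goal}\nConstraints:\n{constraints}\nText:\n{intent.target}"

intent = EditIntent(
    goal="Rewrite in a more formal register",
    target="hey team, quick heads up that the demo slipped to friday",
    constraints=["keep it under two sentences", "audience: executives"],
)
print(to_prompt(intent))
```

The point of the sketch: the user never typed a prompt. The interface gathered the goal, the target, and the constraints through its own affordances.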
Context
Context includes everything users know that LLMs don't—from personal information to project-specific details.
Capturing intent is one thing; capturing context is another. Most interesting work requires successfully completing multiple complex tasks, and these tasks all share the same context. But capturing context isn't just about efficiency (sparing users from having to repeat themselves).
Context represents the environment and background information that shapes how AI should interpret and respond to tasks. This includes project-specific details, user preferences, historical interactions, and any constraints or requirements that influence the work. Good context helps AI understand not just what needs to be done, but how it should be done within the given parameters.
The challenge with capturing context is that it often exists outside the product itself. Context is typically scattered across platforms and media: it might live in Slack conversations, Google Docs, informal team discussions, email threads, or even current news events. This distributed nature makes context particularly challenging to capture and integrate effectively.
The key challenge for AI products is finding elegant ways to pull in this external context without creating additional friction for users.
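As a sketch of what that can look like, the snippet below assembles fragments from several sources into a single prompt. The fetch_* functions are invented stubs standing in for real connectors (the Slack API, a Google Docs export, IMAP, and so on):

```python
# Sketch: aggregating scattered context before calling a model.
# The fetch_* functions are invented stubs standing in for real connectors.

def fetch_slack_thread(channel: str) -> str:
    return "Design decision: we ship the v2 API first; v1 stays frozen."

def fetch_project_doc(doc_id: str) -> str:
    return "Spec: all endpoints must stay backwards compatible until Q3."

def build_prompt(task: str) -> str:
    fragments = [
        fetch_slack_thread("#platform"),
        fetch_project_doc("spec-142"),
    ]
    context = "\n".join(f"- {f}" for f in fragments)
    return f"Context:\n{context}\n\nTask: {task}"

print(build_prompt("Draft the migration plan for the v2 rollout."))
```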
External Knowledge
External Knowledge encompasses information users need but don't currently have. For critical tasks, users typically need more reliable sources than LLMs or web-based information.
While Large Language Models (LLMs) have absorbed vast amounts of knowledge from the open web during their training, this represents only a fraction of the world's valuable information. Many companies build their business models around collecting, curating, and selling specialized datasets that aren't freely available online. Additionally, LLMs tend to generate balanced, averaged responses that try to accommodate multiple viewpoints, which isn't always helpful for decision-making.
For critical tasks or precise decision-making, products need to leverage authoritative, detailed knowledge sources. This is what separates a vague, hedged response from ChatGPT from an actionable, precise answer provided by a product with access to high-quality, specialized information. The difference in value between these two approaches can be dramatic: generic knowledge versus deep, reliable expertise that enables confident action.
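A minimal retrieval sketch of this idea, assuming a small curated in-memory corpus: retrieve the authoritative passages first, then instruct the model to answer only from them. A production system would use embeddings and a vector store; naive keyword overlap is used here only to keep the example self-contained:

```python
# Sketch: grounding answers in curated, authoritative knowledge.
# The corpus entries are invented; keyword overlap stands in for
# real embedding-based retrieval.
CURATED_CORPUS = [
    "Regulation X, art. 4: data must be stored in-region for 5 years.",
    "Internal policy: customer PII is never sent to third-party APIs.",
]

def retrieve(query: str, corpus: list[str], k: int = 1) -> list[str]:
    """Rank curated passages by keyword overlap with the query."""
    words = set(query.lower().split())
    ranked = sorted(corpus, key=lambda p: -len(words & set(p.lower().split())))
    return ranked[:k]

def grounded_prompt(question: str) -> str:
    sources = "\n".join(retrieve(question, CURATED_CORPUS))
    return f"Answer using ONLY these sources:\n{sources}\n\nQuestion: {question}"

print(grounded_prompt("How long must data be stored?"))
```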
Intelligence
Intelligence serves as the connector that understands intent, knowledge, and context, translating them into actionable results. To be useful, it must maintain connections with external systems.
While intelligence alone serves no purpose, it becomes a formidable tool when combined with understanding intent, processing context, and learning from external knowledge. What's revolutionizing everything today is our ability to access intelligence through a simple API.
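Concretely, "intelligence through a simple API" is a few lines of code. Here's a minimal sketch using the OpenAI Python client (it needs an API key in the environment; the model name is a placeholder, and other providers expose near-identical interfaces):

```python
# Sketch: intelligence as an API call.
# Requires OPENAI_API_KEY in the environment; "gpt-4o" is a placeholder.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Summarize this paragraph: ..."},
    ],
)
print(response.choices[0].message.content)
```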
As noted, "intelligence requires a physical form" and operates within specific constraints. This distinction is key: human emotions and experiences are inseparable from our physical bodies. AI, lacking such embodiment, processes and responds to data in fundamentally different ways.
What makes AI special today isn't its depth of reflection or analysis, but rather its unique capabilities: the ability to instantly process vast amounts of information (like reading an entire book in seconds), explore multiple paths in parallel, and bridge knowledge across vastly different domains (like simultaneously writing code and poetry in a language unknown to the user).
Building a good intuition and understanding about how AI operates is crucial for creating successful products based on it.
Here are two links: one to build intuition about context window sizes, and another to build intuition about LLM limitations.
Connections
Connections are the pathways that transform decisions into actions, such as API access or ecosystem integrations.
While ChatGPT feels immensely useful for many tasks, AI's potential remains largely untapped, mostly because most tools don't let AI take direct actions. As already mentioned, very few tasks are purely language-based: in practice, real work lives in documents, forms, code, slides, images, schedules, maps, and more.
For instance, you can ask ChatGPT to explain how to use a spreadsheet for a specific goal. But what's the point if Gemini can edit the spreadsheet directly? You can ask ChatGPT to help you plan your holidays, but it can't book hotels or flights for you.
We are currently in an absurd moment where we must translate everything for AI, get its advice, and then translate that advice back into the medium where we're actually working. ChatGPT's main integration point is the clipboard: users copy and paste content back and forth. This inefficient workflow makes no sense, and many businesses and startups will flourish simply by streamlining these interactions.
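Mechanically, streamlining means letting the model emit a structured action that the product executes against a real system, instead of prose the user has to carry over by hand. A minimal, provider-agnostic sketch (the action format and the book_hotel stub are invented; in practice this is the "tool calling" layer most LLM APIs now expose):

```python
# Sketch: turning a model decision into a real action.
# book_hotel() and the JSON action format are invented for illustration.
import json

def book_hotel(city: str, nights: int) -> str:
    # Stub standing in for a real booking API call.
    return f"Booked {nights} nights in {city}."

ACTIONS = {"book_hotel": book_hotel}

# Pretend this JSON came back from the model instead of free-form advice.
model_output = '{"action": "book_hotel", "args": {"city": "Lisbon", "nights": 3}}'

decision = json.loads(model_output)
result = ACTIONS[decision["action"]](**decision["args"])
print(result)  # the advice became an action, with no copy-paste in between
```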
Trust
Trust is essential at all levels: users must trust the UX to understand their needs and context, trust the knowledge and data sources being used, and trust the system's decision-making ability. This trust becomes particularly vital when AI operates autonomously.
The first thing that ChatGPT and other AI tools tell their users is "AI can make mistakes." It's a well-known fact that AI can hallucinate. What's most troubling is how quickly some answers can become incoherent. To fully tap into AI's power, we want it to be as autonomous as possible. We want AI to have the agency to solve increasingly complex problems. There's no simple solution to this challenge, so let's break it down.
Today, we can see that hallucinations are becoming less frequent but haven't been eliminated. Through tools like Cursor and other coding assistants, we can observe AI's potential agency. While AI can impressively build complex UIs, it requires close supervision, as it can drift into nonsense. It's also highly sensitive to ambiguity: the smallest misunderstanding can lead to catastrophic results. In this context, trust is built by letting the user validate all the critical points.
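One way to make that validation concrete is a gate between proposal and execution: the model drafts, the user confirms, and only then does the system act. A minimal sketch (both functions are invented stubs):

```python
# Sketch: human-in-the-loop gate for critical actions.
# propose_change() stands in for a model call; apply_change() for a real effect.

def propose_change(task: str) -> str:
    return f"ALTER TABLE users DROP COLUMN legacy_id;  -- for: {task}"

def apply_change(change: str) -> None:
    print(f"Applied: {change}")

proposal = propose_change("clean up the users table")
print(f"Proposed change:\n  {proposal}")
if input("Apply? [y/N] ").strip().lower() == "y":
    apply_change(proposal)
else:
    print("Discarded.")
```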
Another dimension of trust concerns the intrinsic knowledge of LLMs. Can we trust OpenAI regarding their training data? Can we use the results without risking copyright issues? Curating external knowledge can help build trust on this point, in addition to potentially reducing hallucinations.
There's another crucial point: AI is probabilistic. Unlike classical code, you can't prove it will always give the right answer. It might be right 80% of the time, 99% of the time, or maybe 99.9% of the time. So the key question is: what accuracy level does your product or feature need to be viable?
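That question can be made quantitative by weighing the value of a correct answer against the cost of a wrong one. A back-of-the-envelope sketch, with all numbers invented for illustration:

```python
# Back-of-the-envelope: what accuracy does a feature need to be viable?
# Both numbers are invented for illustration.
benefit_correct = 10.0  # value of a correct autonomous action
cost_wrong = 200.0      # cost of an undetected wrong one

# Expected value per action: p * benefit - (1 - p) * cost.
# Break-even where p * benefit == (1 - p) * cost:
p_breakeven = cost_wrong / (benefit_correct + cost_wrong)
print(f"break-even accuracy: {p_breakeven:.1%}")  # ~95.2%

for p in (0.90, 0.99):
    ev = p * benefit_correct - (1 - p) * cost_wrong
    print(f"p={p:.0%}: expected value per action = {ev:+.1f}")
```

With these invented numbers, 90% accuracy destroys value and 99% creates it; the viability threshold depends entirely on how costly an undetected mistake is.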
Conclusion
The AI equation helps us understand that creating value with AI involves much more than just leveraging an LLM API. By focusing on every component (intent and context capture, external knowledge curation and integration, LLM capabilities, system connections, and trust building), we can develop more effective AI applications.
Success in AI product development comes from recognizing these interconnected elements and addressing them holistically. The companies that will thrive in the AI era won't be those using marginally better models, but those that excel at combining all these elements into seamless, trustworthy user experiences.