
Agentic AI is all about the context — engineering, that is

Presented by Elastic


As organizations rush to implement agentic AI solutions, accessing proprietary data from every nook and cranny will be critical

By now, most organizations have heard of agentic AI, systems that “think” by autonomously gathering tools, data, and other information sources to provide an answer. But here’s the problem: Reliability and relevance depend on providing accurate context. In most enterprises, this context is spread across various unstructured data sources, including documents, emails, business apps, and customer feedback.

As organizations look ahead to 2026, solving this problem will be critical to accelerating the rollout of agentic AI around the world, said Ken Exner, chief product officer at Elastic.

“People are starting to realize that to do agentic AI correctly, you need to have relevant data,” says Exner. “Relevance is critical in the context of agentic AI because that AI is taking action on your behalf. If people are struggling to build AI applications, I can almost guarantee that the problem is relevance.”

Agents everywhere

The push could be entering a make-or-break period as organizations vie for competitive advantage or new efficiencies. A study by Deloitte predicts that by 2026, more than 60% of large enterprises will have deployed agentic AI at scale, marking a major shift from experimental phases to mainstream deployment. And researcher Gartner predicts that 40% of all enterprise applications will contain task-specific agents by the end of 2026, up from less than 5% in 2025. By adding task-specialization capabilities, AI assistants are evolving into context-aware AI agents.

Enter context engineering

The process of getting the relevant context into agents at the right time is known as context engineering. Not only does it ensure that an agentic application has the data it needs to provide accurate, deep answers, it also helps the large language model (LLM) understand what tools it needs to find and use that data, and how to call those APIs.
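The core loop can be sketched in a few lines: score the available material for relevance, keep the best of it, and pack it together with tool descriptions into a single prompt for the model. This is a minimal, hypothetical illustration; the documents, tools, and the keyword-overlap scoring heuristic are all invented for the example, not any product's actual behavior:

```python
# Minimal context-engineering sketch: rank snippets by a toy relevance
# score, then assemble the best of them plus tool descriptions into one
# prompt. All data and the scoring heuristic are illustrative assumptions.

def score(query: str, text: str) -> int:
    """Toy relevance score: count query terms that appear in the text."""
    terms = query.lower().split()
    return sum(term in text.lower() for term in terms)

def build_context(query: str, documents: list[str], tools: dict[str, str],
                  max_docs: int = 2) -> str:
    """Pick the top-scoring documents and list the available tools."""
    ranked = sorted(documents, key=lambda d: score(query, d), reverse=True)
    doc_block = "\n".join(f"- {d}" for d in ranked[:max_docs])
    tool_block = "\n".join(f"- {name}: {desc}" for name, desc in tools.items())
    return (f"Question: {query}\n\n"
            f"Relevant documents:\n{doc_block}\n\n"
            f"Available tools:\n{tool_block}")

docs = [
    "Refund policy: customers may return items within 30 days.",
    "Shipping times average 3-5 business days.",
    "Our headquarters are in Springfield.",
]
tools = {"lookup_order": "Fetch an order's status by order ID."}
prompt = build_context("What is the refund policy?", docs, tools)
print(prompt)
```

In production the toy `score` function would be replaced by real retrieval (keyword, vector, or hybrid search), but the assembly step looks much the same.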


While there are now open source standards, such as the Model Context Protocol (MCP), that allow LLMs to connect to and interact with external data, few platforms let organizations build precision AI agents that use their own data and combine retrieval, management, and orchestration in one place.
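For a rough sense of the standardized plumbing: MCP is built on JSON-RPC 2.0, and clients discover and invoke tools with methods such as `tools/list` and `tools/call`. The sketch below constructs those request messages by hand; the tool name and its arguments are hypothetical:

```python
import json

# Hand-built JSON-RPC 2.0 messages of the kind an MCP client sends.
# The method names follow the MCP spec; the tool and its arguments
# are invented for illustration.

def jsonrpc_request(req_id: int, method: str, params: dict = None) -> str:
    msg = {"jsonrpc": "2.0", "id": req_id, "method": method}
    if params is not None:
        msg["params"] = params
    return json.dumps(msg)

# Ask the server which tools it exposes.
list_tools = jsonrpc_request(1, "tools/list")

# Invoke a (hypothetical) search tool over a private index.
call_tool = jsonrpc_request(2, "tools/call", {
    "name": "search_docs",
    "arguments": {"query": "refund policy", "size": 3},
})

print(list_tools)
print(call_tool)
```

Real clients would use an MCP SDK rather than raw messages, but the wire format underneath is this simple.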

Elasticsearch has long been a leading platform for the retrieval core of context engineering. Elastic recently released a new feature within Elasticsearch called Agent Builder that simplifies the entire operational lifecycle of agents: development, configuration, execution, customization, and observability.

Agent Builder helps build MCP tools on private data using various techniques, including ES|QL (Elasticsearch Query Language), a piped query language for filtering, transforming, and analyzing data, or workflow modeling. Users can then combine those tools with prompts and an LLM to build an agent.
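To give a flavor of the piped query language, here is a small ES|QL example: filter one kind of record, aggregate, sort, and limit, each stage piped into the next. The index and field names are invented, and the client call is sketched against the official Elasticsearch Python client (it assumes a reachable cluster, so it is left behind a guard):

```python
# A hypothetical ES|QL query: filter open support tickets, then count
# them per product. Index and field names are illustrative only.
ESQL_QUERY = """
FROM support-tickets
| WHERE status == "open"
| STATS open_count = COUNT(*) BY product
| SORT open_count DESC
| LIMIT 10
"""

def run_query(es_client):
    # Sketch of the call via the official Elasticsearch Python client;
    # requires elasticsearch>=8.11 and a reachable cluster.
    return es_client.esql.query(query=ESQL_QUERY)

if __name__ == "__main__":
    # from elasticsearch import Elasticsearch
    # client = Elasticsearch("http://localhost:9200")
    # print(run_query(client))
    print(ESQL_QUERY.strip())
```

Each `|` stage narrows or reshapes the result of the previous one, which is what makes such queries easy to generate and compose as agent tools.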

Agent Builder provides a configurable, out-of-the-box conversation agent that lets you chat with the data in the index, and also gives users the option to build one from scratch using various tools and prompts on top of private data.

“Data is the center of our world at Elastic. We try to make sure you have the tools you need to put that data to work,” Exner explains. “As soon as you open Agent Builder, you point it to an index in Elasticsearch and you can start chatting with any data you connect it to, any data indexed in Elasticsearch – or from external sources through integrations.”

Context engineering as a discipline

Prompt and context engineering is becoming a discipline in its own right. It doesn’t require a computer science degree, but because it remains as much art as science, expect more lessons and best practices to emerge.


“We want to make it really easy to do that,” says Exner. “What people are going to have to figure out is, how do you drive automation with AI? That’s what’s going to drive productivity. The people who focus on that are going to see more success.”

In addition, new context engineering patterns will emerge. The industry has already moved from prompt engineering to retrieval-augmented generation (RAG), where retrieved information is passed to the LLM in its context window, and on to MCP-based approaches that help LLMs select and call tools. But it won’t stop there.

“Given the speed at which things are evolving, I guarantee new patterns will emerge fairly quickly,” Exner says. “There will still be context engineering, but it will be new patterns for sharing data with an LLM, how to make sure it’s based on the right information. And I predict more patterns that allow the LLM to understand private data it hasn’t been trained on.”

Agent Builder is now available in technical preview. Get started with an Elastic Cloud trial and view the documentation for Agent Builder here.


Sponsored articles are content produced by a company that pays for the post or has a business relationship with VentureBeat, and is always clearly marked. For more information please contact sales@venturebeat.com.
