Box CEO Aaron Levie on AI’s ‘era of context’

On Thursday, Box kicked off its developer conference, BoxWorks, by announcing a new set of AI features that build agentic AI models into the backbone of the company's products.
There are more product announcements than usual for the conference, a result of the company's consistently fast pace of AI development: Box launched its AI Studio last year, followed by a set of data extraction agents in February and agents for search and deep research in May.
Now the company is rolling out a new system called Box Automate, which works as a kind of operating system for AI agents, breaking workflows into segments that can be handed off to AI where necessary.
I spoke with CEO Aaron Levie about the company's approach to AI and the tricky business of competing with foundation model companies. Unsurprisingly, he was very bullish on the possibilities for AI agents in the modern workplace, but he was also clear-eyed about the limitations of current models and how to manage those limitations with existing technology.
This interview has been edited for length and clarity.
You're announcing a number of AI products today, so I want to start by asking about the big vision. Why build AI agents into a cloud content service?
So the thing we think about all day, and what our focus is at Box, is how much work is changing because of AI. And right now the vast majority of the impact is on workflows involving unstructured data. We've already been able to automate everything that deals with structured data going into a database. If you think about CRM systems, ERP systems, HR systems, we've had automation in that space for years. But where we've never had automation is anything that touches unstructured data.
Think about every kind of legal review process, every kind of marketing asset management process, every form of M&A deal review: all of those workflows involve a lot of unstructured data. People have to review that data, make updates to it, make decisions, and so on. We've never been able to bring much automation to those workflows. We've been able to describe them in software, but computers just haven't been good enough at reading a document or looking at a marketing asset.
So for us, AI agents mean we can finally tap into all of this unstructured data for the first time.
What about the risks of using agents in a business context? Some of your customers must be nervous about turning something like that loose on sensitive data.
What we've seen from customers is that they want to know that every time they run that workflow, the agent will perform in more or less the same way, at the same point in the workflow, and not let things go off the rails. You don't want an agent making compounding mistakes where, after the first couple hundred submissions, it starts to drift off in some wild direction.
It's really important to have the right demarcation points, where the agent starts and the other parts of the system end. For every workflow, there's this question of what needs deterministic guardrails and what can be fully agentic and non-deterministic.
What you can do with Box Automate is decide how much work you want each individual agent to do before it hands off to another agent. So you can have an intake agent that's separate from the review agent, and so on. With this, you can essentially deploy AI agents at scale on any kind of workflow or business process in the organization.
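The pattern Levie describes, scoped agents chained by deterministic handoffs, can be sketched roughly as follows. This is a hypothetical illustration in plain Python, not Box's actual API; the stage names, agent functions, and guardrail logic are all invented for the example.

```python
# Hypothetical sketch: a workflow split into scoped agents with
# deterministic handoffs between stages. The orchestration code (not a
# model) decides when one agent ends and the next begins, so a mistake
# in one stage can't silently compound across the whole workflow.

def intake_agent(document: str) -> dict:
    """Stand-in for an AI intake agent: extract structured fields."""
    return {"doc": document, "fields": {"type": "contract"}}

def review_agent(extracted: dict) -> dict:
    """Stand-in for an AI review agent: assess fields, flag issues."""
    known = extracted["fields"].get("type") == "contract"
    extracted["flags"] = [] if known else ["unknown document type"]
    return extracted

def run_workflow(document: str) -> dict:
    # Deterministic guardrail between stages: if intake produced
    # nothing usable, stop and escalate rather than let the next
    # agent improvise on bad input.
    extracted = intake_agent(document)
    if not extracted["fields"]:
        raise ValueError("intake failed; route to a human reviewer")
    return review_agent(extracted)

result = run_workflow("Sample NDA text...")
```

The design point is that each agent's scope is bounded up front, so the non-deterministic parts stay inside stages while the handoffs between them remain predictable.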

What kinds of problems does splitting up the workflow solve?
We've already seen some limitations, even in the most advanced fully agentic systems like Claude Code. At a certain point in a task, the model runs out of context-window room to keep making good decisions. There's no free lunch in AI right now. You can't just have one long-running agent with an unlimited context window chasing a task across your company. So you have to break up the workflow and use subagents.
I think we're in the era of context within AI. What AI models and agents need is context, and the context they need to complete tasks lives in your unstructured data. Our entire system is really designed to figure out what context to give the AI agent so that it performs as effectively as possible.
There's a larger debate in the industry about the merits of large, powerful frontier models versus models that are smaller and more reliable. Does this put you on the side of the smaller models?
I should probably clarify: nothing about our system prevents a task from being arbitrarily long or complex. What we're trying to do is build the right guardrails, so you can decide how agentic you want that task to be.
We don't have a particular philosophy about where people should sit on that continuum. We're just trying to design a future-proof architecture. We've designed this so that as models improve and as agentic capabilities improve, you get all of those benefits directly on our platform.
The other concern is data control. Because models are trained on so much data, there's a real fear that sensitive data will be ingested or misused. How do you address that?
Many AI implementations go wrong right there. People think, "Hey, this is easy. I'll give an AI model access to all my unstructured data, and it will answer questions for people." And then it starts giving you answers based on data you don't have access to, or shouldn't have access to. You need a very powerful layer that handles access controls, data security, permissions, data governance, compliance, everything.
So we benefit from the couple of decades we've spent building a system that handles exactly that problem: how do you make sure only the right people have access to each piece of data in the company? So when an agent answers a question, you know deterministically that it can't draw on data that person shouldn't have access to. That's just fundamentally built into our system.
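The deterministic guarantee Levie describes amounts to filtering what an agent can retrieve by the asking user's access rights before the model ever sees any data. A minimal sketch of that idea, with invented names and records (this is not Box's implementation):

```python
# Minimal sketch of permission-aware retrieval: the candidate context
# for the agent is filtered by the requesting user's access rights
# *before* any model call, so the answer deterministically cannot draw
# on documents the user isn't allowed to read. All data is invented.

DOCUMENTS = [
    {"id": 1, "text": "Q3 revenue forecast", "allowed": {"alice", "bob"}},
    {"id": 2, "text": "Acquisition term sheet", "allowed": {"alice"}},
]

def retrieve_for_user(user: str, query: str) -> list[str]:
    """Return only matching documents the user is permitted to read."""
    return [
        d["text"]
        for d in DOCUMENTS
        if user in d["allowed"] and query.lower() in d["text"].lower()
    ]

# Bob's agent can never be handed the term sheet, no matter what the
# model would otherwise generate: the enforcement sits outside the model.
print(retrieve_for_user("bob", "forecast"))    # matches doc 1
print(retrieve_for_user("bob", "term sheet"))  # empty list: no access
```

The key design choice is that enforcement lives in the deterministic retrieval layer, not in the prompt, which is why it can be guaranteed rather than merely encouraged.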
Earlier this week, Anthropic released a new feature for uploading files directly to Claude.ai. It's a far cry from the kind of file management Box does, but you must be thinking about potential competition from the foundation model companies. How do you approach that strategically?
So if you think about what companies need when they deploy AI at scale, they need security, permissions, and control. They need the user interface, they need powerful APIs, and they want their choice of AI models, because one day one AI model will power a use case better than another, but that can change, and they don't want to be locked into a particular platform.
So what we've built is a system that gives you all of those capabilities effectively. We do the storage, the security, the permissions, the vector embeddings, and we connect to every leading AI model out there.




