
For 24 years, Microsoft’s Amanda Silver has been working to help developers, and in the past few years that has meant building AI tools. After a long stint on GitHub Copilot, Silver is now a vice president in Microsoft’s CoreAI division, where she works on software delivery tools and enterprise agents.
Her work centers on the Foundry system within Azure, which is designed as a unified portal for enterprise AI. That gives her an overview of how companies are using these systems and where deployments may be stalling.
I spoke with Silver about the potential of enterprise agents, and why she believes this is the biggest opportunity for startups since the move to the public cloud.
This interview has been edited for length and clarity.
So, your work focuses on Microsoft’s products for third-party developers, often startups that aren’t focused on AI themselves. How do you see AI affecting those companies?
I see this as a watershed moment for startups, much like the move to the public cloud. If you think about it, the cloud had a huge impact on startups because it meant they no longer needed space to rack their own hardware, and they didn’t have to spend a lot of capital up front to get equipment running in their own facilities. Everything became cheaper. Now agentic AI will further reduce the overall cost of starting something, because many of the tasks involved in standing up a new venture, whether it’s customer support or legal research, can be done quickly and cheaply with AI agents. I think that will lead to more businesses and more startups. Then we’ll see more valuable startups with fewer people at the helm. And I think that’s an interesting world.
What does that look like in practice?
We’re certainly seeing multistep agents used more and more for all kinds of different work. As an example, one thing developers need to do to maintain a codebase is stay current with the libraries it depends on. You might be relying on an older version of .NET or the Java SDK. We can now have systems do that across your entire codebase and bring the new versions in easily, maybe in 70% or 80% less time than it used to take. And that has to be a multistep agent to pull it off.
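The multistep shape Silver describes can be sketched as an explicit pipeline, where each stage must succeed before the next runs. The step names and stubbed checks below are my own illustration of that pattern, not Microsoft’s actual tooling:

```python
# Hypothetical multistep dependency-update agent: each stage is an
# explicit step, and a failure at any stage stops the run before
# any change is proposed for human review.

from dataclasses import dataclass
from typing import Callable


@dataclass
class Step:
    name: str
    run: Callable[[], bool]  # returns True on success


def find_outdated() -> bool:
    print("scanning manifests for outdated libraries...")
    return True  # stub: a real agent would inspect lockfiles


def apply_upgrade() -> bool:
    print("bumping versions and patching breaking API calls...")
    return True  # stub: a real agent would edit the codebase


def run_tests() -> bool:
    print("running the test suite against the upgraded code...")
    return True  # stub: a real agent would invoke the build/test system


def open_pr() -> bool:
    print("opening a pull request for human review...")
    return True  # stub: the final gate stays with a person


def run_pipeline(steps: list[Step]) -> bool:
    for step in steps:
        if not step.run():
            print(f"stopped at step: {step.name}")
            return False
    return True


ok = run_pipeline([
    Step("find-outdated", find_outdated),
    Step("apply-upgrade", apply_upgrade),
    Step("run-tests", run_tests),
    Step("open-pr", open_pr),
])
print("pipeline succeeded:", ok)
```

Keeping the stages explicit is what makes the time savings safe to claim: the agent can stop early and surface exactly where an upgrade broke, rather than handing a developer an opaque diff.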
Site reliability is another one. If you’re maintaining a website or service and something goes wrong, there’s a lot of noise at night, and someone has to wake up to figure out what happened. We’ll still have people on call around the clock in case the service goes down. But it was a thankless job, because you’d be woken up repeatedly for small incidents. Now we’ve built agentic systems that are better at detecting, and often mitigating, the problems that come up on live sites, so people don’t have to wake up in the middle of the night and try to work out what’s going on. That also helps us significantly reduce the time it takes to resolve a problem.
One of the surprises of this moment is that agent deployment hasn’t happened as quickly as we expected even six months ago. I’m curious why you think that is.
If you look at the people deploying agents, what keeps them from being successful often comes down to not really knowing what the goal of the agent should be. There’s a cultural change that needs to happen in how people build these agents. What business problem are they trying to solve? What are they trying to achieve? You need a clear picture of what success means for this agent. And you have to ask: what data am I giving the agent so it can figure out how to complete the task?
We see those things as bigger stumbling blocks than general wariness about letting agents be deployed. Anyone who actually evaluates these systems will see a return on investment.
You mention general wariness, which from the outside sounds like a big barrier. Why do you see it as less of a problem?
First, I think it will become more common for agentic systems to keep a human in the loop. Consider something like processing product returns. You could have a returns workflow that is 90% automated and 10% human intervention, where someone has to look at the package and make a judgment about how it was damaged before deciding to accept the return.
It’s a great example because computer vision is getting so good that, most of the time, we don’t need a human to review and confirm. There will still be some borderline cases, where maybe computer vision isn’t good enough to make the call, and maybe there’s an escalation. It’s like: how often do you have to call the manager?
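The 90/10 split Silver describes is essentially a confidence-threshold gate. Here is a minimal sketch of that routing pattern; the `classify_damage` stub, its labels, and the threshold value are all hypothetical, standing in for whatever vision model and policy a real returns system would use:

```python
# Hypothetical human-in-the-loop gate for a returns workflow:
# the agent auto-approves high-confidence decisions and escalates the rest.

CONFIDENCE_THRESHOLD = 0.9  # below this, a person reviews the case


def classify_damage(photo: bytes) -> tuple[str, float]:
    """Stand-in for a computer-vision model: returns (label, confidence)."""
    # A real system would call a vision model here; this is stubbed.
    return ("damaged_in_transit", 0.97)


def route_return(photo: bytes) -> str:
    label, confidence = classify_damage(photo)
    if confidence >= CONFIDENCE_THRESHOLD:
        # The ~90% path: the agent decides on its own.
        return f"auto-approved ({label}, {confidence:.0%} confident)"
    # The ~10% path: escalate to a human reviewer ("call the manager").
    return "escalated to human review"


print(route_return(b"fake-photo-bytes"))
```

The interesting design question Silver raises is exactly where that threshold sits, and how often the escalation branch fires in practice.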
There are some things that will always need human oversight, because the stakes are high. Think about signing off on contractual obligations, or merging code into a codebase in a way that could affect the reliability of your system. But even then, there’s the question of how far we can go in automating the whole process.