Why Your AI Team Doesn’t Need a Manager: Rethinking Agentic AI Workflows for Scalability and Efficiency

9/2/2025

Audience: CEOs, CFOs, COOs, Chief Supply Chain Officers, CIOs/CTOs, and supply chain enthusiasts
Read time: ~10 minutes

Imagine you’re watching a school play. Every student has a role: someone is the lead actor, someone else handles the props, another adjusts the lights. But there’s always that one person offstage with the clipboard, the director, deciding who comes in when. That’s what many companies are doing with agentic AI workflows: they design them like theater productions, with one orchestrator agent as the director and everyone else waiting for instructions.

At first glance, this feels natural. After all, that’s how humans work in teams. But look more closely and the metaphor doesn’t quite fit. Computers aren’t people. They don’t need coffee breaks, they don’t get confused when five things happen at once, and they don’t need a boss to tell them when to breathe. Yet across frameworks like AutoGen and LangGraph, you’ll see the same pattern repeated: a “manager” or “supervisor” agent that funnels every decision through itself. It’s simple to explain in a PowerPoint pitch, but dangerously inefficient at scale.


The Rise of the “Orchestrator Agent”

Take AutoGen (2023). Microsoft researchers built it so that multiple agents could talk to each other, but instead of communicating directly, everything goes through a GroupChatManager, essentially the orchestrator of the conversation. Around the same time, LangGraph (2024) released tutorials where you create a “Supervisor” agent to control the others, as if you were setting up a mini office. And even earlier, in HuggingGPT (2023), the orchestrator model was the “brain” deciding which tool or model to call, much like a dispatcher sending out jobs.

Consulting companies picked this up quickly. If you’re pitching to a senior executive, it’s comforting to say, “Here’s our AI team: a project manager, a researcher, a coder, and a tester.” It’s familiar. Executives can point at the chart and say, “Yes, that’s how we work.” It reduces anxiety. But it also sneaks all the inefficiencies of human organizations into machine workflows.


Why Copying Human Teams is a Trap

There’s an old adage in computer science: “Don’t build a platform inside a platform.” That’s called the inner-platform effect. It happens when people take something flexible, like AI, and force it to behave like the system they already know. In this case, executives and consultants are living out Conway’s Law: systems reflect the communication structure of the organizations that build them. If the execs are used to hierarchies, they design hierarchies, even for AI.

But think about what happens when you do that. Every message between agents has to pass through the boss. That’s like a classroom where kids raise their hands, whisper their answers to the teacher, and then the teacher repeats them out loud. It doubles the talking time. In computing terms, you’re adding an extra LLM call at every step, sometimes two. Multiply that by hundreds of subtasks, and you’ve turned what could be a fast parallel process into a slow, serial one.

This isn’t just wasteful. It’s also fragile. If the orchestrator agent stalls, the whole system freezes. If it makes a wrong decision, every worker goes off-track. And as tasks grow more complex, the dependencies between agents pile up like tangled Christmas lights: you can’t pull one bulb without dragging the whole chain.


The Scalability Problem: Amdahl’s Law in Action

Here’s where a bit of computer science sneaks back in. In the 1960s, Gene Amdahl described a rule now known as Amdahl’s Law: the speedup you get from parallelizing a system is limited by the parts that still run in serial. Imagine you can do 90% of a task in parallel, but 10% must be done in order. No matter how many workers you add, you’ll never go faster than 10x, because that last 10% bottleneck is unavoidable.

When you design agentic AI with a single orchestrator, you’re making that orchestrator part of the serial bottleneck. Every agent has to wait for the boss to read, decide, and route. Even if you have ten agents capable of working independently, you won’t get the full benefit. It’s like widening the highway but leaving the toll booth at one lane.
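The ceiling is easy to verify with a few lines of arithmetic. This sketch plugs the 90%-parallel example into Amdahl’s formula, speedup = 1 / (serial + parallel/workers):

```python
def amdahl_speedup(parallel_fraction: float, workers: int) -> float:
    """Max achievable speedup when only `parallel_fraction` of the work parallelizes."""
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / workers)

# With 90% parallel work, adding workers gives sharply diminishing returns:
for n in (2, 10, 100, 1000):
    print(f"{n:>4} workers -> {amdahl_speedup(0.9, n):.2f}x")
# 2 -> 1.82x, 10 -> 5.26x, 100 -> 9.17x, 1000 -> 9.91x; the ceiling is 10x.
```

Ten times the workers, barely five times the speed: that is the toll booth at work.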


Real Alternatives: What Research Suggests

So, what’s the better way? Let’s look at some fresh ideas from recent research.

1. Graphs Instead of Bosses

In 2023, researchers proposed Graph of Thoughts, showing that when you let reasoning steps branch out and reconverge, like paths in a directed graph rather than a straight line, you solve complex problems more efficiently. Frameworks like LangGraph now let you design workflows as state graphs: each node fires when its inputs are ready, no boss required. This looks less like an office and more like a relay race, where runners start as soon as they get the baton, not when a coach shouts “go.”
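The idea fits in a few lines of plain Python. In this sketch (the task graph and worker lambdas are invented for illustration, not taken from any framework), each node launches the moment all of its dependencies have produced results, and independent nodes run in parallel with no supervisor on the critical path:

```python
from concurrent.futures import FIRST_COMPLETED, ThreadPoolExecutor, wait

# Hypothetical workflow: name -> (dependencies, work function).
# Each function receives the results of its dependencies, in order.
graph = {
    "fetch":     ([], lambda: "raw data"),
    "clean":     (["fetch"], lambda raw: raw.upper()),
    "summarize": (["fetch"], lambda raw: raw[:3]),
    "report":    (["clean", "summarize"], lambda c, s: f"{c} / {s}"),
}

def run_graph(graph):
    """Fire each node as soon as its inputs are ready; no orchestrator."""
    results, pending, futures = {}, dict(graph), {}
    with ThreadPoolExecutor() as pool:
        while pending or futures:
            # Launch every node whose dependencies are all satisfied.
            ready = [name for name, (deps, _) in pending.items()
                     if all(d in results for d in deps)]
            for name in ready:
                deps, fn = pending.pop(name)
                futures[pool.submit(fn, *(results[d] for d in deps))] = name
            # Collect whatever finishes first, then loop to launch more.
            done, _ = wait(futures, return_when=FIRST_COMPLETED)
            for fut in done:
                results[futures.pop(fut)] = fut.result()
    return results
```

Here “clean” and “summarize” run concurrently the instant “fetch” finishes; nothing ever waits for a manager to read and route.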

2. Routers and Cascades

Another approach comes from FrugalGPT (2023), which tested the idea of “routers.” Instead of a boss agent checking every step, you use a small model or a simple function to decide which path to take: cheap first, expensive later if needed. For example, a support bot might start with a lightweight model and only escalate to GPT-4 if it isn’t confident. This saves cost and cuts latency, because not every question needs a board meeting.
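A minimal cascade looks like the sketch below. The two “models” are stand-in functions and the confidence threshold is an assumption, but the shape is the point: a cheap path that handles most queries, and an expensive path reached only on low confidence.

```python
def cheap_model(question: str):
    """Stand-in for a small, fast model. Returns (answer, confidence)."""
    if "password" in question:
        return "reset your password", 0.9
    return None, 0.2  # not confident outside its narrow expertise

def expensive_model(question: str):
    """Stand-in for a large, slow, costly model."""
    return f"detailed answer to: {question}", 0.99

def answer(question: str, threshold: float = 0.8) -> str:
    ans, confidence = cheap_model(question)
    if confidence >= threshold:
        return ans                      # cheap path: most queries stop here
    ans, _ = expensive_model(question)  # escalate only when unsure
    return ans
```

The router here is just a threshold comparison, not another LLM call, so the routing itself adds almost no latency.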

3. Mixture-of-Agents Panels

Sometimes diversity is valuable. Instead of one boss telling workers what to do, you can let multiple agents try in parallel, then compare results. This is the Mixture-of-Agents approach: imagine five friends guessing an answer on a game show, then voting. No one is in charge, but the group performs better. This structure avoids the manager bottleneck and scales well because the heavy work happens in parallel.
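A toy version of the pattern, with hard-coded stand-ins where a real deployment would query different LLMs in parallel and then aggregate:

```python
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

# Illustrative stand-ins for independent agents; real ones would call models.
agents = [
    lambda q: "Paris",
    lambda q: "Paris",
    lambda q: "Lyon",
    lambda q: "Paris",
    lambda q: "Marseille",
]

def mixture_of_agents(question: str) -> str:
    """Run every agent concurrently, then take a majority vote; no manager."""
    with ThreadPoolExecutor() as pool:
        answers = list(pool.map(lambda agent: agent(question), agents))
    return Counter(answers).most_common(1)[0][0]
```

Voting is one simple aggregator; published Mixture-of-Agents work also uses a model to synthesize the parallel drafts, but either way the expensive work happens concurrently.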

4. The Blackboard Model

This one goes back decades in AI history. In the blackboard architecture, agents don’t talk to a manager: they post their findings to a shared board. Other agents watch the board and jump in when the conditions match their expertise. It’s like detectives pinning clues on a wall and letting anyone with insight connect the dots. Modern vector databases or state stores can serve as the blackboard, and agents can subscribe to changes.
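Here is a deliberately tiny in-memory sketch of the idea (the detective “agents” and their trigger conditions are invented for illustration): each agent registers a condition, watches the shared board, and fires when its condition matches, until no one has anything left to add.

```python
agents = []  # the registry of (trigger, handler) pairs

def agent(trigger):
    """Decorator: register a handler that fires whenever `trigger(board)` is true."""
    def register(fn):
        agents.append((trigger, fn))
        return fn
    return register

@agent(lambda b: "fingerprint" in b["clues"] and "suspect" not in b)
def identify_suspect(b):
    b["suspect"] = "the butler"

@agent(lambda b: "suspect" in b and "verdict" not in b)
def reach_verdict(b):
    b["verdict"] = f"{b['suspect']} did it"

def run(board, max_rounds=10):
    """Let agents react to the board until it stops changing (quiescence)."""
    for _ in range(max_rounds):
        fired = False
        for trigger, fn in agents:
            if trigger(board):
                fn(board)
                fired = True
        if not fired:  # no agent has anything to add; we are done
            break
    return board
```

No agent knows the others exist; coordination happens entirely through the shared state, which is what makes the pattern easy to extend with new specialists.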

5. One Smart Agent with Tools

Finally, sometimes you don’t need a team at all. Research like ReAct (2022) and Reflexion (2023) shows that a single agent, if scaffolded properly with reasoning, acting, and self-correction, can outperform a whole “team” of chattering sub-agents. A great example is SWE-agent (2024), which fixes bugs in code repositories using one agent with access to tools, tests, and memory. It’s lean, efficient, and avoids the orchestration overhead.
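The loop behind a ReAct-style agent is simple enough to sketch. Below, the “think” step and the tools are toy stand-ins where a real agent would call an LLM, but the reason-act-observe cycle is the actual pattern:

```python
# Toy tools; a real agent might have a code runner, a test suite, search, etc.
tools = {
    "calculate": lambda expr: str(eval(expr, {"__builtins__": {}})),
    "finish": lambda answer: answer,
}

def think(question: str, observations: list):
    """Stand-in policy: pick the next (tool, argument). A real agent asks an LLM."""
    if not observations:
        return "calculate", "6 * 7"
    return "finish", f"The answer is {observations[-1]}"

def react_loop(question: str, max_steps: int = 5):
    observations = []
    for _ in range(max_steps):
        tool, arg = think(question, observations)  # reason about what to do next
        result = tools[tool](arg)                  # act by invoking a tool
        if tool == "finish":
            return result
        observations.append(result)                # observe, then loop
```

One agent, one loop, zero inter-agent messages: all the budget goes into the work itself rather than coordination.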


Why Executives Keep Falling for the “AI Org Chart”

So why does this keep happening? Partly because it’s easier to sell. Imagine walking into a boardroom and showing a slide that says: “Here’s our AI CEO, AI PM, AI Developer.” Executives nod, because it mirrors the structure they know. It’s comforting to see their world reflected in software. It’s Conway’s Law in action: organizations design systems that mirror their communication patterns.

But that comfort comes at a cost. It locks companies into inefficient architectures. It wastes compute, slows response times, and makes scaling harder than it should be. It’s like insisting your self-driving car still needs someone to wave flags at every intersection, just because that’s how traffic once worked.


The Way Forward: Design for Machines, Not People

The key lesson is simple: don’t confuse human metaphors with machine realities. Computers don’t need bosses. They need clear dependency graphs, fast routers, and efficient communication structures. Instead of designing “AI teams,” design AI workflows, optimized for throughput, cost, and resilience.

For your company, this means starting small:

  • Begin with a single smart agent baseline and add complexity only when it clearly adds value.
  • If you must use multiple agents, design them around a graph or blackboard, not a hierarchy.
  • And always measure the serial bottleneck. If your orchestrator sits on the critical path at more than a handful of steps per task, you’ve built a toll booth in the middle of your highway.

Agentic AI has the potential to transform work, but only if we break free from the metaphors that limit us. The future isn’t about building little AI offices inside your company. It’s about building systems that think, act, and scale the way machines are meant to: fast, parallel, and efficient.