Why Agentic AI Will Replace Traditional Automation by 2026

Posted by Raul Smith
Nov 17, 2025

It’s 6 a.m. and I’m staring at my laptop. I’ve read the phrase “agentic AI” maybe 40 times in the last hour, and I still don’t know whether it’s something real or just another buzzword. Another sales spiel pretending to be new.


Let me go back.


I’m the Director of Operations for an e-commerce retailer based out of Denver. We sell camping goods, outdoor gear, that kind of thing. We were just acquired, which means new leadership, new “strategic priorities,” and, surprise, a mandate to cut operating expenses by 30% while somehow magically improving service. Isn’t that always the way?


So, I’ve been checking out automation. The usual suspects: classic RPA tools, workflow software, etc. We got quoted $180k for a robotic process automation system capable of handling order routing and some customer service tasks. It seemed…okay? Not exciting, but it gets the job done. The kind of machine that does exactly what you tell it to do and nothing more.


Two weeks ago, someone in my LinkedIn network shared an article about “agentic AI” and how it’s supposed to replace conventional automation within the next few years. I told myself, “Fine, here we go again.” Another technology trend I’m supposed to care about.


But as I continued reading, I paused: wait. Could this one really be different?

The Part Where I Realize I Don't Really Get Automation

I thought that automation meant defining a process, writing rules for it, and having the system follow those rules. Customer places an order. System checks inventory. If it's in stock, payment is processed and sent to fulfillment. Simple. Deterministic. You can count on it.
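That kind of pipeline can be sketched in a few lines. This is an illustrative example only; the field names and rules are hypothetical, not taken from any particular RPA product:

```python
# Sketch of deterministic, rule-based automation: same input, same
# output, every time. Anything the rules don't cover goes to a human.
def route_order(order: dict) -> str:
    """Route an order through fixed rules; escalate anything unusual."""
    if not order["in_stock"]:
        return "escalate_to_human"   # no rule covers backorders
    if order["billing_address"] != order["shipping_address"]:
        return "escalate_to_human"   # address mismatch -> exception queue
    if not order["payment_authorized"]:
        return "escalate_to_human"
    return "send_to_fulfillment"     # the happy path the rules cover

order = {
    "in_stock": True,
    "billing_address": "123 Main St",
    "shipping_address": "123 Main St",
    "payment_authorized": True,
}
print(route_order(order))  # -> send_to_fulfillment
```

Notice that every branch either matches a rule or gives up. That "give up" branch is where the other 40 percent of our orders live.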


And it works! Automation handles 60 percent of our routine orders with no human intervention. We’ve been using basic forms of it for years.


The problem is that the other 40 percent are weird. The customer wants to ship the order to two different addresses. The item is on backorder but a similar one is available in stock. Payment gets flagged if billing and shipping addresses do not match. All those edge cases that do not fit into any ruleset.


Traditional automation just stops. It passes the case to a human. So my team is always working exceptions, and our error rate has been climbing because we’re understaffed, overworked, and, frankly, burned out.


What first caught my interest was the idea that agentic AI does not simply follow rules. It makes choices based on the situation. Bounded choices, yes, but still choices. It weighs context and works things out toward a goal instead of executing a script.


Okay, but how? How is that any different from just a really well-written program?

This Particular Bit of Research Made Me Stop Scrolling

A late-2024 McKinsey study laid out the difference between plain automation and agentic AI better than anything else I’d read. And the numbers were wild.


They tracked, over one year, thirty-eight enterprises that swapped out RPA for agentic-AI systems. The legacy automation handled about 55-60% of tasks with zero human intervention. The agentic systems? 87% on average. That’s a massive jump.


What really struck me was that it wasn’t just about volume. It was the nature of the work. The agentic systems took on the hard parts: exceptions, judgment calls, and scenarios that previously would have required someone experienced to look at them and decide what to do.


One example involved a retailer that used an agentic system for customer refund requests. A traditional rule engine would check whether the return met policy and approve or refuse accordingly. The agentic system looked at the customer's order history, return patterns, the stated reason for the return, even the tone of the customer's message. And it made judgment calls: this customer has been with us for three years, has never returned anything, and sounds genuinely upset, so approve the refund even though it's past the 30-day window.


That’s not following the rules. That’s…a decision? I guess?


McKinsey says that organizations that implemented these technologies reported customer satisfaction scores increasing by an average of 22 percent while operational costs decreased by 31 percent.


Thirty-one percent. That is exactly the number my new managers want.

The Part That Makes Me Nervous

Of course, I am interested. I am also worried, so here goes: what happens to my team if AI can make those kinds of choices? I have twelve people who have been with me for a long time. Good people. Smart people who handle all those complicated exceptions right now.


Do they just not work anymore?


I spoke with my VP about this just last week. He gave me the standard line: "It's not a matter of replacing people; it's a matter of making them better. Your staff will work on things that are more valuable."


That always sounds so great in theory. But what does it really mean, practically? When AI is doing the hard work, what exactly is the "higher-value work"?


I don't have a good answer for that yet. And frankly, neither does anyone else I know.


This report came out of MIT, I think at the beginning of 2025. They looked at how work could be displaced by agentic AI across different occupations. Their estimate was that by 2028 about eighteen percent of current operations and administrative jobs would either be gone or "significantly restructured". Eighteen percent. That's not nothing.


But they also found that firms that invested in retraining saw productivity gains of as much as 40% and created entirely new AI-centric roles built around monitoring, exception handling, and strategic planning.


So maybe there is? Or isn't? Jury's still out in my head.

The Denver Connection (I Didn't See Coming)

Here’s an odd but maybe useful aside: I went to a “Women in Tech Leadership” meetup last month. I keep going to these things even though I mostly feel out of place and stand around awkwardly until someone talks to me or I work up the nerve to talk first. This time I ended up speaking with someone on a mobile app development team; Denver has apparently become known for that kind of work, particularly when it comes to integrating AI.


She told me most of these agentic AI tools aren’t even coming out of big tech. They’re coming out of smaller teams, regional developers working with companies like mine to build technologies that address real operational issues— not just something cool to show off.


And it really does matter. Most RPA vendors hand me a set of standard solutions, and I would have to change my procedures to fit their tools. Meanwhile, some agentic systems are being built around the way we actually work. That’s supposed to be the whole point, right? We shouldn’t have to change for the AI; it should change for us.


Wait, I'm digressing-

The Timeline That Keeps Me Up At Night

This is actually what the whole discussion is centered around.


"By 2026." Everywhere. Reports, articles, vendor pitches. People say that by 2026 agentic AI will have mostly taken the place of classical automation in most operational settings.


That's like... next year. That's now.


According to Forrester Research, global spend on agentic AI will reach $23 billion by 2026, up from about $4 billion in 2024. That’s almost a 6x increase over two years. Growth like that doesn’t happen unless something real is changing.


Gartner ran a survey among 2,500 IT leaders across various sectors. 68% said they plan to deploy some form of agentic AI within the next 18 months. More than two out of three. We’re not talking about early adopters anymore. We’re talking mainstream.


It’s not really a question of “if” this is happening. It’s a question of “when.” And “how fast.”

And I need to figure out where my company and my staff fit into that.

What I'm Actually Going To Do (I Think)

Next week, I have to give my advice to the leaders. And here's what I'm leaning towards, even if I'm not 100% sure yet: 


What I'm leaning toward is piloting an agentic AI system on a specific slice of work: order exceptions and the harder customer support queries. We run it for three months in parallel with our existing process. We measure everything: time to resolution, accuracy, customer satisfaction, error rate. And we find out whether the hype is real.


Because here is the deal: traditional automation will not simply disappear overnight. The RPA tools we have been using are still perfectly good for simple tasks. But the difficult ones? The ones consuming 70% of my team’s time and creating most of our errors? Maybe agentic AI is the answer there.


Or it could be yet another over-hyped tech fad that never lives up to expectations. I have seen plenty like that.


But I cannot wait and see. Not with the pressure we are under. Not with how fast this is happening.

The Conflict I’m Dealing With

Here’s what I can’t figure out. I’m excited about this. Genuinely thrilled by what agentic AI could do for our business, for our customers, for how much less stressful my team’s days could be.


But I’m also scared about what it means for jobs. For people. For the career I’ve spent 15 years building.


And maybe both of those things are true. Maybe that is just where we are right now, in this weird interim period when everything is changing and nobody has any idea what the eventual outcome will be.


It is 7:15 right now. My son's alarm clock just went off upstairs. I have to turn off the computer, make breakfast, and drive him to school.


But this will be on my mind the whole day. Most probably the whole week.

This is happening whether I want it to or not, and I need to know where I stand before somebody else makes that choice for me.
