Don’t Let AI Workslop Clog Up the Pipes

A growing body of research reveals an AI industry in the crosshairs: As confidence and value hang in the balance, pressure is mounting for top execs to deliver on their steep investments. Hard-nosed strategies and strong guardrails may pave the way forward.
Nov. 19, 2025
7 min read

Key Highlights

  • Two in five employees have recently dealt with “AI workslop” that creates more work than value. It’s just one potential reason why 95% of organizations reviewed in a recent MIT report are getting zero return on their GenAI investments. 
  • Despite these challenges, a sweeping new survey says 74% of U.S.-based CEOs still see the tech as a top investment priority.
  • Faced with such variability, executives must take a hard look at their strategies for implementing AI and ensuring broad-based, cohesive buy-in, according to one digital advisor, who predicts the tech will continue to gain traction for B2B organizations in the years to come.

For many workers, AI can amount to little more than smoke and mirrors. According to recent research from Stanford University and BetterUp Labs, 40% of full-time U.S.-based desk workers had received “AI workslop” within a month of being surveyed.

Generated by an algorithm, slop deliverables — such as “slick slides, lengthy reports, overly tightened summaries, or code without context” — create the illusion of progress but deliver no real value and lots of (costly) confusion: Each instance takes about two hours to resolve, which amounts to $186 per worker per month, or the equivalent of $9M annually for a 10,000-person company, researchers estimate.

Tarnished confidence is another common side effect. “The worst AI solution is the one that gets a workaround because people don’t trust it,” says Chuck Reynolds, managing director and partner in L.E.K. Consulting’s Boston office and a member of the firm’s digital practice.

Indeed, more than half of the 1,150 respondents said receiving slop made them view the sender as less creative; more than a third saw them as less capable, reliable, trustworthy, and/or intelligent. The pervasiveness of this poor experience is a recipe for organizational ills ranging from disengagement and duplicative work to false productivity and wasted money, researchers say. It’s also out of sync with the amount of money execs are pouring into the tech.

Nearly three in four (74%) U.S. CEOs surveyed by consulting firm KPMG last quarter say "AI is a top investment priority despite ongoing economic uncertainty." Of the more than 1,300 CEOs, including 400 in the U.S., tapped for the firm's report, four in five (81%) plan to spend at least 10% of their budget on the tech over the next year. Additionally, virtually all respondents (98%) are at least cautiously optimistic about their organization's ability to keep pace with the speed of AI evolution. And they're eager to make good on their investments to date: 90% of top execs surveyed expect to see a return in three or fewer years, up from 22% in 2024.

So far, though, this optimism hasn’t always translated into results: 95% of organizations reviewed by MIT in a July report are seeing zero return on their GenAI investments. Even in this embattled arena, however, Reynolds is hopeful. In his client work, he’s not “seeing a deluge of lower-quality effort and work from employees using AI.” And it’s possible to right the ship.

De-slopping strategy

Executives observing a pattern of workslop must first identify the root cause by taking “a human-led, empathetic approach,” Reynolds says. “People are still at the core of an AI transformation and will always be.”

Start by assessing whether teams are dissatisfied with the tool, or confused over why, when, and how to use it, he suggests. Ingraining what quality looks like takes refinement and repetition, so “establish the parameters and re-establish the parameters,” he advises. Then, see what the data reveals about the tool’s fit. “We could also be overestimating the impact or the output that we’re getting,” he says.

Of course, it’s easier to avoid slop altogether than it is to course correct once it’s pervaded the organization. That means AI, like any technology, should only be considered after leaders have identified a clear use case tied to corporate strategy, Reynolds says.

He’s especially excited by “the meat and potatoes” application of AI within the supply chain to better orchestrate manufacturing, forecast and deploy inventory, and improve customer service by having “the right product at the right place at the right time,” he says. “It’s a data-rich environment. There’s a lot of ability to interject change.”

Another strong use case: generative search, which is “changing the game of discovery in the B2B sales organization,” says Reynolds. He predicts that the “sales agent of the future” will build robust support scripts for sales teams by combining a company’s transactional and relational data with external datasets of firmographics and other “context that might not be in the four walls of an organization.”

It’s also important to realize when AI isn’t the right enabler, such as for a sales organization that “runs off the back of Excel and emails,” says Reynolds. Another common misstep in adoption, according to MIT researchers, is favoring “visible, top-line functions over high-ROI back office.”

Giving guidance and guardrails

When it comes to GenAI, “the core barrier to scaling is not infrastructure, regulation, or talent. It is learning,” MIT researchers write. Often, these systems “do not retain feedback, adapt to context, or improve over time.”

The same goes for people, says Reynolds. “A lot of clients establish their AI strategy and say, ‘well, we provided everyone a ChatGPT enterprise license, and our governance policy is that they can’t use external data and can’t [input] client information,’” he explains. It’s not enough. “Truly transformative benefits” are rare without the upfront investment in educating and winning broad buy-in, he says.

And yet, that proactiveness is just as hard to come by: Only 30% of employees say their organization has guidelines or policies for using AI at work, according to Gallup. Additionally, workers cite an “unclear use case or value proposition” as their most common AI adoption challenge. In contrast, employees who strongly agree that their leadership has communicated a clear integration plan are three times as likely to feel very prepared to work with the tech and 2.6 times as likely to feel comfortable using it, the firm reports.

To create this culture of confidence, Reynolds recommends implementing the following guardrails and governance best practices:

  • Account for strengths and limitations. “Expecting an untrained employee to use an untrained model and have high-quality and caliber output is a recipe for disaster,” Reynolds says. Instead, ensure people know where and when the solution is being implemented, what data informs it, and why it’s going to help them make better decisions, he advises.
  • Follow the flow of work. When it comes to capability building, the most sophisticated organizations are moving past playbooks and prompt guides to “much more context-aware models” that answer questions and upskill workers in the course of their work, Reynolds explains. This, he says, “fundamentally starts to remove some of the risk of AI workslop because it’s embedding AI and the concepts of large language models or machine learning into a process” rather than putting the onus on individuals to decide when and how to augment their work.
  • Customize solutions to fit organizational needs. Sixty percent of the organizations analyzed by MIT had looked into enterprise-grade, custom, or vendor-sold systems, but only 20% had reached the pilot stage, and just 5% had made it to production. “Most fail due to brittle workflows, lack of contextual learning, and misalignment with day-to-day operations,” researchers write.
  • Build a feedback loop. “If you’re just using a generic, once-trained model without allowing it to have feedback on what is working or what isn’t working … you’re missing that opportunity to continue to improve, which is one of the benefits of having AI in an organization,” Reynolds says.
  • Get real about results. Early metrics (e.g., how many times a day someone used a GenAI tool or how many questions a chat agent answered) say very little about ROI, says Reynolds. “You can have a lot of impact, but not a lot of value.” That’s why a solution’s design and “data exhaust” are as important to monitor as what’s going into it, he explains. If, for example, an organization deploys a robust predictive pricing tool, but salespeople are overriding its recommendations almost every time, the solution isn’t generating impact or value, regardless of how accurate its outputs are.

Finally, says Reynolds, execs should brace for even more movement. “B2B organizations are going to be much more disrupted than B2C in the next wave of AI transformation,” he predicts.


Want the EDGE delivered to your inbox every week?
It's free to subscribe, but the intel is priceless.

About the Author

Delaney Rebernik

Contributor

Delaney Rebernik is an independent journalist covering leadership, death, and digital life, and a writer and consultant for purpose-driven organizations. She’s also Design Observer’s Executive Editor. As an award-winning editorial and communications leader, Delaney helps media brands, memberships, and other champions of community, knowledge, and justice tell vital stories and advance worthy missions. 

In her spare time, Delaney consumes horror and musical theater in equal measure. She lives in Brooklyn, New York, with her husband Todd and pup Spud, named for her favorite food. Learn more at delaneyrebernik.com, and connect on Bluesky and LinkedIn.
