AI in the Workplace: Efficiency Booster or Overhyped Distraction?

Originally published on nojitter.com


As more workers get access to AI tools, it’s time to ask if these tools boost productivity—or if they introduce new complexities disguised as efficiencies.

As artificial intelligence (AI) continues to work its way into the workplace, there are growing concerns that implementation without targeted intention may lead to more confusion than clarity.

While organizations race to adopt generative AI tools, questions are emerging about whether these systems genuinely boost productivity—or if they simply burden employees with new complexities disguised as efficiencies.

A February report from the Pew Research Center found workers who have used AI are more likely to say the tools help them speed up tasks rather than improve the quality of their work, while a May Gartner survey found even executive leaders lack the knowledge and capabilities to support AI adoption.

According to Doug Gilbert, chief digital officer at Sutherland, too many companies attempt to bolt AI onto outdated tech stacks, leading to friction rather than streamlining.

“Organizations must avoid bolting AI onto legacy systems and instead integrate it with modernized processes,” Gilbert said. “Too often, IT departments layer AI onto fragmented tech stacks, forcing it to act as a clunky, human-like interface to navigate data silos or manual workflows.”

He explained that this misalignment frustrates employees and undercuts the very productivity gains AI promises.

Focus on Workflow Integration

For AI to live up to its potential, it must be embedded thoughtfully into daily workflows, rather than appearing as a bolt-on feature.

“AI should be deployed to remove friction from workflows, not add layers of complexity,” Gilbert said. “The key is intentional integration.”

Davit Baghdasaryan, CEO and co-founder of Krisp, echoed that sentiment, emphasizing employee-first design must be the priority.

“AI should integrate directly into existing workflows, minimizing disruptions rather than creating new processes or requiring extensive employee training,” he said. “Solutions should deliver immediate and visible value.”

The concept of “empathetic automation”—Gilbert’s term for designing AI with human needs in mind—requires building systems that enhance, rather than displace, employees.

At Sutherland, this approach includes co-designing tools with frontline workers to ensure relevance and usability.

“By co-designing the interface with frontline staff, we ensured the AI delivered contextual alerts, flagging only high-priority risks, rather than flooding employees with notifications,” he said.

Investing in Signal over Noise

AI solutions that work well tend to target environments with high-volume, repetitive tasks, such as contact centers or transactional support roles.

In real-time communication roles such as sales, customer support, and healthcare hotlines, AI supports quick, informed decision-making by providing timely access to relevant information, helping boost clarity, speed, and responsiveness under pressure.

In these settings, both Gilbert and Baghdasaryan said AI can dramatically reduce workload without adding stress, but both warned that a key challenge is avoiding information overload.

“Organizations should invest in systems that focus on signal over noise,” Baghdasaryan said. “AI should distill large or complex information to empower employees with the most important and relevant information for their roles.”

Similarly, Gilbert described how AI should act as a “smart curator,” providing insights within tools employees already use.

Dan Root, head of global strategic alliances at Barco ClickShare, argued organizations must align AI investment with business strategy and processes before onboarding if they want to ensure adoption and maximize value.

“Leadership should compare the UX/UI of existing tools against new AI-enabled software to ensure employees resonate with functionality and can easily adopt the new tools,” he said.

Leadership and AI Leverage

Leadership plays a pivotal role in setting realistic expectations and building a sustainable AI adoption strategy.

Gilbert warned against buying into “the ‘AI fixes everything’ myth,” urging executives to “cut through the AI hype” and define clear goals based on business value.

“Too often, CEOs demand ‘just put AI in place!’ with a myopic focus solely on implementation,” he said. “That’s a recipe for wasted effort and frustrated teams.”

Instead, leaders should focus on communicating the purpose behind each new tool and tying its introduction to tangible outcomes.

“The messaging surrounding the adoption of AI tools must focus on the real problems it helps solve,” Baghdasaryan said.

He also emphasized the importance of onboarding and training, calling them essential to long-term success.

“With a gradual timeline of integration tied to real use cases, leaders can avoid overwhelming teams with functionality and features that they don’t need.”

Root agreed that training and support are critical to helping employees understand how to use AI-enabled tools, but said it is equally important that they know when to double-check AI output.

“Hallucinations and incorrect results still occur regularly,” he cautioned.

Feedback loops and adaptability are also central to successful implementation.

“The introduction of new AI systems shouldn’t be static,” Baghdasaryan said. “They should be dynamic and evolving with built-in feedback cycles.”

AI in its Place

Not every task is suitable for AI. In fact, one of the biggest risks is overapplying the technology.

“A major challenge we see too often is organizations attempting to force-fit AI to every process, driven by hype,” Gilbert said. “This misalignment creates stress and complexity, undermining productivity.”

Instead, organizations should choose the right tool for the task—robotic process automation (RPA) for structured, repetitive jobs; classical AI for structured data analysis; generative AI for creative outputs; and agentic AI for complex coordination tasks.

“By embracing intelligent automation over force-fitting AI, companies can amplify human strengths,” Gilbert said.

Root added that it’s important for executives to be transparent about their AI journey and to acknowledge the challenges that come with onboarding new solutions.

“They can do this by creating internal series that highlight how the tools are improving the workflow and addressing challenges,” he said.

He advised leaders to start with employees, working with them to understand which types of AI tools would be most impactful in existing workflows.

“Employees are unlikely to engage with a tool being built to replace them,” he explained.

From Baghdasaryan’s perspective, AI works best when it complements human effort, not when it seeks to replace it.

“The best tools support humans in real time by surfacing knowledge, reducing stress, and improving outcomes without removing human control,” he said.

Gilbert advised organizations to define clear success metrics, like reduced context-switching or faster decision-making, to ensure AI cuts through noise.

He said the biggest mistake is deploying AI without user-centric design or maintenance, leading to a deluge of alerts that erode productivity.

“By embedding AI inline, prioritizing usability, and fostering continuous improvement, companies can make AI a focused ally, not an overhyped distraction,” he said.

About The Author: Nathan Eddy is a journalist and filmmaker specializing in digital technology, architecture and urban issues. He is a graduate of the Northwestern University Medill School of Journalism and has been writing about business IT since 2008. He lives in Berlin, Germany.