
By Dr. Gleb Tsipursky

Generative AI is not just another technological evolution—it’s a disruption of identity, expertise, and the unwritten social contracts that shape how people perceive their value in the workplace. According to my interview with Amy Loomis, Research Vice President for the Future of Work at IDC, organizations must recognize that Gen AI adoption is not only about tools and processes. It’s also about people and adapting to new ways of working.

Companies eager to adopt Gen AI often underestimate the resistance that arises not from technical obstacles, but from psychological ones. Loomis shared that while a growing number of organizations—up from 35% to 61% in just one year—are applying Gen AI for tasks like assistance and information retrieval, far fewer are venturing into more complex implementations such as agentic AI. At the end of 2024, only 20% of IT leaders surveyed said they were already widely using AI agents. That said, far more were using these technologies in a few specialized areas (28%) or conducting initial proofs of concept (27%).

But the numbers only tell part of the story. The deeper friction lies within the workforce itself—particularly among experienced professionals who may feel that Gen AI compromises the value of the very expertise they’ve spent decades building. This isn’t simple insecurity about expertise; it’s IT leaders having to establish new technological bona fides. And according to Loomis, organizations that want to succeed with Gen AI must address that emotional truth head-on.

Build Skills Where People Are, Not Where You Wish They Were

Companies that want to overcome reluctance to adopt Gen AI solutions—particularly AI agents—must first move beyond outdated assumptions about learning. Traditional training, isolated from day-to-day workflows, is no longer sustainable. “Telling someone to take an hour-long training during their quota-driven workday just doesn’t work,” Loomis pointed out.

Instead, the most effective organizations are embedding learning into the flow of work through tools like digital adoption platforms that guide users step-by-step in real time. This allows for just-in-time learning that doesn’t demand more from workers—it meets them where they are. Complementary strategies like microlearning, experiential simulations with AR or VR, and real-time feedback loops based on metadata help create an ecosystem where learning becomes part of doing.

And critically, it’s not just about technical acumen. Loomis emphasized that what many call “soft skills” should be more accurately understood as essential human skills. In a Gen AI world, the ability to translate human insight into prompts, contextual understanding, and ethical decisions is as crucial as coding. Skills like flexibility, cross-disciplinary thinking, and emotional intelligence become differentiators.

Resistance Isn’t Always About Fear—Sometimes It’s About Pride

One of Loomis’ most compelling insights from the interview is that resistance to Gen AI is not always about fear of job loss per se. It can stem from concerns about perceived relevance late in a career. A seasoned developer who has spent decades mastering Java or managing legacy systems may worry about the Gen AI learning curve, despite support from AI assistants and natural-language coding capabilities.

“Nobody wants to look like a noobie,” Loomis said. This kind of ego-based resistance isn’t irrational—it’s deeply human. Organizations need to respond accordingly. That might mean offering those individuals mentorship roles, allowing them to maintain their sense of professional status as they pass on valuable expertise. It might mean identifying ways for them to steward legacy systems even as new tools are adopted. It definitely means acknowledging that some transitions require two steps back before one step forward—and that’s okay.

Creating a psychologically safe culture of learning is vital. People need to feel that their worth is not erased by new technology but amplified through new opportunities. And for this to happen, the system itself must support that journey, with integrated pathways to upskilling that don’t require sacrificing productivity or personal time.

Customization Is The New Standard—And It’s Exhausting

Gen AI differs from previous digital transformations in one major way: it doesn’t come with a universal playbook. Unlike deploying an ERP system or adopting Salesforce, Gen AI requires companies to deeply customize use cases based on function, role, and even individual workflows. And because the technology is evolving so rapidly, what doesn’t work today may suddenly work tomorrow.

This creates a paradox. On one hand, Gen AI promises transformative productivity. On the other, it demands a relentless pace of adaptation. As Loomis explained, functions like marketing—which tend to be more fluid and creative—have undergone significant restructuring to accommodate AI. Roles have blurred. New governance models have emerged.

By contrast, departments with deterministic workflows, such as finance or procurement, face a different kind of transition—less radical in process but no less important in implementation. Ultimately, organizations must assess the “context level” of each role to determine where Gen AI can be most effectively integrated and what kind of change management is required.

Let Governance Guide Innovation, Not Restrain It

One of the risks in navigating this transformation is swinging too far toward control and locking down experimentation altogether. Yet, as Loomis noted, “Just because you can automate something doesn’t mean you should.” Governance is critical—but it must be fluid enough to evolve with the technology.

Best practices emerging from the field include creating a Gen AI Center of Excellence that brings together cross-functional stakeholders to define and refine governance parameters. This model enables organizations to identify which tools are authorized, which guardrails are needed, and—importantly—how to encourage and evaluate grassroots innovation without losing control.

Additionally, governance must account for risk management. Security, data privacy, hallucinations, and prompt injection vulnerabilities are very real concerns. But Loomis observed a curious trend: in practice, fear of missing out is often more powerful than fear of failure. Companies are moving ahead anyway—cautiously, but determinedly.

Mitigating risk requires both automation and human oversight. AI should be trained to flag deviations and pause operations when thresholds are breached. Simultaneously, employees must be equipped with the right training to recognize and report failures—intentionally or not, human behavior remains a significant risk vector.

The Future: Embedded AI, Invisible Interfaces, and Reimagined Roles

Looking ahead, Loomis sees Gen AI becoming an invisible layer of the workplace experience—like cloud computing today. Most employees won’t think in terms of bots and agents; they’ll simply get their work done through orchestrated digital workflows. But the impact on job roles will be profound.

New roles such as AI trainers, ethicists, and workflow designers will emerge. Existing jobs will be redefined. Entry-level roles may be replaced by automation, creating a gap in professional development that only AI tutors or embedded coaching systems can fill. As AI handles repetitive tasks, new hires will be expected to contribute at higher levels without traditional on-the-job learning—raising the bar for initial performance and onboarding.

Loomis summed it up with a metaphor: “Organizations are like fish. They have to swim to stay alive.” In the Gen AI era, that means continuously evolving through inclusive skill development, ego-sensitive change management, adaptive governance, and a relentless focus on embedding learning into the flow of work.

Because in the end, overcoming resistance isn’t about mandates. It’s about meaning. And leaders who understand that will be the ones who not only integrate Gen AI effectively but build the kind of resilient, forward-looking organizations that thrive in its wake.

About the Author

Dr. Gleb Tsipursky was named “Office Whisperer” by The New York Times for helping leaders overcome frustrations with hybrid work and Generative AI. He serves as the CEO of the future-of-work consultancy Disaster Avoidance Experts. Dr. Gleb wrote seven best-selling books, and his two most recent ones are Returning to the Office and Leading Hybrid and Remote Teams and ChatGPT for Leaders and Content Creators: Unlocking the Potential of Generative AI. His cutting-edge thought leadership was featured in over 650 articles and 550 interviews in Harvard Business Review, Inc. Magazine, USA Today, CBS News, Fox News, Time, Business Insider, Fortune, The New York Times, and elsewhere. His writing was translated into Chinese, Spanish, Russian, Polish, Korean, French, Vietnamese, German, and other languages. His expertise comes from over 20 years of consulting, coaching, speaking, and training for Fortune 500 companies from Aflac to Xerox. It also comes from over 15 years in academia as a behavioral scientist, with 8 years as a lecturer at UNC-Chapel Hill and 7 years as a professor at Ohio State. A proud Ukrainian American, Dr. Gleb lives in Columbus, Ohio.