The conversation framework that helped one team go from resistance to 100% AI adoption in 3 weeks — without forcing a single tool.

📌 Quick Takeaways ::

  • Admitting uncertainty builds more trust than false confidence
  • Create dedicated “container” time for honest AI conversations
  • Involve employees in AI decisions — acceptance rates skyrocket
  • Early transparency about job changes preserves trust

Your team is scared.

Not of the technology itself. They’re scared they’ll become irrelevant. That their skills won’t matter. That you know something about their future that you’re not telling them.

And here’s the hard part: you’re probably scared too.

I spent the last three months talking to marketing leaders about AI adoption. Know what I heard most? “My team is anxious, and I don’t know what to tell them because I don’t know what happens next.”

Welcome to 2026. Nobody knows. According to a 2023 Ernst & Young survey, 71% of U.S. employees are concerned about AI, with nearly half saying they’re more anxious about it now than they were a year ago. A 2025 edX study found that 47% of workers view AI as a direct threat to their jobs, and 61% are actively considering upskilling or reskilling in response.

But here’s what I learned: you don’t need to have all the answers. You just need to stop pretending you do.

Most companies are handling AI anxiety badly. They’re rolling out new tools, talking about efficiency gains, and avoiding the real questions their people are asking:

Will I still have a job? What’s my purpose if AI does my best work? Am I valuable anymore?

The pandemic taught us something important: vulnerability works. For a brief moment, leaders were allowed to say “I don’t know what happens tomorrow” and teams actually appreciated the honesty.

Research from Ranjay Gulati at Harvard Business School shows that leaders who admit they’re anxious and uncertain actually build more trust than leaders who project false confidence. Organizations with transparent AI communication show 40% less resistance to new tools compared to companies that try to sugarcoat the disruption.

Translation :: saying “ChatGPT scares me too” is more powerful than pretending you have a five-year AI roadmap figured out.

One CMO I talked to started her team meeting with: “I don’t know what marketing looks like in 18 months. But I know we’re going to figure it out together.”

That honesty changed everything. Within a month, her team went from passive resistance to actively experimenting with AI tools — because they felt included in the uncertainty rather than left out of a secret plan.

You can’t promise your team that their jobs won’t change. You can’t guarantee AI won’t disrupt your department. But you can promise clarity, transparency, and real dialogue.

Research on psychological safety confirms that honest, transparent conversations build trust and create shared purpose. Teams where people feel safe to speak openly — without fear of judgment — perform better during disruption.

Here’s what works:

Acknowledge your own anxiety while taking action anyway. This models the behavior your team needs to see. When you say “AI developments are happening faster than I can keep up with” and then follow it with “so here’s what we’re going to do,” you normalize fear while demonstrating forward motion.

The best leaders I’ve studied don’t deny their fear. They admit they’re anxious, understand why, and choose to move ahead regardless. That mindset shift — from “I must have all the answers” to “we’ll navigate this together” — is what separates effective AI leadership from the chaos most organizations are experiencing.

You need dedicated time to talk about AI anxiety. Not squeezing it into a Friday standup. Not addressing it in passing during a project review. A real conversation.

Psychologists call this a “container” — a designated space where people can bring the tough stuff without judgment or consequences in “the real world.”

Here’s how I’ve seen it work:

One marketing director blocked 90 minutes on a Thursday afternoon. No agenda beyond: “Let’s talk about how we’re feeling about AI.” She started by admitting she’d spent the weekend doom-scrolling articles about AI replacing creative jobs.

Her team opened up. The senior copywriter admitted he’d been secretly using ChatGPT for months and felt guilty. The junior designer said she was terrified of being replaced before she even got good at her job. Another team member worried AI would eliminate the craft they’d spent years developing.

How to Structure Your AI Conversation Container

Frame the work clearly :: “We’re implementing major changes with AI. This brings up complicated emotions for all of us, including me. If we’re honest about these feelings, we’ll be stronger as a team.”

Set ground rules:

  • Use “I” statements
  • No judgment on anyone’s feelings
  • Different viewpoints are welcome
  • What’s said here stays here
  • This is about dialogue, not immediate decisions

Lead with your own vulnerability. Share one genuine fear or uncertainty you have about AI. This gives permission for others to do the same.

The team didn’t solve AI anxiety that day. But they started talking about it honestly. And three weeks later, that same team had the highest AI tool adoption rate in the company — because they’d processed the fear together first.

Disruption expert Charlene Li taught me this: when someone asks “How does this AI tool work?” they’re often really asking “Am I still needed?”

Behind every technical question is a more vulnerable one. Your employees may wonder if they’re still valuable, what their purpose is if AI replaces key tasks, and whether they have a future in your organization.

I tested this with my own team.

When one content strategist asked about AI writing tools, I could have just showed her how to use Claude or ChatGPT. Instead, I asked: “What worries you about AI in content creation?”

Turns out she wasn’t worried about the tools. She was worried her strategic thinking — the part she loved most — would get lost in a rush to automate everything. She feared becoming just an “AI prompt engineer” instead of a strategic content leader.

We redesigned her role around strategy and audience insights, and let AI handle the repetitive formatting and repurposing work she hated anyway. Now she’s excited about AI because it gives her more time for the work she values. Her productivity is up 30%, and more importantly, her job satisfaction is higher than it’s been in two years.

Your move :: In your next 1:1, when AI comes up, ask “What’s the real concern here?” or “What would need to be true for you to feel good about using AI in your role?”

Talking about emotions is important. But people also need to see you doing something. Research shows that teams involved in AI implementation see adoption rates up to 3x higher than teams that have tools forced upon them. Gallup data points the same way: employee involvement in technology rollouts leads to significantly higher acceptance and lower resistance. When employees have influence over how new tools are adopted, they’re far more likely to embrace the change.

Here’s what actually works ::

One team I know dedicates two hours every other Friday to experimenting with AI tools together. No pressure to implement anything, just curiosity and exploration. Last month they discovered three tools that saved them 12 hours a week on reporting. More importantly, they discovered them together, which created shared ownership over the changes.

Create hackathons or “AI office hours” where people can try tools in a low-stakes environment. When people shift from being AI “passengers” — passively watching technology happen to them — to AI “pilots” who actively steer how tools are used, anxiety decreases and innovation increases.

This is where most leaders fail. HR expert Enrique Rubio warns that some companies already know which jobs AI will likely eliminate — and they’re staying silent until the last possible moment.

“What worries me is that some companies already know which jobs are likely to disappear, but they are not saying anything,” Rubio says. “They are not giving people time to prepare or explore other options.”

Don’t be that company. If roles are at risk, say so early. Offer reskilling support. Give people time to make their own decisions about their futures. It’s far better for someone to choose to leave for a new opportunity than to be blindsided by AI replacement with no alternatives.

Early intervention builds trust even during bad news. Silence erodes trust permanently.

Let your team decide what AI should and shouldn’t do in their work. This gives them agency during a time when everything feels out of control.

One creative team I worked with decided together: AI can draft initial concepts, but humans make final creative decisions. AI can suggest copy variations, but humans own the strategic messaging. AI can optimize for performance, but humans define what “performance” means based on brand values.

These norms gave them back a sense of control. Make it clear that while AI can be a tool, humans remain accountable for decisions and work quality. As Rubio suggests: “Tell your managers what they can use AI for, but establish that ultimately they are responsible for decisions and work product.”

Create a simple system to measure AI’s impact. Are people saving time? Where’s it frustrating? Which tasks are better with AI versus worse? Share this data transparently with your team.

One director I know keeps a shared doc where anyone can log “AI wins” and “AI fails” each week. The transparency helps people learn from each other and removes the pressure to make AI work perfectly from day one.

AI anxiety won’t disappear after one conversation. This needs to become part of your leadership rhythm — consistent, meaningful, and ongoing.

The best leaders I talked to make space for these discussions regularly:

  • Monthly AI check-ins with the full team
  • Quarterly “AI state of the union” discussions
  • Regular 1:1s that include “How are you feeling about the changes?”
  • Anonymous feedback channels for people who aren’t comfortable speaking up

One marketing VP told me: “We used to sprint toward AI dominance. Now we pause every month to ask: Is this still aligned with our values? Are our people okay? What have we learned?”

That spacious thinking — the kind that allows for reflection instead of constant execution — creates better outcomes. Research from Megan Reitz and John Higgins at Saïd Business School shows that leaders who operate in “spacious mode,” paying attention expansively without hurry, spot opportunities faster, build stronger relationships, and maintain team motivation through disruption.

While everyone races to “win at AI,” few are pausing to reflect. Yet the best competitors in any race care for their minds and bodies while competing. They reflect, review performance, and strategically decelerate.

Start small. Don’t try to implement everything at once.

This week:

  1. Ask your team: “What’s one task you wish you could delegate to AI?” Really listen to what they say — and what they don’t say.
  2. Block 60-90 minutes in the next two weeks for an honest AI conversation. No agenda beyond creating space for feelings and questions.
  3. Share one genuine uncertainty you have about AI with your team. Model the vulnerability you want to see.

This month:

  1. Identify one person on your team who seems particularly anxious and have a 1:1 focused on understanding their concerns
  2. Create a shared space (doc, Slack channel, regular meeting) for AI experiments and learnings
  3. Be transparent about one thing you don’t know about your company’s AI plans

This quarter:

  1. Establish team norms around AI use that align with your values
  2. Identify any roles that might change significantly and start planning for reskilling or transitions
  3. Measure and share AI’s actual impact on workload and job satisfaction

On building trust :: Leaders who admit uncertainty build more trust than those who project false confidence. Transparency about what you know and don’t know creates psychological safety during disruption.

On creating containers :: Dedicate specific time for AI conversations separate from tactical meetings. Structure discussions with ground rules, lead with vulnerability, and focus on dialogue before decisions.

On taking action :: Shift your team from passive AI passengers to active pilots through shared learning, honest conversations about consequences, and collaborative norm-setting. Teams involved in AI implementation show 3x higher adoption rates.

On job security :: Early transparency about potential job changes builds more trust than silence, even when the news is difficult. Provide reskilling support and time for people to explore options.

On ongoing support :: AI anxiety requires consistent attention, not one-time fixes. Make space for regular check-ins, reflection, and course correction as part of your leadership rhythm.

On maintaining humanity :: Keep humans accountable for final decisions, measure AI’s impact transparently, and prioritize strategic thinking over task automation. The goal is augmentation, not replacement.

Remember :: Your leadership during AI transformation won’t be measured by how confidently you predicted the future or how smoothly you implemented new tools. It will be measured by how you supported your people through uncertainty, how you built trust during disruption, and how you helped them find purpose and agency in a rapidly changing landscape.

Q: What is AI anxiety in the workplace? A: AI anxiety in the workplace refers to employee concerns about job security, relevance, and purpose as artificial intelligence tools are implemented. According to a 2023 EY survey, 71% of U.S. employees report anxiety about AI’s impact on their roles.

Q: How can leaders reduce employee AI anxiety? A: Leaders can reduce AI anxiety by: (1) Admitting their own uncertainty, (2) Creating dedicated time for honest AI conversations, (3) Involving employees in AI implementation decisions, and (4) Being transparent about potential job changes.

Q: Should employees have input on AI tool selection? A: Yes. Research shows that employees with influence over technology adoption report significantly higher job satisfaction and acceptance rates compared to those who have tools forced upon them.

Q: What percentage of workers are concerned about AI? A: According to 2023-2025 research, 71% of U.S. employees are concerned about AI, with 47% viewing it as a direct threat to their jobs.



The gap is widening :: Teams that started these conversations 3 months ago are thriving. Teams that waited are now in crisis mode. Where will your team be in 3 months?