How AI Is Reshaping Leadership: A Conversation on Trust, Culture, and Organizational Change

I recently participated in a research discussion with a group exploring organizational transformation in the AI era. The conversation covered a lot of ground—from how trust becomes the foundation when everything is changing, to why culture is your only uncopied competitive advantage. I wanted to share that conversation here because I think the questions they asked are the ones every leader should be asking right now.


What I Got Wrong

A year ago, I was convinced that a fully decentralized AI strategy was the answer. I’d spoken about it on panels, certain that giving teams autonomy would let them move fast.

But I got the adoption curve wrong.

Even though the tools got dramatically better—Claude made significant advances in December—adoption was still slower than I expected. And more importantly, it happened at wildly different rates across teams.

Some teams sprinted. Others took months to get started. Some didn’t start at all, waiting for permission that never came.

That’s when I realized: autonomy alone doesn’t create velocity. You can decentralize execution, but without a centralized forcing function, you get uneven adoption and teams moving at different speeds.

The answer is clear expectations set from the center. Not a mandate. Not “you must use AI.” But “we expect every team to be experimenting with AI by Q2. Here’s the budget, here’s the support, here’s what success looks like.”

The expectation creates synchronization. It says: this is not optional exploration—it’s the new baseline. And then teams still decide how they get there.

So the real learning: decentralization needs a spine. A centralized expectation that pulls everyone forward at roughly the same pace. Without it, you get pockets of innovation and vast deserts of “we’ll wait and see.”


Q: How is AI fundamentally changing the role of leaders in organizations?

A: It’s important that I lead with perspective here: I advise startups across multiple industries and sectors. So this is what I see across different types of organizations, not just one company’s experience.

Leadership has gone through changes with every technological advancement—from manual to digital to mobile. But what’s different now is the rate. The speed at which everything is happening is unprecedented.

The critical piece, though? It’s still deeply human.

Yes, technology is changing. Yes, AI can produce things we never thought possible. But how that gets shipped—how it gets used, how it impacts people—that’s on leaders. And that’s where the human part becomes even more important.

Leadership is becoming more grounded in the human, not less. Because humans now have more power, they need more wisdom, more trust, more psychological safety to use that power well.


Q: What stage of the AI journey are we in, and what kind of environment do leaders need to create right now?

A: We’re at a critical moment. The technology is mature enough to be useful, but people are still figuring out what it means for their roles, their teams, their careers.

The environment leaders need to create starts with one thing: trust.

Think about human evolution. Humans survived because we built trust within tribes, created communities. We could collaborate, contribute, and grow together. That foundation is what makes everything else possible.

Now, in this new world with AI, trust becomes even more critical. Psychological safety is a big part of that. People need to feel safe using these tools, safe saying “I don’t know how to use this,” safe experimenting and failing.

Traditionally, we hired people for hard skills. I’m good at writing. I’m good at designing. I’m good at math. But now tools are becoming genuinely good at those things, producing high-quality output. So we’re not hiring for specific skills anymore.

We’re hiring for a new set of skills. For nuance. For how people think.

The person who solves problems in a creative way. The person who naturally gravitates toward ambiguous challenges. The person whose judgment you trust. These are the things that matter now.


Q: How should leadership development programs change in response to this?

A: Here’s where I think most programs get it wrong: they’re still teaching from the old playbook.

Leadership philosophy today comes from a mindset of scarcity. Go compete. Go grow. Go expand your share. That made sense when opportunities were limited. You had a piece of pie, and you needed to maximize it.

But now the pie is unlimited. The value you can create with AI is almost unimaginable.

So the leadership mindset needs to shift from growth to expansion.

Growth is vertical. You grow your market share, your domain, your vertical. But expansion is horizontal: you’re asking, “What’s possible that I haven’t considered yet?” You’re staying curious. You’re learning on the fly.

You can’t plan for expansion the way you plan for growth. There’s no three-year roadmap. Things change too fast. What matters is being curious, staying on your toes, and understanding what’s happening right now.

That’s why I don’t use “growth mindset” anymore. I say “expansive mindset.”

And that’s also why the traditional leadership skills—empathy, servant leadership, psychological safety—are going to be more valuable going forward, not less. Because the pace is faster, people need more human connection, more clarity, more belief that someone has their back.


Q: You mentioned earlier that roles are blurring. How are workflows actually changing as a result?

A: Completely. The entire workflow structure is changing.

Traditionally, we’ve optimized workflows left to right: Step 1 → Step 2 → Step 3 → Outcome. We find the bottleneck and throw resources at it. You can apply this to a coffee shop, a tech company, a bank—anywhere.

Now? Things don’t happen sequentially anymore.

You can spin up multiple approaches in parallel. Try different solutions at the same time. See which one works. Iterate fast. This is especially true in tech, but I’m seeing it across industries.

The overarching outcome stays the same—you have a problem, you’re solving it. But how you get there is fundamentally different. It’s not a pipeline anymore; it’s a network.
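The shift from pipeline to network can be sketched in code. This is a toy illustration, not a real workflow engine: the three “approaches” and their scores are invented, and in practice each would be a genuine experiment (a prompt variant, a tool, a process change) judged by a metric the team defines.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical candidate approaches to one problem. Each returns a
# name and a made-up quality score standing in for a real outcome.
def heuristic(problem):
    return {"name": "heuristic", "score": 0.6}

def ai_assisted(problem):
    return {"name": "ai_assisted", "score": 0.9}

def manual(problem):
    return {"name": "manual", "score": 0.7}

def explore(problem, approaches):
    """Try every approach in parallel and keep whichever scores best."""
    with ThreadPoolExecutor() as pool:
        results = pool.map(lambda run: run(problem), approaches)
        return max(results, key=lambda r: r["score"])

best = explore("reduce onboarding time", [heuristic, ai_assisted, manual])
print(best["name"])  # → ai_assisted
```

The point of the sketch: nothing waits on a gate. The approaches run side by side, and “best” is decided by outcome, not by position in a pipeline.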

This changes everything: how you measure progress, how you allocate people, how you structure teams. A traditional manager organized the workflow, controlled the gate, set priorities.

Now the manager’s role is different. It’s: Do I know my people well enough? Do I understand what they’re naturally good at? Can I match them to problems and then get out of the way?

That’s a completely different leadership muscle.


Q: That sounds like it requires some kind of guardrails or framework to work. How do you set that up?

A: Exactly. This is where most organizations fail. They try to either centralize everything (too rigid) or decentralize everything (too chaotic).

The answer is both.

Let me give you an analogy. I’m a jazz fan, so I’ll use that example: it’s like jazz. The bandleader establishes the key, the tempo, and the style. Then each musician improvises within that framework. The pianist doesn’t check with the bandleader on every chord. The drummer doesn’t ask permission on every fill. But everyone knows the boundaries.

Same with companies.

Centralize your principles. Your values. What matters to you as an organization. What you won’t compromise on. Then decentralize everything else.

Let teams decide how to solve the problem. They have context you don’t. They know what’s happening on the ground. They can move fast. They can try things.

Here’s why this works: AI requires explicit rules. You can’t let an AI system “fill in the gaps” or “figure it out.” It will make assumptions that don’t match your values. So you have to define your playbook clearly.

And when you do that? Your people suddenly have clarity too. No more waiting for approval. No more ambiguity. Everyone knows how to decide.


Q: Should organizations integrate AI use into performance reviews and rewards? Or does that backfire?

A: You can’t be too early, and you can’t be too late. You need to find the sweet spot.

If you’re too early and put some arbitrary metric in place (say, AI adoption percentage), things will fail. Why? Because things are still changing. You don’t know what the right metric is yet. And people will game it: open Claude, autocomplete a variable name, close it, mark it as an achievement. That’s not real adoption.

If you’re too late and never put anything in, you’re also wrong because you’re not leveraging the full benefit of these tools.

Here’s what actually works: start experimenting. Try different things. And at the end of the day, always measure by outcomes.

You’re a business. You exist for a reason—to solve a customer problem. Focus on that. Pin everything to that. If you’re solving the customer problem and the outcomes are there, if those are measurable, then put it into your performance conversations.

But don’t define skills, career ladders, or performance conversations the way you used to. The traditional versions aren’t relevant anymore. No one’s published the playbook yet. Everyone’s learning in real time.

The only thing is: don’t make horrible mistakes while you’re learning.


Q: You’ve touched on this, but let’s go deeper. How is organizational culture changing? Does it have to evolve?

A: Culture isn’t changing—it’s becoming more important.

Here’s the hard truth: your competition can build what you built with AI. Probably faster than you think. They can copy your product, your features, your roadmap.

They cannot build your culture overnight.

So if you’re a leader right now, invest in culture—or more precisely, invest in the environment—the way you’d invest in product. Psychological safety. Clarity on what matters. A sense of purpose. Community connection.

Because people don’t just want money. That excitement wears off. What keeps people is:

  • Feeling safe
  • Understanding what matters
  • Being connected to something larger than themselves
  • Knowing they’re valued

That’s what scales companies long-term. That’s what keeps your best people.

And this connects to something else: you can’t be self-centered as a company anymore. Your community is your customer. Your community is your talent pool. Your community is your reputation.

If you want to go fast, you can go alone. If you want to go far—if you want to build something sustainable—you have to bring people along.


Q: There’s a lot of talk about entry-level jobs disappearing. What do you think actually happens to those roles?

A: Entry-level roles will change. Everything is changing. But they’re not disappearing.

What’s happening is that people are entering with higher baseline skills—they’ve been exposed to AI in school, college, and side projects. They can do more, faster.

Before, an intern might spend a month running reports. Now they can generate that report in an afternoon. So they move on to bigger problems.

And here’s what’s interesting: because the barrier to doing technical work has lowered, the type of problems they go after changes. They don’t get bored and sit around. They go after bigger, harder problems.

Cancer research is a great example. For years, computational limits capped how much DNA analysis we could do. Those limits are falling, so researchers are going after problems they couldn’t touch before.

Same with your entry-level people. They’re not leaving. They’re leveling up to harder problems. And when they eventually become leaders, they’re going after problems that have never been chased before.

That’s why the expansive mindset matters. That’s why diversity matters. That’s why bringing people along in this journey matters. The problems you solve five years from now will be shaped by who’s at the table today.


Q: If you could give one piece of actionable advice to an organization just starting to explore AI, what would it be?

A: Start logging.

I know that sounds simple, but it’s not. What I mean is: every day, record what you’re doing with AI. What worked. What didn’t. What you learned.

Not for compliance. For knowledge.

In three months, you’ll have a playbook built on your real data, your real problems, your real context. That beats any generic framework or consultant recommendation. That becomes your organizational memory. That becomes your strategy.

And because everyone’s learning on the fly right now, that log becomes invaluable. It’s your institutional knowledge.

Be curious as an organization. That’s the biggest one. But operationally, start logging everything.
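A minimal sketch of what “start logging” could look like in practice. The file name, fields, and example entry are all assumptions; the only point is that each entry is dated, structured, and appended somewhere the whole organization can mine later.

```python
import json
from datetime import date
from pathlib import Path

LOG = Path("ai_log.jsonl")  # hypothetical location; one JSON entry per line

def log_experiment(tool, task, worked, learned):
    """Append a dated record of one AI experiment: what, whether, and why."""
    entry = {
        "date": date.today().isoformat(),
        "tool": tool,
        "task": task,
        "worked": worked,
        "learned": learned,
    }
    with LOG.open("a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

log_experiment("Claude", "draft quarterly report", True,
               "first draft in 20 minutes; numbers still need human review")
```

Three months of entries like this, grouped by tool or task, is the playbook described above: built on your real data, in your real context.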


Q: Last question—thinking about corporate social responsibility and AI: how should companies be thinking about this?

A: Companies can’t afford to leave people behind in this journey.

The problems you’re going to solve are going to be bigger. More complex. More impactful. You need more people, not fewer. You need diverse perspectives. You need community.

History has taught us this: if you want to go fast, go alone. If you want to go far, go together.

So CSR isn’t going to look like it used to. You’re not just writing a check. It’s embedded. Your community might be part of your customer base. They might be part of your team. Your customers and your community overlap.

And that’s not charity. That’s strategy. That’s how you build something that lasts.

The gap between technology and humanity is closing. The question is: who’s in that gap with you? And are you bringing them along?


The tools are getting better. The pace is accelerating. But the fundamentals of good leadership—knowing your people, trusting them, giving them clarity and safety, connecting work to purpose—those aren’t changing.

They’re just becoming more important.


What resonates most in this conversation? Is it the trust piece? The culture advantage? The idea of centralized expectations pulling teams forward? I’m genuinely curious what lands—reply in the comments and let’s continue the conversation. Part 2 will go deeper into what we’re learning as adoption accelerates.

