July 22nd, 2025

Charlene Li, Senior Advisor, Author, and Founder of Altimeter Group, believes disruption is never about the technology and always about the people. To transform, leaders need to understand and prepare for the changes required of their people. That is why strategy is so important for harnessing the power of AI.
Unfortunately, she says, AI strategies often lose their relevance after just a few weeks, given the rapid pace of change. We talked to Li about how to create a strategy in a changing environment – and still stay aligned with your long-term strategic goals and objectives.
Bridges and Barriers
When she meets with companies as an advisor, Li starts with three key questions:
- What is your vision?
It’s easy for companies to become lost in day-to-day operations without grounding themselves in a larger vision. Li’s advice: Stay focused on where you’re going, not just where you are.
- What is your strategy to get there?
What are the choices you’re going to make to get to that future? These questions are designed to ensure that everyone is on the same page. And being able to adapt and make strategic decisions on the fly is essential. According to Li, most teams struggle when it comes to defining their strategy. “When I ask people, ‘What’s your strategy?’ there usually isn’t clarity, and that’s the key,” she explains.
Unlike technologies such as cloud, mobile or social, which took years to mature, AI is different. It can create or destroy value overnight. Because of this, Li emphasizes an agile approach to strategy development. “You have to work on the strategy every quarter,” she advises. While you may have a three-year vision, it’s essential to regularly check in with your team to reassess, refine and adjust the strategy. “I recommend a six-quarter walk,” Li says. “For 18 months, sit down every quarter, check in, and see if the most important objectives are still relevant.”
- What is your personal role in making the strategy a success?
Finally, the most crucial question is about personal ownership. Li notes that every member of the organization must understand their responsibility in delivering on the vision.
According to Li, when everyone can answer these three questions clearly, a company is destined for success.
“The CEO needs to be driving the AI strategy, yet often we hear CEOs say, ‘My CIO is the technology lead, and my marketing or HR teams take the lead on that in their areas.’ But what happens when digital workers sit alongside human workers? What does the workforce look like then? And when you’re adopting the unproven technology of GenAI, how do you build trust? We need a trust model built on what I call the AI trust pyramid, built on governance, fairness, quality, responsibility and transparency. In terms of governance, I call it ‘Goldilocks governance.’ Not too much, not too little, but just the right amount to keep you safe while moving your business forward.”
Charlene Li, Senior Advisor, Author, and Founder of Altimeter Group
AI in Society
Li stresses the importance of being adaptable. With AI disrupting industries at such a rapid pace, organizations must be ready to pivot. For example, when new technologies like DeepSeek emerge, it’s important to remain flexible. “Instead of saying, ‘we’re locked in,’ companies should be able to say, ‘If there’s a better way to do this, we can shift our priorities, people, and budgets,’” Li adds. Strategy, she argues, should be written in pencil, not ink.
There’s also the question of whether it’s best to build, buy or partner. Li believes the real value of AI lies in the application layer and the way it’s customized to solve an organization’s specific challenges. She suggests companies build just enough to understand how their data works, so they can make informed decisions when it comes to buying or partnering with AI companies.

The need for AI literacy means that technology departments can no longer work in isolation. With the advent of no-code AI, even business leaders must get hands-on. “C-suite execs can’t be insulated from these conversations anymore,” Li says. “They must be involved because the way you accomplish your goals has to involve technology.”
As AI becomes more integrated into business strategies, concerns around trust and governance have grown. Li shares an approach she calls the “AI trust pyramid,” which mirrors Maslow’s hierarchy of needs. At the bottom, she places safety, security, and privacy, followed by governance, fairness, quality, responsibility, and transparency. This framework helps companies build trust in AI, ensuring it aligns with organizational values and ethical standards.