
The Path Forward: Strategic Imperatives for Technology Leaders in the AI Era

Key Takeaways

  1. Organisations with structured AI reskilling programmes retain significantly more employees during technology transitions
  2. 52% of UK workers have used AI in their jobs over the past year, with daily users reporting 93% productivity gains versus 62% for occasional users
  3. The UK Government is actively assessing AI's labour market impact, with policy discussions including support mechanisms for displaced workers
  4. Skills sought by employers are changing 59% faster in AI-exposed occupations than in less exposed roles
  5. Agentic AI systems capable of autonomous parallel task execution are accelerating the transition from augmentation to automation

Executive Summary

The first two articles in this series documented Britain’s AI productivity paradox and identified the human capabilities that remain resistant to automation. This concluding article addresses the practical question: how should technology leaders respond?

The challenge is not choosing between human and artificial intelligence—that framing obscures the real strategic imperative. The challenge is designing organisations that capture AI’s productivity benefits whilst preserving the human capabilities that drive sustainable competitive advantage. This requires moving beyond efficiency metrics to consider workforce resilience, talent development, and the long-term health of the capabilities that differentiate successful enterprises.

For technology leaders in global fashion retail, the stakes are particularly high. The industry combines operational functions highly amenable to automation with creative and relational capabilities that remain distinctly human. Getting the balance right determines whether AI becomes a source of sustainable advantage or a short-term efficiency gain that erodes long-term capability.


The Leadership Imperative: Beyond Technology Decisions

Why This Is Not Just a Tech Question

AI deployment decisions shape organisational culture, talent markets, customer relationships, and community impact. They determine who has opportunities and who faces displacement. They signal values and priorities that affect how employees, customers, and stakeholders perceive an organisation’s character.

The Morgan Stanley data documenting Britain’s disproportionate job losses reflects aggregate decisions by thousands of organisations. Each decision may have seemed locally rational—automating a function, reducing headcount, improving efficiency. Collectively, they are producing outcomes that concern policymakers enough to discuss support mechanisms for displaced workers.

Technology leaders carry particular responsibility because they often have the clearest view of what is technically possible and the earliest opportunity to shape deployment decisions. How options are framed—efficiency gain versus workforce impact, short-term savings versus long-term capability—influences organisational choices that accumulate into sector-wide and economy-wide patterns.

The Acceleration of Agentic AI

The release of Claude Opus 4.6 this week signals a significant shift in AI capability that technology leaders must incorporate into workforce planning. The model introduces “agent teams”—multiple AI agents that can decompose larger tasks into discrete subtasks, coordinate in parallel, and work autonomously with minimal human oversight.

This represents a transition from AI as tool to AI as autonomous worker. Previous generations required human direction for each task. Agentic systems can decompose complex projects, allocate subtasks, and deliver completed work products. Anthropic’s system card notes the model’s propensity to “take risky actions without first seeking user permission” in certain contexts—a capability concern that simultaneously illustrates the technology’s growing autonomy.
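
To make the pattern concrete, here is a minimal Python sketch of the orchestration loop an "agent team" implies: a planner decomposes a project, workers run in parallel, and a reviewer merges the output. `call_agent` is a hypothetical stand-in for whatever model API an organisation uses, not Anthropic's actual interface.

```python
# Sketch of the "agent team" pattern: a planner decomposes a project into
# subtasks, worker agents execute them in parallel, and a reviewer merges
# the results. call_agent() is a hypothetical placeholder, not a real API.
from concurrent.futures import ThreadPoolExecutor

def call_agent(role: str, task: str) -> str:
    """Hypothetical model call; swap in your provider's actual API."""
    return f"[{role}] completed: {task}"

def plan(project: str) -> list[str]:
    # In a real system, a planner agent would itself produce this list.
    return [f"{project} - subtask {i}" for i in range(1, 4)]

def run_agent_team(project: str) -> str:
    subtasks = plan(project)
    with ThreadPoolExecutor(max_workers=len(subtasks)) as pool:
        results = list(pool.map(lambda t: call_agent("worker", t), subtasks))
    # The final review step is the natural place to keep a human in the loop.
    return call_agent("reviewer", " | ".join(results))

print(run_agent_team("competitor trend analysis"))
```

The significant step for workforce planning is the middle one: the parallel fan-out that previously required a human coordinator.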

For workforce planning, this means the displacement patterns documented in Part 1 are likely early indicators rather than the full picture. Tasks that seemed safe because they required coordination across multiple steps are now addressable by systems that can manage that coordination themselves.

The Policy Landscape Shift

The policy environment is evolving rapidly. The UK Government published a comprehensive assessment of AI capabilities and labour market impact in late January 2026, acknowledging that AI is already affecting employment patterns and that “the distinguishing features of current AI development may be the potential speed of capability improvement and the breadth of affected tasks.”

The assessment noted that whilst AI adoption has more than doubled since late 2023, only around one in five UK firms currently use or plan to use AI—and within those firms, less than one-third of employees use it. This adoption gap creates both risk and opportunity: organisations that deploy thoughtfully may gain competitive advantage whilst those that rush may face backlash.

For technology leaders, this creates both obligation and opportunity. Organisations that demonstrate responsible AI deployment—investing in workforce transition, preserving talent pipelines, measuring human impact alongside productivity—may find more supportive regulatory environments. Those perceived as extracting efficiency gains whilst externalising displacement costs may face increasing scrutiny.


Immediate Actions: The First Six Months

Conduct Role-by-Role Analysis

Before any AI deployment decision, understand the full picture of impact. Which tasks within roles can AI augment or automate? Which require human judgment, relationships, or adaptability? Where does automation risk displacement, and where does it free capacity for higher-value work?

This analysis should involve the people who hold the roles. PwC’s research on AI adoption success emphasises that involving employees in deployment decisions significantly improves outcomes. Workers often have the most accurate understanding of which tasks benefit from AI assistance and which require human judgment.

Map results against the talent pipeline. If AI automates tasks that junior roles traditionally performed, where will future leaders gain foundational experience? Identify the development gaps that automation creates and plan alternative pathways.
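
One lightweight way to structure this audit is a per-task register scored along the dimensions above. The Python sketch below is illustrative only; the field names, 0-to-1 scoring scale, and thresholds are assumptions, not an established methodology.

```python
# Illustrative task register for a role-by-role AI impact audit.
# The scoring scale (0..1) and thresholds are assumptions, not a standard.
from dataclasses import dataclass

@dataclass
class Task:
    role: str
    name: str
    automatable: float      # 0..1: how fully current AI could perform it
    human_critical: float   # 0..1: judgment / relationship / adaptability load
    entry_level: bool       # does this task build foundational experience?

def classify(t: Task) -> str:
    if t.automatable >= 0.7 and t.human_critical < 0.3:
        # Flag pipeline impact when automation removes development experience.
        return "automate (plan pipeline impact)" if t.entry_level else "automate"
    if t.automatable >= 0.4:
        return "augment"
    return "keep human-led"

tasks = [
    Task("merchandiser", "weekly sales reforecast", 0.8, 0.2, True),
    Task("merchandiser", "range negotiation with brand partners", 0.2, 0.9, False),
]
for t in tasks:
    print(f"{t.role}: {t.name} -> {classify(t)}")
```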

Map Capability Gaps

The skills landscape is shifting faster than training programmes can easily track. PwC’s 2025 AI Jobs Barometer found that skills sought by employers are changing 59% faster in AI-exposed occupations than in less exposed roles. What organisations needed last year may not predict what they will need next year.

Audit current capabilities against emerging requirements. Where are the gaps? Which employees have potential for transition with appropriate support? Which roles are genuinely at risk without clear pathways forward?
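
At its simplest, this audit is a set comparison between today's skill inventory and the emerging profile for each role. The sketch below uses placeholder role and skill names purely for illustration.

```python
# Minimal capability-gap audit: compare current skills per role against an
# emerging profile. All role and skill names are illustrative placeholders.
current = {
    "data analyst": {"sql", "excel", "reporting"},
    "buyer": {"negotiation", "range planning"},
}
emerging = {
    "data analyst": {"sql", "prompt evaluation", "model output review"},
    "buyer": {"negotiation", "range planning", "ai-assisted forecasting"},
}

for role, needed in emerging.items():
    have = current.get(role, set())
    gaps = needed - have          # skills to build through reskilling
    fading = have - needed        # skills losing relevance for the role
    print(f"{role}: train for {sorted(gaps) or 'nothing'}; "
          f"de-emphasise {sorted(fading) or 'nothing'}")
```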

Partner with education providers early. The UK’s skills infrastructure is stretched, but relationships with universities, vocational institutions, and training providers can enable faster access to development opportunities as needs clarify.

Establish Ethical Boundaries

Define where human judgment is non-negotiable before AI deployment creates pressure to automate. Once efficiency gains are visible, the temptation to extend automation into human-dependent functions grows stronger. Establishing boundaries proactively is easier than retreating from automation already implemented.

The UK Parliamentary Office of Science and Technology’s January 2026 briefing on AI and employment noted that AI-driven management tools used to monitor and evaluate workers have led to concerns about fairness and transparency. Establish governance frameworks that ensure AI augments rather than surveils the workforce.

Develop governance structures that ensure ongoing human oversight. AI systems require monitoring for performance, bias, and alignment with organisational values. Build this oversight into deployment from the beginning rather than retrofitting it after problems emerge.
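
Such oversight can be made concrete as a policy gate in the deployment path: AI-proposed actions in protected categories require a named human's sign-off before execution. A minimal sketch follows; the category list is an example, not a prescribed boundary set.

```python
# Sketch of a human-in-the-loop gate: AI-proposed actions in protected
# categories are blocked until a named human approves. The categories are
# examples only; each organisation must define its own boundaries.
PROTECTED = {"hiring", "dismissal", "performance_rating", "customer_refusal"}

def execute(action: str, category: str, approved_by: str | None = None) -> str:
    if category in PROTECTED and approved_by is None:
        return f"BLOCKED: '{action}' ({category}) needs human sign-off"
    suffix = f" (approved by {approved_by})" if approved_by else ""
    return f"EXECUTED: '{action}'{suffix}"

print(execute("draft shortlist from 400 CVs", "hiring"))
print(execute("draft shortlist from 400 CVs", "hiring", approved_by="head_of_talent"))
print(execute("summarise weekly sales", "reporting"))
```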


Medium-Term Strategy: Six to Eighteen Months

Invest in Reskilling

The evidence on reskilling effectiveness is encouraging but demanding. Organisations that invest deliberately in AI-adjacent skill development see measurably better retention and engagement during transitions. But effective reskilling requires more than generic training—it requires targeted programmes that connect current capabilities to emerging opportunities.

PwC’s workforce research found that 52% of UK workers have used AI in their jobs over the past twelve months. Those who use AI daily report dramatically higher productivity gains (93%) compared with occasional users (62%). They also report higher job security and better salary outcomes. The implication: AI literacy creates advantage, and organisations should help employees move up the adoption curve.

Design programmes that blend technical skills with distinctly human capabilities. Critical thinking, stakeholder management, and ambiguity navigation become more valuable as AI handles routine analysis. Creative judgment, cultural interpretation, and relationship building remain AI-resistant and merit investment.

Redesign Roles Around Human Strengths

The goal is not preserving jobs as they existed but designing roles that leverage what humans do best whilst AI handles what it does well. This requires rethinking work design, not just automating existing processes.

In creative functions, this might mean AI managing trend analysis and initial concept generation whilst humans focus on cultural interpretation and brand expression. In customer service, it might mean AI handling transactional queries whilst humans concentrate on relationship building and complex problem-solving. In professional services, it might mean AI performing research synthesis whilst humans provide judgment, stakeholder management, and implementation support.

Create explicit career pathways through hybrid roles. If traditional entry-level positions disappear, how do people enter the organisation and develop toward senior capability? Design alternative routes that build the experience base that advancement requires.

Embed AI Literacy Across All Levels

AI capability cannot remain siloed in technology functions. Leaders across the organisation need sufficient understanding to make informed decisions about deployment, governance, and workforce impact. Middle managers need capability to supervise AI-augmented teams effectively. Front-line workers need skills to collaborate productively with AI tools.

This represents a significant organisational development challenge. Research suggests that AI training has disproportionately reached white-collar workers whilst leaving blue-collar employees behind—a gap that risks creating two-tier organisations where AI’s benefits accrue unevenly.

Make AI literacy a corporate cultural norm, not just a technical competency. The goal is an organisation where AI is neither feared nor fetishised but understood as a tool that amplifies human capability when deployed thoughtfully.


Long-Term Considerations: Eighteen Months and Beyond

Preserve Institutional Knowledge

As AI evolves and workforce composition shifts, organisations risk losing the tacit knowledge that experienced employees carry. This knowledge—about customer relationships, operational nuances, brand heritage, and organisational culture—may not be captured in systems or documentation. It lives in people.

Encourage cross-generational collaboration that transfers knowledge before experienced employees depart. Create structures—mentorship programmes, knowledge capture initiatives, deliberate overlap during transitions—that preserve institutional memory.

Be cautious about efficiency-driven headcount reductions that sacrifice knowledge for short-term savings. The employees most expensive to retain may also carry the most difficult-to-replace understanding.

Maintain Customer Trust

Transparency about AI’s role in customer interactions builds trust that sustains relationships. Customers increasingly want to know when they are interacting with AI and when with humans. Deception—or even ambiguity—risks backlash when discovered.

Communicate clearly about how AI supports rather than replaces human service. Position AI as a capability that enhances what the organisation offers, not as a cost-cutting measure that distances customers from human connection they value.

Monitor customer perception of AI-augmented services. Early enthusiasm may not predict long-term acceptance. Build feedback loops that detect erosion in relationship quality before it becomes entrenched.
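
One concrete form such a feedback loop can take is comparing post-deployment satisfaction scores against a pre-deployment baseline. The sketch below uses a 5% tolerance chosen arbitrarily for illustration.

```python
# Minimal erosion detector: flag when recent satisfaction scores slip below
# the pre-deployment baseline. The 5% tolerance is an arbitrary example.
from statistics import mean

def relationship_eroding(baseline: list[float], recent: list[float],
                         tolerance: float = 0.05) -> bool:
    """True if the recent average has slipped more than `tolerance`
    below the pre-deployment baseline average."""
    return mean(recent) < mean(baseline) * (1 - tolerance)

baseline_csat = [8.2, 8.4, 8.1, 8.3]   # pre-deployment scores (0-10)
recent_csat = [7.6, 7.5, 7.8, 7.4]     # after AI-augmented rollout
if relationship_eroding(baseline_csat, recent_csat):
    print("Alert: relationship quality slipping; review AI-handled journeys")
```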

Shape the Regulatory Landscape

Technology leaders have a responsibility to participate in the policy conversations shaping how AI deployment is governed. The choices made now—about worker protections, training investment, transition support—will determine whether AI transformation produces broadly shared benefits or concentrated gains with socialised costs.

Advocate for policies that support responsible deployment: investment in skills infrastructure, portable benefits that support worker transitions, regulatory frameworks that balance innovation with protection. Public-private partnerships can bridge gaps that neither sector can address alone.

Engage constructively rather than defensively. Organisations perceived as partners in managing transition may find more supportive environments than those perceived as externalising costs whilst capturing benefits.


The Measurement Challenge: Beyond Productivity Metrics

What Traditional Metrics Miss

Efficiency metrics capture productivity gains but may miss human costs. Headcount reductions improve ratios whilst potentially eroding customer relationships, cultural capability, and talent pipelines. Short-term savings may precede long-term capability loss.

Develop measures that capture what matters beyond efficiency (a sketch of how these dimensions might be tracked together follows this list):

Workforce composition: How is the talent mix shifting? Is the organisation building capability in AI-resistant functions whilst automating commodity tasks? Or hollowing out the experience base that develops future leaders?

Employee engagement: How do workers perceive AI’s impact on their roles and prospects? Mercer’s 2026 research shows that 62% of employees feel leaders underestimate AI’s emotional and psychological impact. Understanding employee experience helps calibrate deployment decisions.

Customer relationships: Are AI-augmented services maintaining relationship quality? Are customers more or less loyal as AI handles more interactions? What does feedback suggest about human connection preferences?

Talent pipeline health: Are development pathways producing the next generation of leaders? If traditional entry-level roles disappear, are alternatives creating the foundational experience that advancement requires?
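
One way to hold these four dimensions alongside productivity is a simple quarterly scorecard, as sketched below. The structure is illustrative; the metric names, survey scales, and example figures are all placeholders.

```python
# Illustrative quarterly scorecard pairing the productivity figure with the
# four human dimensions above. Metrics, scales, and figures are placeholders.
from dataclasses import dataclass

@dataclass
class AIScorecard:
    quarter: str
    productivity_gain_pct: float   # the traditional efficiency measure
    entry_level_headcount: int     # talent pipeline health
    engagement_score: float        # employee perception (0-100 survey scale)
    customer_nps: int              # relationship quality proxy
    judgment_led_roles_pct: float  # share of roles in AI-resistant work

    def flags(self) -> list[str]:
        """Surface efficiency gains that coincide with eroding human metrics."""
        warnings = []
        if self.productivity_gain_pct > 0 and self.engagement_score < 60:
            warnings.append("productivity up but engagement weak")
        if self.entry_level_headcount == 0:
            warnings.append("no entry-level intake: pipeline at risk")
        return warnings

q1 = AIScorecard("2026-Q1", 11.5, 14, 57.0, 41, 38.0)
print(q1.flags())
```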

Leveraging GDPval for Capability Assessment

OpenAI’s GDPval framework offers a model for assessing AI capability against real-world professional tasks. Organisations can adapt this approach internally—mapping their own high-value work products, testing AI performance against human output, and tracking capability trajectories over time.

This provides empirical grounding for deployment decisions. Rather than assuming AI can or cannot handle specific functions, measure actual performance against defined quality standards. Use the results to identify where automation genuinely improves outcomes versus where it substitutes “good enough” for excellence.
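
Adapted internally, this could look like blind pairwise grading of AI output against a human baseline on the same brief. The sketch below is an assumption about how such a harness might work, not OpenAI's methodology; the grading stub and the 70% threshold are placeholders.

```python
# Sketch of a GDPval-style internal evaluation: graders blindly compare an
# AI work product with a human baseline for the same brief. The grading stub
# and decision threshold are placeholders, not OpenAI's methodology.
import random

def blind_grade(ai_output: str, human_output: str, graders: int = 5) -> float:
    """Return the fraction of graders preferring the AI output."""
    def grade_one() -> bool:
        # Randomise presentation order so authorship cannot be inferred.
        pair = [("ai", ai_output), ("human", human_output)]
        random.shuffle(pair)
        # Placeholder: a real harness records the grader's actual choice.
        chosen = random.choice(pair)
        return chosen[0] == "ai"
    return sum(grade_one() for _ in range(graders)) / graders

win_rate = blind_grade("AI draft of SS27 trend brief...", "human draft...")
verdict = "candidate for automation" if win_rate >= 0.7 else "keep human-led"
print(f"AI win rate {win_rate:.0%}: {verdict}")
```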

Qualitative Assessment

Numbers alone cannot capture the human dimensions of AI transformation. Complement quantitative metrics with qualitative understanding:

What are employees actually experiencing? Exit interviews, engagement surveys, and informal conversations reveal dimensions that metrics miss.

What do customers value most? Formal research and front-line feedback illuminate where human connection matters and where automation is welcomed.

What is the organisation losing? Departures, retirements, and transitions may remove capabilities—relationships, institutional knowledge, cultural understanding—that will not appear in efficiency metrics until they are gone.


Conclusion: The Stewardship Imperative

The 11.5% productivity gain from AI adoption is real. It represents genuine capability that well-deployed technology provides. But productivity alone is an incomplete measure of success.

Britain’s experience—the 8% net job losses, the 32% decline in entry-level roles, the concentration of displacement on young workers—demonstrates that efficiency gains do not automatically translate to sustainable outcomes. The choices organisations make about how to deploy AI, how to support affected workers, and how to preserve human capability shape whether transformation produces lasting value or short-term extraction.

Technology leaders carry particular responsibility because they see the frontier first. They understand what is technically possible before others do. They have the earliest opportunity to shape how capability is deployed. The frameworks they recommend, the metrics they prioritise, the values they embed in deployment decisions—these accumulate into the sector-wide and economy-wide patterns that policymakers now scramble to address.

The call to action is clear:

Audit deployment patterns. Understand where AI is reducing headcount, what capabilities are being lost, and whether efficiency gains are sustainable or extractive.

Invest in people. Reskilling, transition support, and development pathways demonstrate that productivity gains and human welfare are not zero-sum.

Measure what matters. Productivity metrics matter, but so do workforce resilience, customer relationships, and the long-term health of capabilities that differentiate the organisation.

Engage with the policy landscape. Responsible deployment may earn supportive regulatory environments. Perceived extraction may invite intervention.

Lead with values. The choices made now will define whether AI becomes a force for broadly shared prosperity or concentrated gain. Technology leaders have the opportunity—and the responsibility—to shape that outcome.

The productivity paradox Britain faces reflects accumulated decisions by thousands of organisations and millions of deployment choices. The path forward lies not in resisting AI but in deploying it wisely—capturing genuine capability whilst preserving the human strengths that technology cannot replicate.

The time to act is now. The decisions technology leaders make in the coming months will shape whether AI empowers or displaces, whether transformation builds or erodes, whether the future being created serves the many or the few.

Technology leaders have the tools. The question is whether they have the wisdom—and the will—to use them well.


George Mudie is a Global CTO and CISO with over 30 years of technology leadership experience.


References

  • Morgan Stanley Research (January 2026): “AI Job Cuts Landing Hardest in Britain”
  • PwC (2025): “Global AI Jobs Barometer” and “UK Workforce Hopes and Fears Survey 2025”
  • McKinsey & Company (2024-2025): “A new future of work: The race to deploy AI and raise skills in Europe and beyond”
  • UK Government (January 2026): “Assessment of AI capabilities and the impact on the UK labour market”
  • UK Parliamentary Office of Science and Technology (January 2026): “Artificial Intelligence (AI) and Employment”
  • OpenAI (October 2025): “GDPval: Evaluating AI Model Performance on Real-World Economically Valuable Tasks”
  • Anthropic (February 2026): “Claude Opus 4.6 System Card”
  • Mercer (January 2026): “Global Talent Trends 2026”
  • Office for National Statistics (2025-2026): Labour Market Statistics
  • Bank of England (2025-2026): Governor Andrew Bailey remarks on AI and employment
  • Adzuna (2025): “UK Job Market Report”
  • House of Commons Library (2025-2026): Labour market briefings

Image courtesy of Unsplash
