‘Corporate affairs needs to be at the table when AI governance is designed’

The first AI-related crisis in your sector is unlikely to be a software malfunction, which is why corporate affairs needs to be embedded in AI discussions from the outset.


Within the next decade, many knowledge-driven organisations will operate with a workforce that is not entirely human. AI agents will sit alongside employees, embedded in systems, assigned to functions and responsible for defined outcomes.

Organisation charts will no longer just list people; they will include digital agents working across finance, compliance, marketing, customer service and, inevitably, communications itself.

These are not chatbots. AI agents are the next step beyond generative AI. If generative systems respond to prompts, agents initiate action. They execute tasks, monitor risk, trigger workflows and escalate issues autonomously within defined guardrails. In effect, they are digital colleagues operating at scale.

This shift is already under way. Microsoft is moving from individual copilots to multi-agent orchestration across enterprise workflows. Financial institutions are deploying autonomous systems to detect fraud and manage compliance in real time.

Moderna went further, merging HR and technology leadership under a chief HR and digital officer, signalling that AI is not simply a technology upgrade but a redesign of the workforce itself.

AI is not just a productivity programme 

The framing of AI as a productivity programme is already outdated. If a significant share of productive capacity is non-human, this is not just about efficiency. It is about identity and trust.

Employees will not simply need training on new systems. They will need clarity on where they add value in a hybrid model where execution is increasingly automated. Engagement strategies will have to move beyond capability building towards confidence building. 

Internal communications will need to address uncomfortable questions directly: What does career progression look like when machines handle routine cognitive work? How is performance assessed when output is co-created with AI? What remains uniquely human here?

Handled well, this can be empowering. Mishandled, it breeds anxiety and disengagement.

Customer touchpoints will also change fundamentally. When AI agents handle front-line interactions, make credit decisions or resolve complaints, the brand is no longer expressed solely through human behaviour. It is expressed through algorithms.

Customers will expect transparency about when they are engaging with a machine, how decisions are reached and who is accountable if something goes wrong. In that context, clarity becomes a reputational necessity, not a technical detail.

Corporate affairs can become either central or reactive

In many organisations, AI deployment still sits with digital and operations. HR focuses on reskilling. Communications is asked to announce the rollout once decisions have been made. That sequence reflects an outdated view of transformation.

When AI influences hiring, lending, safety or risk decisions, scrutiny intensifies. Regulators are drafting governance frameworks. Boards are asking about exposure. Investors are beginning to seek reassurance on oversight and accountability. 

The first AI-related crisis in your sector is unlikely to be a software malfunction; it will be a failure of transparency or judgement.

If corporate affairs is not at the table when AI governance is designed, it will spend the next decade defending decisions it did not shape.

The role of the chief communications officer must therefore expand. Corporate affairs should be embedded in AI governance boards, workforce redesign discussions and accountability frameworks from the outset. It must help define how hybrid teams are described internally, how leaders model responsible AI use and how disclosure is handled with employees, regulators and investors.

If, as some suggest, half the workforce could soon be AI, the organisations that lead will not be those that deploy the most agents, but those whose corporate affairs teams define clearly how human value, accountability and trust operate in a hybrid system. 

In the AI era, reputation will not be managed after deployment. It will be designed into the architecture from day one.

Sheena Shah is a Partner at Thoburns and a Committee Member of the CIPR Corporate and Financial Group