AI in Utilities Customer Service: What Digital Leaders Need to Know

Written by Pam McGee | Mar 19, 2026 11:07:22 AM

In February 2026, SSEN brought together regulators, technology partners and sector peers to examine how AI should shape customer service during the ED3 period.

Three questions framed the discussion:

  • Can AI materially improve customer outcomes?
  • Where is human judgement essential?
  • How do you embed AI across customer journeys without increasing regulatory and reputational exposure?

The following themes emerged - all of which have implications for how utilities leaders plan, invest and govern AI from here.

1. Proactive Service vs Regulatory Exposure

Proactive communication has always been essential in utilities. Outage alerts and planned works updates are well established across the sector. What is changing is the expectation of prediction and personalisation.

Customers increasingly expect services to anticipate issues, deliver timely updates and reduce the need to make contact in the first place. The ambition is clear: fewer inbound queries because the right information arrives before frustration builds.

But the risk is equally clear.

A confident but incorrect answer, repeated at scale, erodes confidence quickly. In storm conditions or safety-critical situations, reliability matters more than automation. Customers want clarity. Often, they want a person.

For regulated operators, errors can't be easily contained. Complaints increase, and regulatory scrutiny follows.

Progress has to be balanced against maintaining control of the messaging.

Implication: Proactive AI in utilities customer service must be accurate, explainable and governed before it is deployed at scale.

2. AI Will Reshape the Operating Model

There was strong support for AI as an internal enabler.

Applied carefully, it can reduce administrative burden, summarise complex case histories and allow agents to focus on the scenarios that need human judgement most - complex queries, vulnerable customers, high-stakes interactions. That is where measurable service improvement lives.

But scale changes the equation.

Automation bias, skills atrophy and error propagation were all raised as operational risks. In a regulated environment, small design flaws can multiply quickly. The consequences impact complaints data, audit findings and regulatory submissions.

Embedding AI successfully requires defined ownership, oversight and operating model clarity. New roles may be required along with clear accountability and continuous assurance.

Technology is the visible layer here, but governance determines how it behaves.

Implication: AI adoption is an operating model decision, not just a technology decision. The infrastructure around the tool matters as much as the tool itself.

3. Trust, Transparency and Accountability

Utilities operate under a different social contract from retail brands.

Customers do not choose their network operator, and that changes the baseline. Expectations around fairness, transparency and accessibility are higher - and customers are less forgiving when they are not met.

Participants reinforced three priorities for building and maintaining trust:

  • Early and ongoing engagement with regulators and stakeholders
  • Clear communication about when and why AI is being used
  • A simple, visible route to speak to a human

Inclusive design matters too. Vulnerable customers and those who are digitally excluded cannot be an afterthought in AI service design.

Trust may be difficult to quantify directly, but its absence becomes visible fast - in complaint volumes, in media coverage, in regulatory intervention.

Implication: Transparency and accountability must be built in from the outset, not retrofitted once problems emerge.

Proportionate AI for a Regulated Environment

A consistent theme across the session was caution about complexity for its own sake.

Traditional machine learning often delivers greater predictability and explainability than large language models. In many regulated use cases, that makes it the more appropriate choice. The most advanced option is not always the right one.

Participants also raised the carbon footprint of AI investment and the optics of large digital programmes during a price control period. Where costs are ultimately borne by billpayers, the case for customer value must be clear and evidenced.

Implication: AI investment in utilities must be proportionate, justified and outcome-led. Ambition alone is not enough.

What This Means for ED3 Planning

For executives and digital leaders shaping ED3 submissions and digital roadmaps, the session pointed to five clear priorities:

  1. Link AI investment directly to measurable customer outcomes - not activity metrics or output volumes
  2. Design governance and oversight before scaling deployment - structure first, speed second
  3. Retain human judgement in complex and vulnerable scenarios - AI should support customer service agents, not replace the judgement that protects customers
  4. Be transparent by default - in how AI is used, where it operates and what it cannot do
  5. Prioritise reliability and explainability over novelty - under ED3 scrutiny, explainable decisions matter more than impressive demonstrations

AI increases visibility. Of competence. And of mistakes. Governance determines which one customers experience.
