Three questions framed the discussion:
The following themes emerged - all of which have implications for how utilities leaders plan, invest and govern AI from here.
Proactive communication has always been essential in utilities. Outage alerts and planned works updates are well established across the sector. What is changing is the expectation of prediction and personalisation.
Customers increasingly expect services to anticipate issues, deliver timely updates and reduce the need to make contact in the first place. The ambition is clear: fewer inbound queries because the right information arrives before frustration builds.
But the risk is equally clear.
A confident but incorrect answer, repeated at scale, erodes trust quickly. In storm conditions or safety-critical situations, reliability matters more than automation. Customers want clarity. Often, they want a person.
For regulated operators, errors can't be easily contained. Complaints increase, and regulatory scrutiny follows.
Progress has to be balanced against keeping control of the message.
Implication: Proactive AI in utilities customer service must be accurate, explainable and governed before it is deployed at scale.
There was strong support for AI as an internal enabler.
Applied carefully, it can reduce administrative burden, summarise complex case histories and allow agents to focus on the scenarios that need human judgement most - complex queries, vulnerable customers, high-stakes interactions. That is where measurable service improvement lives.
But scale changes the equation.
Automation bias, skills atrophy and error propagation were all raised as operational risks. In a regulated environment, small design flaws can multiply quickly, and the consequences show up in complaints data, audit findings and regulatory submissions.
Embedding AI successfully requires defined ownership, oversight and operating model clarity. New roles may be needed, alongside clear accountability and continuous assurance.
Technology is the visible layer here, but governance determines how it behaves.
Implication: AI adoption is an operating model decision, not just a technology decision. The infrastructure around the tool matters as much as the tool itself.
Utilities operate under a different social contract to retail brands and other consumer businesses.
Customers do not choose their network operator, and that changes the baseline. Expectations around fairness, transparency and accessibility are higher - and customers are less forgiving when they are not met.
Participants reinforced three priorities for building and maintaining trust:
Inclusive design matters too. Vulnerable customers and those who are digitally excluded cannot be an afterthought in AI service design.
Trust may be difficult to quantify directly, but its absence becomes visible fast - in complaint volumes, in media coverage, in regulatory intervention.
Implication: Transparency and accountability must be built in from the outset. Not retrofitted once problems emerge.
A consistent theme across the session was caution about complexity for its own sake.
Traditional machine learning often delivers greater predictability and explainability than large language models. In many regulated use cases, that makes it the more appropriate choice. The most advanced option is not always the right one.
Participants also raised the carbon footprint of AI investment and the optics of large digital programmes during a price control period. Where costs are ultimately borne by billpayers, the case for customer value must be clear and evidenced.
Implication: AI investment in utilities must be proportionate, justified and outcome-led. Ambition alone is not enough.
For executives and digital leaders shaping ED3 submissions and digital roadmaps, the session pointed to five clear priorities:
AI increases visibility. Of competence. And of mistakes. Governance determines which one customers experience.