Image of Wales Tech Week panel banner

Last week I went to Wales Tech Week because I wanted to get a clearer view of how other organisations are approaching AI. I expected a lot of talk about new technology and the latest breakthroughs. What I found instead was a reminder of the pressure AI is placing on energy, data quality and the infrastructure that has to keep everything running.

What Wales Tech Week signalled about the next phase of AI infrastructure

AI was everywhere at Wales Tech Week, and at times overwhelming, and it became clear how quickly expectations are rising for the infrastructure that has to support it.

I had assumed the focus would be on optimism and technical talk. Instead, I heard conversations about power, cooling, data quality and the limits of today’s systems. People were excited about AI, but the tension was obvious. Many organisations want to move fast, yet their platforms were never designed for the scale AI now demands.

The point landed when the panellist from EDF said “you cannot have AI without electricity”. Hearing that in a hall full of technologists grounded the whole discussion. AI is not just another application to deploy. It’s a shift in energy demand that requires long term planning.

The urgency behind that message grew throughout the event. Teams are keen to use AI, but the questions around cost, sustainability and governance are only getting louder. The event crystallised something I’d been sensing already. AI is racing ahead, and the infrastructure behind it is trying to catch up.

Why I attended and what I hoped to understand about real world AI adoption

I went to Wales Tech Week because I wanted to see how other organisations are approaching AI in practice. My day to day work sits at the point where ambition meets infrastructure, and I wanted to understand how far others had travelled on that journey.

I thought I’d hear more about the choices people are making around platforms and tooling. Instead, I heard about growth zones, grid constraints, data quality and fraud prevention. The scale of the challenge came across more clearly than any single technical detail could have.

In almost every session, the same concerns resurfaced:

  • How do we keep systems stable?
  • How do we manage cost?
  • How do we prepare for workloads that will scale faster than expected?
  • How do we stay compliant when data is moving everywhere at once?

Someone from Innovate UK captured the feeling in the room when they said “the rate of change is outpacing the capacity of organisations to adapt”. That explained why discussions kept looping back to fundamentals like energy, governance and resilient design.

I walked in thinking the value would be in the technical detail. What I found instead was a clearer sense of the strategic and operational pressures organisations are wrestling with.

The hidden truth beneath the AI hype

What became clear to me throughout the event is that AI readiness is not really about AI. It is about the foundations underneath it. Whenever people spoke about breakthroughs, the real story was the strain on compute, networks and data.

This showed up in side comments as much as on stage. I kept hearing about:

  • Cooling limits in older data centres
  • Tension between performance goals and energy budgets
  • Concerns about whether the underlying data was reliable at all

A point from NVIDIA brought this into focus. One of their speakers said “each new GPU generation is 20-25% more efficient than the last”. It was an impressive number, but the discussion that followed made it clear that even these gains are struggling to match rising demand.

The thing that stayed with me was the acceptance that AI introduces weight. It pushes already stretched systems harder than ever. To use it safely, organisations need stability, predictability and transparency. The more I listened, the clearer it became. AI depends on basics that cannot be skipped.

Why sustainable cloud choices are becoming unavoidable

Sustainability came up in almost every session at Wales Tech Week, not as a corporate message but as a practical constraint. AI workloads are heavy. They draw more power, create more heat and place pressure on infrastructure that many organisations haven’t planned for.

What the energy sector is seeing

Speakers from the energy and data centre world talked openly about:

  • Strain on the grid
  • Limits of traditional cooling
  • The need for more efficient, low carbon approaches

At one point, EDF captured the situation simply when they said “AI demand is growing faster than our low carbon capacity can keep up”. It wasn’t framed as a warning. It was the reality.

Why this matters for cloud decisions

Efficiency gains are coming, but demand is rising faster. Energy is already one of the biggest operational costs in a data centre, and AI only increases that pressure. A sustainable cloud approach is not only better for the planet. It keeps budgets predictable and helps teams avoid surprises as workloads grow.

It was impossible to overlook the conclusion. If AI is in your plan, sustainability must be there too.

Image of panellists on the stage at Wales Tech Week

What does data governance for AI really involve?

Another strong theme at Wales Tech Week was the importance of clean and trustworthy data, especially from people working in environments where accuracy is critical. It was a reminder that AI output is only ever as good as the data underneath it.

The scale of the challenge

The Companies House sessions brought this into sharp focus. One speaker noted that “our register was searched more than 10 billion times last year”. Hearing that number made it clear how much pressure sits behind every data quality decision they make.

Why this matters for AI

Many organisations want to adopt AI but are working with data that is:

  • Messy
  • Duplicated
  • Out of date
  • Poorly governed

That creates both a technical problem and a governance problem. If you can’t trust your data, you can’t trust your AI.

A shift in mindset

Speakers talked about moving from reactive cleanup to proactive validation, supported by preventative controls and continuous monitoring. That mirrors what we’re seeing with clients preparing for AI. They don’t just need compute. They need confidence.

The focus on data quality was impossible to miss. AI can’t work without it.

How are regulated sectors navigating AI with caution?

Throughout the event, people working in regulated environments shared a steady sense of caution. They are interested in AI, but they cannot afford mistakes. Their concerns were practical rather than theoretical. They deal with sensitive data, strict compliance rules and clients who expect certainty.

What regulated organisations are doing

Speakers described early steps such as:

  • Creating internal whitelists and blacklists
  • Restricting use of public AI tools
  • Exploring private or closed loop models
  • Limiting data movement across environments

One speaker on the Professional Services Panel, chaired by Liz Jones of Wales Deloitte, said “for regulated firms, public AI tools are off the table until the privacy issues are solved”.

The human side of adoption

Many people talked about using AI as a copilot, not a replacement. They want tools that improve accuracy and speed without raising risk. For IT managers, that means balancing uptime and compliance while managing new AI demand.

Regulated industries are not resisting AI. They are adopting it at the pace that safety and trust require.

What AI means for CloudOps, FinOps and engineering roles

As I listened to the discussions at Wales Tech Week, I kept thinking about what this shift means for the teams who run the infrastructure behind AI. The message was clear. Our roles are becoming even more important. AI is powerful, but it is also demanding, and it puts weight on systems that are already busy.

What CloudOps teams are facing

New AI workloads require:

  • Stronger observability
  • Tighter control of data flows
  • More predictable deployment patterns
  • Better planning for bursty, high-compute loads
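Planning for bursty loads starts with being able to see them. As a minimal sketch of the idea, the snippet below flags sustained spikes in a series of GPU utilisation samples using a rolling-window average. The window size and threshold are illustrative assumptions, not recommendations.

```python
from collections import deque

def detect_bursts(samples, window=5, threshold=0.8):
    """Return the indices where mean utilisation over a rolling
    window exceeds a threshold -- a crude sustained-burst detector."""
    recent = deque(maxlen=window)
    bursts = []
    for i, util in enumerate(samples):
        recent.append(util)
        if len(recent) == window and sum(recent) / window > threshold:
            bursts.append(i)
    return bursts

# A quiet period followed by a sustained spike, then a drop-off:
samples = [0.2, 0.3, 0.25, 0.9, 0.95, 0.92, 0.97, 0.9, 0.3, 0.2]
print(detect_bursts(samples))  # indices where the window average is high
```

In practice this kind of signal would come from a proper observability stack, but even a crude detector like this makes “bursty” a measurable property rather than a vague worry.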

What FinOps needs to prepare for

AI costs escalate quickly and not always predictably. During one session, someone from the Cloud Infrastructure Panel said “AI is the fastest growing driver of cloud cost we have ever seen”. That line captured the financial pressure many organisations are feeling.
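To make that pressure concrete, here is a back-of-envelope sketch of the kind of projection a FinOps team might run: compound a monthly bill forward at an assumed growth rate and ask how long until it crosses a budget line. The figures and growth rate are hypothetical, and this is arithmetic, not a forecasting model.

```python
def project_monthly_cost(current_cost, growth_rate, months):
    """Compound a monthly cloud bill forward at a fixed growth rate."""
    return current_cost * (1 + growth_rate) ** months

def months_until_budget(current_cost, growth_rate, budget):
    """Whole months before the projected bill first exceeds a budget."""
    months = 0
    cost = current_cost
    while cost <= budget:
        cost *= 1 + growth_rate
        months += 1
    return months

# Hypothetical: a £10,000/month bill growing 15% per month
# against a £50,000/month budget ceiling.
print(months_until_budget(10_000, 0.15, 50_000))  # prints 12
```

Even at modest-sounding growth rates, compounding means the budget conversation arrives much sooner than intuition suggests, which is why FinOps teams need these projections in place before the workloads land.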

What engineering teams must tighten

Fast iteration only works when environments are stable. AI increases the need for:

  • Clean deployments
  • Consistent pipelines
  • Clear documentation

I came away convinced that AI doesn’t replace what we do, but it did push me to think differently about the level of discipline our work now requires.

How is AI reshaping commercial models in professional services?

One of the more unexpected threads at Wales Tech Week came from the professional services world. Several speakers talked openly about how AI is already reshaping their commercial models. It was a reminder that AI changes not only how work is done, but how its value is measured.

The pressure on time-based pricing

AI tools are helping teams complete tasks far more quickly than before. For firms that charge by the hour, that creates a real challenge. Someone on the Consulting Panel put it bluntly when they said “time based pricing breaks the moment AI reduces a two day job to twenty minutes”.

What firms are shifting toward

Speakers described moves toward:

  • Outcome based pricing
  • Value based pricing
  • Clearer explanation of how AI supports delivery
  • Greater transparency to maintain client trust

Why this matters more widely

As orchestration and automation accelerate routine work, commercial models will need to evolve alongside technical ones. AI is changing how work is delivered. Pricing models will need to catch up.

Practical steps organisations can take before rolling out AI workloads

By the end of the event, it was clear that most organisations want to adopt AI, but many have not prepared the foundations that make adoption safe and manageable. The same practical steps came up again and again, and they match what we see in day to day CloudOps and FinOps work.

1. Review your hardware readiness

AI workloads are heavy and unpredictable. Check whether your current hardware can cope, including GPU availability, lifecycle plans, cooling and power capacity.

2. Map your data flows

Several speakers stressed the importance of knowing exactly how data moves through systems. One line from the Cloud Infrastructure Panel captured it clearly: “do not start with models. Start with your data flows”.
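One way to start with data flows is simply to write them down as system-to-system edges and check them against an approved set of destinations. The sketch below does exactly that; every system name is a placeholder invented for illustration.

```python
# Each key is a source system, each value a list of destinations
# it sends data to. Names are illustrative, not a real architecture.
flows = {
    "crm": ["analytics"],
    "analytics": ["ml-training"],
    "ml-training": ["public-llm-api"],  # an edge governance would question
}
approved = {"crm", "analytics", "ml-training"}

def unapproved_destinations(flows, approved):
    """List every flow whose destination is not on the approved set."""
    return [(src, dst) for src, dsts in flows.items()
            for dst in dsts if dst not in approved]

print(unapproved_destinations(flows, approved))
# prints [('ml-training', 'public-llm-api')]
```

A map this simple already surfaces the question that matters: which flows leave the environments you control, and who signed off on them?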

3. Strengthen documentation and governance

Clear configuration histories, change logs and audit trails become essential as systems grow more complex.

4. Review your cloud architecture

Look at regions, flexibility and how well your current setup can scale without losing control of cost or sovereignty requirements.

5. Establish model governance

Whitelists, private endpoints and simple usage policies can prevent confusion and keep teams aligned.
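A usage policy can start as something very small: an allowlist of approved models checked before any request leaves the environment. The sketch below shows the shape of that control; the model names are placeholders, not recommendations.

```python
# Hypothetical allowlist of internally approved model endpoints.
APPROVED_MODELS = {"internal-llm-v1", "private-endpoint-summariser"}

def check_model_allowed(model_name):
    """Raise if a model is not on the allowlist; return True otherwise."""
    if model_name not in APPROVED_MODELS:
        raise PermissionError(f"Model '{model_name}' is not on the allowlist")
    return True

check_model_allowed("internal-llm-v1")   # passes silently
# check_model_allowed("public-chatbot")  # would raise PermissionError
```

Wrapping every outbound AI call in a check like this is crude, but it turns a written policy into something the platform actually enforces, which is where alignment between teams tends to come from.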

These steps form the base of a platform that can support AI safely. Start here, and adoption becomes much more predictable.

What Wales Tech Week confirmed about the future of AI infrastructure

Leaving Wales Tech Week, I felt a mix of excitement and realism. The event showed how quickly AI is moving and how much potential it carries, but it also highlighted the weight it places on the systems underneath. In almost every session, the same conclusion appeared. AI depends on strong foundations, and many organisations are only beginning to understand what that means.

A moment in the closing keynote captured the scale of what is coming. A speaker from McKinsey said “global investment in AI infrastructure will exceed $7 trillion by 2030”. Hearing that number brought the scale of the challenge into focus. The future of AI will not be shaped by pilot projects. It will be shaped by the systems that can run these workloads reliably and responsibly.

The honesty in the room stuck with me. People were open about the strain they are seeing and the work still required to prepare. The next phase of AI will be defined by the quality of the sustainable cloud platforms that carry it. That is where the real progress will come from.