
Excel Tech Show 2026: AI Grows Up


At London’s ExCeL (4 to 5 March), the conversation shifted from AI hype to governance, security, and operational discipline. TL;DR:


AI is embedded in business operations. The real challenge now is scaling it responsibly.


Arrival: Less Noise, More Maturity


The Excel Tech Show isn’t Web Summit, and that’s precisely its strength.

Walking into ExCeL on 4 March, the energy felt more measured than manic. There were still glowing screens and confident product pitches, but the tone had shifted. This wasn’t about who had the loudest generative AI demo. It was about who had solved something real.


One pattern quickly emerged. Many of the more established companies hadn’t rushed to bolt AI onto their marketing pages. Instead, they had looked inward. They examined their own operational friction, experimented internally, and only once something proved genuinely useful did they turn it into a product.

That discipline shows.


When a company builds from lived experience, the edges are smoother. The constraints are understood. The demos don’t feel theoretical. They feel familiar.

As the founder of Agile Goes Ape, working as a fractional VP of Operations, I see the show through an operational lens. It all depends where you are on your journey. Some organisations are still testing use cases. Others, like the client we’re currently supporting, have successfully embedded AI tools into everyday workflows.

At that point, the challenge changes. It’s no longer about adoption. It’s about resilience.


SentinelOne: Governance Without Slowing Innovation



If AI adoption decentralises capability, it also decentralises risk.

SentinelOne’s move into AI Security Posture Management feels like a response to that reality. Organisations are already juggling multiple AI tools across departments. The exposure isn’t hypothetical; it’s embedded in daily behaviour.

Traditionally positioned against heavyweights like CrowdStrike and Microsoft Defender, SentinelOne has built its reputation on autonomous, behaviour-based threat detection. Where it’s now pushing further is into AI governance, monitoring how AI tools are used inside the organisation itself.


The demo scenarios were uncomfortably relatable: a developer pasting an API key into an LLM prompt; a finance team connecting AI to sensitive systems; customer support building automations that unintentionally include personal data.


Rather than restricting experimentation, SentinelOne wraps policy around it. It detects, redacts, and alerts, all without storing company data externally.
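The mechanics behind that kind of guardrail are easy to illustrate. This is emphatically not SentinelOne’s implementation, just a toy sketch of pattern-based detection and redaction applied to a prompt before it leaves the organisation; the key formats below are simplified examples:

```python
import re

# Hypothetical patterns for secret-like strings; production tools use far
# richer detection (entropy checks, context, verified validators) than this.
SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),   # OpenAI-style API key
    re.compile(r"AKIA[0-9A-Z]{16}"),      # AWS access key ID
    re.compile(r"ghp_[A-Za-z0-9]{36}"),   # GitHub personal access token
]

def redact_prompt(prompt: str) -> tuple[str, bool]:
    """Return the prompt with secret-like tokens masked, plus a flag
    indicating whether anything was redacted (to drive an alert)."""
    redacted = prompt
    hit = False
    for pattern in SECRET_PATTERNS:
        redacted, n = pattern.subn("[REDACTED]", redacted)
        hit = hit or n > 0
    return redacted, hit

clean, alerted = redact_prompt(
    "Debug this: client = Client('sk-abcdefghijklmnopqrstuv')"
)
```

The point of the sketch is the shape of the policy, detect, mask, then alert, rather than block, which is exactly the “govern without restricting experimentation” posture described above.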

The upside is clear: unified visibility across endpoints, cloud workloads, identity, and now AI tool usage. The trade-off? Like its competitors, it is enterprise-grade software: powerful, but not lightweight.


Still, in a world where AI usage is fragmented across tools, a central governance layer feels increasingly inevitable.



Octopus Deploy: Order in an Accelerated Engineering Culture



If SentinelOne represents governance at the security layer, Octopus Deploy does something similar for release management.


Competing with platforms like Harness and GitLab CI/CD, Octopus occupies a slightly different space. It doesn’t try to be your entire DevOps ecosystem. Instead, it focuses on deployment orchestration, particularly in complex, regulated environments.


As AI coding assistants increase engineering output, release discipline becomes more important, not less. More code means more opportunities for misconfiguration, drift, and production instability.


Octopus shines in multi-environment setups where audit trails, staged approvals, and rollback capabilities are non-negotiable. For enterprises managing hybrid infrastructure or strict compliance requirements, that structure is reassuring.

The downside is equally clear: for startups already embedded in GitHub Actions or GitLab, Octopus can feel like an additional layer. It rewards operational maturity, and can feel heavy if that maturity isn’t there.



Tonic.ai: The Quiet Power of Synthetic Data



Data access is often the silent bottleneck in AI adoption.


Tonic.ai addresses that friction by generating synthetic datasets that mirror production data without exposing sensitive information. Competitors like Gretel.ai and Mostly AI operate in similar territory, but Tonic has focused heavily on preserving relational integrity, something that matters deeply in complex enterprise databases.


For regulated sectors (banking, healthcare, fintech), compliant test data is often harder to access than the models themselves. Developers wait. Compliance teams hesitate. Risk accumulates.


Tonic removes that tension.


The appeal is obvious: safer development, faster iteration, and reduced exposure to real customer data. But synthetic data is only as useful as its realism. Rare edge cases and subtle statistical quirks can be difficult to replicate perfectly.
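The relational-integrity point is worth making concrete. The sketch below is not Tonic’s approach, just a minimal example of what “synthetic but consistent” means: generated orders only ever reference customers that exist in the generated customers table, so joins behave as they would in production (all table and field names here are invented):

```python
import random

random.seed(0)  # deterministic output for the illustration

def synthesize(n_customers: int, n_orders: int):
    """Generate two fake tables whose foreign keys stay consistent."""
    customers = [
        {"id": i,
         "name": f"customer_{i}",
         "segment": random.choice(["smb", "enterprise"])}
        for i in range(n_customers)
    ]
    valid_ids = [c["id"] for c in customers]
    # Every order's customer_id is drawn from the real set of customer ids,
    # preserving referential integrity across the synthetic dataset.
    orders = [
        {"id": j,
         "customer_id": random.choice(valid_ids),
         "amount": round(random.uniform(10, 500), 2)}
        for j in range(n_orders)
    ]
    return customers, orders

customers, orders = synthesize(5, 20)
```

Real engines go much further, preserving distributions, correlations, and cross-table statistics, but the foreign-key guarantee is the foundation that makes synthetic data usable for integration testing at all.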


This isn’t a universal requirement. But for organisations operating under strict regulatory oversight, it’s a compelling alternative to building internal data anonymisation systems from scratch.



Mend.io & Black Duck: Managing the Open Source Explosion


AI-assisted coding has accelerated development. It has also accelerated dependency sprawl.


Mend.io and Black Duck sit squarely in the Software Composition Analysis space, competing with players like Snyk and Sonatype. Their job is less glamorous but arguably more critical: tracking open-source components, identifying vulnerabilities, enforcing licence compliance, and generating Software Bills of Materials.


Mend.io has positioned itself strongly inside developer workflows, integrating directly into CI/CD pipelines and IDEs. Black Duck, with deeper enterprise roots, has long been trusted in regulated environments where licence risk is taken seriously.


The benefit is straightforward: visibility into what your software is actually built from. The trade-off is familiar to anyone in security: alert fatigue. Without careful configuration, these tools can overwhelm teams with findings that vary in practical impact.


Used well, they reduce structural risk. Used poorly, they become noise.



Redis: The Engine Behind Real-Time Personalisation



Redis has long been associated with speed: sub-millisecond data access powering everything from caching to messaging systems.


What’s evolving is its role in real-time personalisation and AI-driven engagement.

In a landscape where competitors like Memcached or managed services such as Amazon ElastiCache offer overlapping functionality, Redis distinguishes itself with advanced data structures and vector search capabilities. That makes it particularly suited to recommendation engines, real-time behavioural updates, and context-aware applications.
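To ground the vector-search point: under the hood, a vector index ranks stored embeddings by similarity to a query embedding. Redis does this server-side at scale; the pure-Python sketch below, with made-up three-dimensional “embeddings”, shows only the underlying ranking idea:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Invented product embeddings; in practice these come from an embedding model.
catalogue = {
    "running shoes": [0.9, 0.1, 0.0],
    "trail boots":   [0.7, 0.4, 0.2],
    "office chair":  [0.0, 0.2, 0.9],
}

query = [0.85, 0.2, 0.05]  # embedding of the user's current context

# A vector index answers exactly this question, just much faster:
# which stored item is nearest to the query?
best = max(catalogue, key=lambda item: cosine(query, catalogue[item]))
```

A recommendation engine is essentially this lookup repeated millions of times under latency pressure, which is why it gets pushed down into infrastructure like Redis rather than application code.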


For companies building personalised e-commerce experiences or AI-driven customer journeys, Redis becomes the infrastructure layer that enables responsiveness.


But it’s infrastructure, not a finished product. Teams still need to build the logic and models on top of it. For organisations without engineering depth, that can be a limitation.


For those that do, it’s a powerful foundation.



Maze & Endor Labs: AI Defending the Stack

As AI spreads across cloud and development workflows, security tooling is adapting.


Maze operates in the cloud security posture management space, competing with players like Wiz and Orca Security. Its pitch is simple: prioritise what’s genuinely exploitable rather than drowning teams in alerts.


By using AI agents to analyse vulnerability context, Maze aims to reduce noise, a constant frustration in modern cloud environments. The promise is smarter prioritisation; the challenge, as with any newer entrant, is proving scale and reliability against established incumbents.


Endor Labs, meanwhile, focuses on securing AI-accelerated development. Competing with Snyk and Checkmarx, it differentiates itself through reachability-based vulnerability analysis, identifying not just whether a vulnerability exists, but whether it can actually be exploited in context.


As AI-generated code becomes more common, that contextual analysis becomes increasingly valuable. The trade-off is integration complexity and the need for developer buy-in: security tools are only effective if they’re embraced, not bypassed.



The Real Shift


The most striking change at this year’s show wasn’t technological. It was behavioural.


AI adoption is no longer the differentiator. Responsible scale is.


For organisations already seeing productivity gains, the next challenge is structural resilience: avoiding over-dependence on single tools, controlling data exposure, and ensuring governance keeps pace with velocity.


And honestly, I’m glad to see this more sensible approach emerging.

The industry has had its phase of “I built this in a weekend and made a million.” That energy might be exciting, but it rarely reflects operational reality. Everything comes at a cost, especially in technology.


Yes, if you are a brand new company operating in a greenfield environment, you have an advantage. You can architect from scratch. You can design cleanly. You can embed governance from day one.


But most organisations are operating on brownfield terrain. We have legacy systems. We have existing customers. We have regulatory obligations. We cannot simply rip and replace without consequence.


And even greenfield companies aren’t exempt. If you’re operating in banking, fintech, healthcare, or any regulated sector, safeguards aren’t optional. Without the right measures in place, you’re not just experimenting; you’re putting customer data and company viability at risk.


Move too fast without structure, and you won’t scale. You’ll stall, or worse, fold.

That’s why it was refreshing to see common sense prevailing. Not hype. Not CEOs demanding “AI everywhere” without strategy. But thoughtful integration, governance, and measured ambition. The companies that stood out weren’t chasing noise. They were reinforcing foundations.


Final Takeaways


The Excel Tech Show 2026 wasn’t about spectacle; it was about sensible evolution. The conversation has moved beyond who can adopt AI the fastest to who can implement it with discipline and foresight. The real competitive edge will not belong to the loudest or quickest adopters, but to the organisations that scale AI responsibly: embedding governance, protecting customers, and building systems that are resilient enough to last.





© 2025 Agile Goes Ape. All rights reserved.
