Cutting Through AI Noise - Generative AI Strategy for Business Leaders

  • Writer: Beyond Team
  • 11 minutes ago
  • 6 min read
A salt cellar: a reminder that some advice is best taken with a pinch of salt

At Beyond, every client I talk to lately is wrestling with the same question:


“Where should we place our bets on generative and agentic AI?”


At the heart of this question is the problem that everyone’s got a theory. Vendors, big consultancies, analysts, tech firms: they’re all backing their own take, which for the cynics amongst you might sound suspiciously like a sales pitch dressed up as hands-on experience or thought leadership. The result is a lot of noise and a lot of hype, leaving everyone a bit short on clarity. Some of it we should probably be taking with a pinch of salt.


So I decided to dig. I read deeply, with the help of my friendly ChatGPT, a range of pieces from advisories through to the big-name consultancies, including my alma mater. That list includes Gartner, McKinsey, BCG, MIT Sloan, HBR, NIST, and Stanford. I pulled out what seems real, useful, and tested versus merely theoretical, and what passed muster with my plausibility agent.


With scruffy notes and a lot of cutting and pasting, I got help to summarise what those voices mostly agree on, where they differ (and why I think that is), and what I think you should consider doing next.


What Most Are Saying About Where You Should Place Your Bets on Generative and Agentic AI

Five common themes seem to be emerging where most voices agree:


Choose wins you can actually deliver. Gartner’s advice (full disclosure: we partner with them) is pretty simple: focus on use cases that are low risk, moderately valuable, and highly feasible. Stuff like customer support assistants, marketing content help, knowledge retrieval. You don’t need a moonshot on day one.

Don’t let the “scale or get left behind” push get to you. Unsurprisingly, many big consultancies talk as though you must be scaling yesterday to be credible, or the sky will fall on your head. That’s a bit cynical, especially when that message helps sell bigger transformation deals. But the smart organisations we know aren’t rushing blindly. They scale when the business systems, governance, and confidence are ready. That feels pragmatic, real, and generally just common sense. It also suggests they don’t just throw money, usually someone else’s, at a problem.

Foundations are non-negotiable. Whether it’s BCG, MIT Sloan, or a more independently minded HBR, they all circle back to data, roles, change management, and accountability. If your model is shiny but your data is fragmented, or people don’t trust it, you’ll struggle to get traction. No argument here.

Responsible AI has to be baked in. NIST’s AI Risk Management Framework is already becoming the baseline for organisations serious about safety, bias, explainability, and accountability. It’s not a PR move. It’s risk management. And skipping it is like constructing a building without fire safety. (NIST is the National Institute of Standards and Technology in the States, and I always keep an eye on what they have to say as a generally neutral voice.)

Agentic and multimodal AI are coming, but they’re not turnkey or oven-ready. AI that acts on your behalf (agentic) and AI that can interpret images, voice, and text together (multimodal) are already on the horizon. Gartner, Stanford, and others flag these as strategic bets. But trust, tooling, and standards aren’t mature enough for everything to go there yet.


Gartner’s POV

We work with Gartner at Beyond, and I think they can be pretty good at cutting through the hype on some of these things. In essence, their perspective boils down to three things:

  1. Win now — pick cases where you can prove results.

  2. Get ready for what’s next — trial domain-specific, multimodal, and agentic tech in safe settings.

  3. Avoid getting lost in the bells and whistles — half the industry is distracted by tools and opaque architectures. Clarity, metrics, alignment win.


They also warn of “agent washing”: vendors overstating autonomy and promising capabilities their systems can’t reliably deliver. Some analysts even predict over 40% of agentic AI projects will be dropped by 2027 due to weak ROI or poor scaling.


I like this because it just feels pretty sensible, and IMHO the last thing we need is a tonne of over-excitable new ideas adding more noise to our ambitions to grow or improve our businesses. There’s a recent post I wrote on ERP bloat and the parallels to some of the burden data and AI are already adding to how we work, which I will add to the bottom of this article.


Where They Disagree, and Why You Should Listen Closely



Fast vs. careful - Some voices push: go fast or be left behind. Others warn: if you go too fast before you’re ready, you’re building tech debt and chaos. I reckon the truth is probably somewhere in between: move rapidly in arenas you understand, more cautiously elsewhere.

Buy vs. build - Large consultancies tend to promote big ecosystems, full integration, platform bets. Independent voices tend to say: “Build what you must, partner for the rest.” It’s about control, flexibility, and ownership. Sorry, but how much longer do we need to hear about huge transformation programmes? They only serve one or two purposes: to line the pockets of bloated consultancies and to hasten the demise of what would otherwise have been perfectly capable CEOs.

Gains, but with new costs - A theme rarely emphasised enough: yes, AI yields productivity gains, but in practice those gains often get partially consumed by friction. Teams find themselves checking outputs, catching hallucinations, redesigning processes. The net uplift is there, just messier than many POVs make it sound. We have clients in exactly this predicament now, even with basic use cases such as data engineering. The joy of their jobs has been replaced by AI, and they are now glorified proofreaders… :-(

Hidden risks, hidden effort - Bias, audit trails, trust, user resistance, governance: these “soft” issues absorb time, especially if you haven’t planned for them. Many deployments ignore them until they become crises.


What You Should Focus On Next (12–18 Months) for Generative and Agentic AI

Here’s what I believe is directionally smart and, I trust, tempered by realism:


1. Pick your battles - Choose 3–5 use cases tied to your business’s real levers (cost, revenue, service, churn). Don’t spread too wide. Focus.

2. Test, measure, iterate - Treat early pilots as learning labs. Measure both the upside and the drag (rework, corrections, trust, audit). Learn fast.

3. Explore but don’t overcommit - You can experiment with agentic and multimodal AI, but keep them constrained. Don’t let hype push you into big bets until you’ve got the plumbing to support it.

4. Support adoption like a product launch - Tools don’t change behaviour. You need communications, incentives, training, role redesign. That’s where a lot of value leaks if ignored. These tools don’t just plug and play; the hard work, as ever, is in getting people to change how they behave.


My Honest Take on Generative AI Strategy for Business Leaders

There’s a lot of bullsh*t out there, especially from big consulting firms with their fear-of-missing-out rhetoric. Some of it reads like marketing for pile-it-high transformation programmes. I do get why: it sells. But we all know it’s not always what works.


I trust the quieter voices more, the ones not selling the next big thing, but the ones doing the hard work of deployment. Their view is more grounded. More “this might not work perfectly first try.” More “you’ll need to adjust.”


Yes, we’re in a moment of opportunity. But we’re also human. We make trade-offs, we get tired, we need guardrails. As you embark on your generative AI strategy for business, let’s not let hype carry us away from the essentials.


Let’s stay clear, cautious, and focused on doing what we can well. Move when we’re ready. And don’t measure success only by how fast you scale, because if you scale unsustainably, you’ll just fall harder later. Someone somewhere came up with the concept of IT legacy, and it features everywhere as a barrier to progress; we now have the same thing in data, and in AI too. Time to think a bit differently.


We don’t need to race to look good. We need to build something that works. And that means keeping it real, staying grounded, and being selective with the noise.

Because after all we’re only human.


References & Further Reading

  • Gartner, Generative AI Use Cases Across Industries

  • NIST, AI Risk Management Framework (AI RMF 1.0)

  • Reuters, Over 40% of agentic AI projects will be scrapped by 2027

  • MIT Sloan, BCG, HBR articles on AI deployment, change management, and adoption

  • Stanford HAI, AI Index Report 2025


About Beyond

At Beyond, we believe in stripping away hype and putting data and AI to work in real, measurable ways. We help organisations pick the right bets, build the right systems, and scale with discipline—not drama.

