Are LLMs the next Monoliths? In software, smaller is often better.
Like the monolithic platforms before them, LLMs are a sledgehammer to crack a nut. In most cases, small models are not only cheaper but also better.
I avoid technical matters in this column, but I couldn't resist a take on this Economist article. Not because it is revolutionary or surprising, but because of the glaring parallels with my digital strategy work at Growcreate.
In the early stages of digital transformation, monoliths were necessary. Marketing teams needed fully integrated, “one-stop-shop” Digital Experience Platforms (DXPs) to tame the complexity of their operations. Eventually, these gave way to leaner, “composable” architectures: tailor-made environments assembled from best-tool-for-the-job services.
From an architectural perspective, this is ideal: composable architectures are easier to support and maintain, and are flexible enough to cater to non-standard requirements. They are also commercially savvy, helping marketing teams avoid vendor lock-in and negotiate more favourable pricing.
Are Large Language Models (LLMs) the new DXPs? Do we really need “God-like” models to build AI solutions? If not, what can replace them?
Have a nut? Get the sledgehammer.
LLMs are by definition “general purpose”. In most cases you only need 1% of their capabilities, but the cost of juggling multiple specialist tools is high enough that most people accept the bloat. You don’t need a sledgehammer to crack a nut, but it will do the job, and you like to crack all kinds of things.
Alternatives exist, imaginatively called Small Language Models (SLMs). SLMs have specialist capabilities and are optimised for specific tasks. They are cheaper to train and deploy, and consume a lot less energy.
Importantly, these specialised models can be combined to support entire workflows. Like composable DXPs, they are easier to maintain and extend: you do not need to take down the whole apparatus to upgrade or substitute a component.
Why go Large?
As with DXPs before them, the industry wants you to believe that LLMs are the only game in town so that it can lock you in. However, businesses can create a better architecture by “composing” AI environments from available SLMs. You get the best model for each task; it is cheaper and more environmentally friendly to run, and you can easily swap any component for a more suitable option later on.
OpenAI, Anthropic, and their peers may offer innovative technology, but their business models are not particularly noteworthy. Peel away the nonsense about AGI, and they are SaaS businesses trying to capture market share from each other. They may soon need to find another way: as AI adoption marches on, companies will disregard the hype and remember that small is often best.


