LLMs, specialisms, and No Code

As spring turns into summer and the pace of industry announcements slows with the close of the year’s first conference season, we can look back at the first half of 2023 as something of a blockbuster movie. That’s not a comment on quality, but a single title – showing simultaneously on all 30 screens, all day and night – has drowned out everything else. This year’s rampaging, reincarnated dinosaurs or building-smashing superheroes – you can decide which – was the LLM, mainly in the form of OpenAI’s GPTs (with or without the chatty bit bolted to the front).

ChatGPT might have first appeared late last year, and the GPT-n series might have been around for a couple of years, but the critical mass of interest only recently rolled through our inboxes and social media feeds and onto mainstream news broadcasts – and we’ve already had a bit of a chat about those specific consequences. So let’s move on.

Instead, there’s an interesting secondary trend that you may have noticed amongst the related announcements of enterprise software vendors during the same period. Of course, everyone wanted to have their own LLM story, and many of the extensive, splashy demos focus on using generalized (or foundation, if you wish) models to produce suggestions, write preliminary marketing copy, and populate customer-facing email templates. As a veteran of too many software demos from both sides of the laptop screen, I recognize the power of a good bit of praxis. But that’s not what I’m referring to here.

Also present, though with marginally less fanfare, have been demonstrations of code suggestion, completion, or generation. That is, an LLM cognisant of the structure and terminology of your chosen development language that can, to some extent – and I’m honestly biting my lip as I type this – write your code for you. OK, if developers can please stop throwing things at me at this point: I’m not judging whether that’s good or bad. I’m also not suggesting that all this is brand new, because forms of code completion in IDEs have been around for decades. Again, that’s not my point here.

When you get into the weeds of – for example – the raft of announcements that Salesforce has made around AI, including an entirely new AI Cloud conglomeration (more on this to come, btw), the one place where it’s invested in building an LLM of its own – rather than utilizing a third party – is for Apex code generation (Developer Code Gen LLM). They’re not alone here; just looking through the recent pages of my trusty hardback paper notepad, I can see Celonis doing the same for their PQL query language and Appian doing similar for the generation of SAIL. Google announced Codey, which supports a few common languages, but also Go, GoogleSQL, and the gcloud CLI.

The specialization approach by these vendors, of course, makes sense for a few reasons, the primary one being that specialist language models are always going to form the next wave; there’s only a certain amount you can do with general models, no matter how hard you flex your prompt tuning. If you’re going to invest the time and expense to build your own as a software company, then doing what nobody else is likely to do – supporting your proprietary coding language and technical structures – is a sensible apportioning of valuable resources. It’s always a battle for vendors and their customers alike to find workers who are au fait with the vagaries of these proprietary languages, let alone to keep up to speed with the latest iterations and practices, so again, sensible points will be awarded for doing so. Then there’s onboarding newbies, maybe even non-developers, to the game, and that’s when this starts to drift out of its lane somewhat.

Eyeing this suspiciously will no doubt be the No and Low Code development platforms (nb: we cover a significant number of these as part of the Programmatics element of our Work Intelligence research). Sure, many of these employ visual programming languages, which either obscure or reject traditional programming approaches entirely. But stepping away from the how and back to the why here: if you’re looking to democratize your development or empower new workforce developers, are your incumbent suppliers about to offer you something that reduces your overall software stack even further? Do you want to shell out for more seats on your No Code platform when, pragmatically, your ultimate goal is to make a core business application – one that now comes with its own code-free alternative – function better?

Let’s be clear: none of these developer-focused LLMs comes with maturity baked into the model. And as well-intentioned or well-constructed as any of them are, none will arrive aware of the delicate mix of business and technical interdependencies, homebrew codebases, and lash-up integrations that lurk just beneath the surface of almost every enterprise. That generative AI ultimately becomes an embedded part of code-free development seems to be a given; its current form is an early evolution of what it’ll finally arrive upon. For now, it is – to coin a cliché – a faster horse.
