Dotcom Two

It is easy to get caught up in the ChatGPT hype and either be wowed by the recent advances in AI or be annoyed by the noise and confusion they have created. Whichever side of the fence you are on, you are right: the advances in computing are remarkable, and the ensuing chaos is just as real. But as analysts, it's our job to step back from the noise and take a stance on what it all means, and to my mind there is little doubt that, from a market perspective at least, we are in the early days of Dotcom Two.

Though it is tempting to assume that all the noise will soon die down, it likely won't, for two reasons. First, ChatGPT, GPT-4, OpenAI, Google Bard, Amazon Bedrock, and the rest are not the end game; they are just the start of something much more significant. Second, the money; or, to quote the cliché, "when they say it's not about the money, remember: it is about the money." Consider that $20B, that's $20,000,000,000, has been invested in generative AI startups in 2023, and there is no sign of that tsunami of cash drying up anytime soon.

In the short term, the most apparent area of impact in the Information and Automation Management sector is that every vendor will want to leverage APIs to add generative AI to their products, regardless of whether customers will actually use the new tech. That makes sense; nobody wants to be late to the party, and it's something new and shiny to market. It's already happening; Alkymi, Appian, Pega, and many more have already done so. Generative and conversational AI will rapidly become the successor or challenger to low-code and no-code applications: why bother with a simplified coding system when you can tell the system what you want and it will go off and do it for you? Meanwhile, the distraction caused by the focus on AI will likely mean that critical but traditional Information and Automation Management projects are delayed or re-engineered in light of these new technologies.
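Part of why every vendor is rushing to do this is how little code it takes. The sketch below is a minimal illustration of bolting a generative feature onto an existing product via a hosted API; it assumes the OpenAI Python client, and the model name and prompts are placeholders rather than recommendations.

```python
# Minimal sketch: adding a generative "summarise this document" feature
# by calling a hosted LLM API (here, the OpenAI Python client, as one example).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarise_document(text: str) -> str:
    """Ask a hosted LLM to summarise a document pulled from an existing repository."""
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder model name
        messages=[
            {"role": "system", "content": "You summarise business documents in three bullet points."},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content
```

A few lines like these are enough to demo "AI inside" at the next product launch, which is precisely why the feature announcements are arriving faster than the customer use cases.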

In the medium and long term, though, it is unclear what the implications will be, other than that they will be profound. Whatever your view of these new AI platforms, they have 100X more power than anything that came before, and if experts are correct about the impact of the next generation of Nvidia hardware, capacity will increase more than tenfold over the coming years. The eagle-eyed may have noted that I used the vague term 'power' here, for the twist is that neither those building these systems nor the AI itself knows what that power will ultimately deliver or what its impact will be. What we do know is that 'knowledge work' will be heavily affected and most likely wholly transformed over the coming years. In purely practical terms, that will have repercussions for every vendor, from those in the enterprise search and knowledge management sector, through workflow and automation, to governance and compliance. Think of it this way: what is the value of a traditional KM system or intranet when I can simply ask a generative AI to resolve my query? No doubt KM, search, and intranet vendors will vehemently disagree, but that is the reality they face. Of course, they can integrate their platforms and tools with LLM-driven AI, but ultimately it is much like the move to the cloud: they will have to re-engineer their existing systems. And here, I believe, beyond the challenges they will face, lies a rich vein of opportunity.
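One concrete shape that re-engineering can take is retrieval-augmented generation: the vendor keeps its curated index as the source of truth and lets the LLM only phrase the answer. The sketch below is illustrative only; search_index.query() is a hypothetical stand-in for a vendor's existing search API, and the OpenAI client is just one possible model backend.

```python
# Sketch of retrieval-augmented generation (RAG): the existing search index
# supplies trusted passages, and the LLM drafts an answer only from them.
from openai import OpenAI

client = OpenAI()

def answer_from_intranet(question: str, search_index) -> str:
    # search_index.query() is a hypothetical stand-in for the vendor's search API;
    # each returned passage is assumed to expose a .text attribute.
    passages = search_index.query(question, top_k=3)
    context = "\n\n".join(p.text for p in passages)
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "Answer only from the provided context; if the answer is not there, say so."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content
```

The pattern matters more than the code: the curated repository remains the asset, and the LLM becomes the interface to it rather than its replacement.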

OpenAI, GPT, and their ilk are described as LLM systems, but they are not just large; they are gigantic and generic, and the word 'large' doesn't come close to describing them. It's not just a matter of semantics; LLMs often deliver the most value when they are large enough but not too large. What I mean by that is models that are specific, that understand just one or two things in great detail, and that deliver very high levels of accuracy: LLMs trained around a particular regional legal system, supply chain requirements, or healthcare requirements, for example. The key here is accuracy and specificity; these LLMs are built and maintained using carefully curated and validated data, not just everything that could be foraged from the internet. Potentially, specific LLMs can and sometimes will work in harmony with VLLMs (very large language models), but they must maintain their independence and not just feed the larger models.
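One simple way to picture a specific LLM working alongside, yet independently of, a very large one is a router that sends in-domain questions to the curated specialist and everything else to the general model. The sketch below is purely illustrative: legal_model and general_model are hypothetical callables, and the keyword check stands in for a proper intent classifier.

```python
# Illustrative router: a domain-specific LLM handles questions inside its
# curated speciality; anything else falls back to a very large general model.
from typing import Callable

LEGAL_KEYWORDS = {"contract", "liability", "gdpr", "statute", "jurisdiction"}

def route_query(question: str,
                legal_model: Callable[[str], str],
                general_model: Callable[[str], str]) -> str:
    """Send the question to the specialist model when it looks in-domain."""
    if any(word in question.lower() for word in LEGAL_KEYWORDS):
        return legal_model(question)   # curated, high-accuracy specialist
    return general_model(question)     # broad but generic fallback
```

The point of the design is that the specialist model answers from its own validated corpus; the general model is a fallback, not the gatekeeper.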

Ultimately, we must recognise that this technological shift is profound; it's not just the latest fad or craze. There is also a much more complex discussion to be had around generative, or near-sentient, AI, which is the real end goal here. Lest you think the aim is simply to improve search functionality or to help with exam preparation, it's not; that's just a benign interface to play around with. Behind apps like ChatGPT sit massive compute stacks with grand ambitions, some of which have worrying implications for us all. It's like a huge freight train that has already left the station and is building up speed: it's going somewhere, but nobody is quite sure where, and it appears that nobody thought to add any brakes to the engine. We cannot do much about that, and governments and regulators are moving at a snail's pace to address the issues. But from our myopic focus on the world of Information & Automation Management, there is much we can do as a community to embrace the good and mitigate the bad. Still, we need to get started immediately and not be caught stationary in the glare of the oncoming train's headlights.
