AI, building trust and selling defence


Back at the dawn of summer, I wrote about my exasperation at how overreactions in the mass media were clouding the conversation. My concerns were partly fueled by the industry outwardly asking for regulatory support, and by a sense that it was creating a set of conditions under which organizations would find themselves unnecessarily fearful of even examining whether the technology had potential positive outcomes in their circumstances.

A few months later, my instinctual concerns have not gone away – and our advice to be cool but step carefully still stands. But I’m beginning to wonder whether, as the broad media narrative moves on, the industry itself is coming to see the perpetuation of fear as acceptable, and as a sales technique – perhaps even a lucrative one.

Big themes and big dreams

Writing as I am in mid-September, we’re well into tech conference season, which, much like “back to school” promotions and the appearance of Christmas stock in the supermarket, appears to begin earlier every year. All involved like to have big, bright pictures to share – generally new products or significant upgrades to those already on the market – and the big, multi-day conferences provide a broader canvas on which they can be painted. Beyond the standard keynotes, there are often lower-key sessions where topics can be discussed in more depth, away from a direct focus on the products themselves.

This week, it has been Salesforce’s annual Dreamforce conference, where the company takes over a few blocks in downtown San Francisco and runs probably the most unabashedly showbiz event of any software vendor. It has always been able to attract big names from inside and beyond the industry to speak, and in this significant year for generative AI, a conversation with OpenAI CEO Sam Altman was appended closely to the tail of the primary conference keynote. The tone of that conversation – where it’s safe to imagine the questions had been prepared in advance to a great degree – was directed toward indulging those already fearful. Sample question to Altman: “What’s the scariest thing you’ve seen in the lab?”.

Towards the conclusion of the session – 31:50 if you want to skip there – a seemingly throwaway remark about the CIA from Benioff, amid a thread about regulation, brought a response from Altman that – and I’m paraphrasing here – he could imagine worse organizations to act as judges of AI oversight. Rehearsed exchange or not, it won’t have calmed the spirits of some observers, especially those outside the United States. It’s curious that both parties believed such an exchange might be helpful to the current discourse.

Imbuing the spirits with intent

While I’d not call out anyone using the term directly, I refrain from using the word “hallucination” about a response from a generative AI model (LLM). The term imbues the model with the idea that it somehow has a creative consciousness, the conjuring of which can embroider a response beyond a set of supposed pieces of knowledge it retains. To be clear, while models ingest factual content as part of their creation, they do not know facts; they encode the relative positional relationships of one term versus another. The generation of a response that reads as complete factual regurgitation is, in fact, only proof of the strength of that positional information within the engineering of the model itself.

I will, however, suggest that accusing generative AI of “lying” or “telling lies” – both terms used by Benioff in his preceding keynote – should be avoided entirely. This goes beyond hinting at consciousness; it suggests that models display intent. That is completely incorrect: these mistakes are nothing more than “a swing and a miss,” a wrong guess in response to a prompt.

You can trust me to defend you from the monster that I created.

The irony in calling out these two consecutive events is that Salesforce, in particular, has done a decent job of fronting up to the realities of transparently melding generative AI’s capabilities with your existing organizational infrastructure. Indeed, we said as much when recently discussing its new AI Cloud capabilities in a Vendor Analysis report. This management platform, along with a raft of new and re-announced (and renamed) allied generative AI capabilities to bridge into the daily activities of the workforce, was the centrepiece of what Salesforce wanted to discuss at Dreamforce, which this year was itself billed as “the world’s largest AI conference.”

There’s a danger here of Benioff becoming Carl Denham in simultaneously wanting to celebrate both the power and potential risk of generative AI while wishing to demonstrate a singular ability to control it. 

I’m not suggesting that generative AI will break those shackles and scale the Salesforce Tower further up Mission Street, batting away aerial attempts to dislodge it. However, attempting to harness the power of nightmares to sell defensive capabilities is self-defeating if you believe the technology represents a generational opportunity for the industry, with your own company as a leader.

I doubt that this was the result of strategic predetermination. Rather, it demonstrates the early stage – and the related difficulties – we collectively face as an industry when trying to articulate balance in the use of an emerging technology, especially one which diverges from our core area of knowledge.

Building trust through explanation

Beyond these selected headlines, many sessions at Dreamforce veered much more toward the instructive side of understanding the plethora of issues around adopting generative AI. The tone of building trust through explanation and transparency is one that all vendors – whether first-party creators, third-party users, or, as is the case with many of the most prominent vendors, a combination of both – seek to adopt. As I’ve detailed above, it’s a wise path, even with occasional missteps.

For our part, as part of our continued work in the area, next month we’ll be publishing an analyst report, “Generative AI and the Desktop (R)evolution” – a guide to generative AI tools destined for the desktops of the workforce and how to plan for their use and their impact in your organization. If you’d like an early sight of the findings or a conversation on the subject, please get in contact. 
