Copilots may have misdiagnosed the problem; humans don’t do what we thought they did


Much supposition about what makes AI Copilots worthwhile, perhaps vital, is already in the rear-view mirror of the tech industry. However, customers still have to decide whether they agree with that definition of vitality.

Documents are generally symptoms that work has occurred, but rarely the object of the work itself. Yet the sheer volume of documents organizations generate tends to be conflated with their importance to the work we complete. That false importance drives us to misdiagnose the role humans play in the work itself, a misdiagnosis that is accelerating as AI is baked ever more readily into our business applications and the data on which they operate.

Earlier this year, I wrote that an incorrect assumption about basic human motivation within a workplace was at the core of a very UK scandal, one which is still a headline maker as the inquiry continues, and there’s an element of that within some of the essential thinking of what we can call our current “Copilot” generation. The incorrect assumption is that humans are always chronically slower at processing a task, and that this slowness is their primary deficiency. It also suggests that throughput is the most critical measurement of a work task, rather than simply one of the easiest to measure and compare.

Our current way of thinking about Copilots is that they assist us with the tasks that consume the most elapsed time within an instance of a process. That assumes we truly understand the overall process, which is a very big if, but let’s park that for now. Easily measured elapsed time guides us to a leap of faith: the task takes a long time because humans are inefficient at completing it, so AI should complete it for them, or at least guide them to the solution instantly. Yet in doing so, we’re potentially sacrificing every nuance of the quality of human contribution to that task at the altar of speed.

Previously, I discussed the idea of an AI usefulness quotient: it’s interesting, but is it actually useful? Now, I’m asking for a seemingly more subtle judgment: AI is faster, but is it good? I do not doubt that there’s a desire for AI tools to create good outcomes along with that increased throughput. Yes, the emphasis seems a little out of whack, fixated on the sheer volume and weight of the data, but I’m giving the benefit of the doubt here for now. Even so, we are led to believe that this data is the font of our organizational knowledge and therefore the working context for Copilots. The crux is this: if we take the mountain of documents (data) as both the outcome of work and good exemplars of that work without question, we’re in danger of missing the point. The documents and data provide evidence that humans made decisions, but rarely provide any neat encapsulation of how and why those decisions were made. Human interaction with a task typically synthesizes explicit sources – relevant linked data – and other hidden implicit sources. Furthermore, in this Copilot age, there is a tendency to see all implicit sources as biased, all bias as negative, and better results as inevitable once it is removed.

To see humans as task-processing units within broader business processes is convenient but dumb. It’s convenient because you can look at the inputs and outputs and write software that replicates what appears to be happening, well enough to fit the broad purpose. As we increasingly see, you can do that to a high degree of measurable quality with Copilots. What we cannot do, though, is replicate that human synthesis, and because we cannot, we’ve decided that removing the synthesis itself was the point in the first place.

At many points in task decision-making, the synthesis that humans apply is a feeling. Does that feel right? Does that feel appropriate? Does that feel just? Humans can add and subtract those explicit and implicit sources on the fly to compose an output that produces the right feeling. It’s not an average, and while it can be mathematically analyzed and modeled, it can’t be replicated from the data alone, mainly because it wasn’t arrived at through that data alone.

We are very much at V1.00 of Copilots – very much as assistants – but things are moving fast toward the concept of them becoming agents. Much supposition about what makes Copilots worthwhile, perhaps vital, is already in the rear-view mirror as far as the tech industry is concerned. Yet most customers have still to decide whether they agree with that definition of vitality. And those who arrive on the scene with a set of predetermined views about human efficiency will readily find promised results that reflect those beliefs. I’m just not sure how substantial that source of buyers will prove to be.

Robot image via Microsoft Copilot Designer
