Those who know me well will know that I’m a massive public transport nerd outside my weekday life. Avoid me at parties, specifically because I’ll end up talking about this. It’s not the trains. It’s more about networks and their interconnectivity: how these networks have grown over time, often through accident rather than design, and how their development tracks against our collective social history.
In between everything else, I’ve recently read Paris Marx’s excellent “Road to Nowhere – What Silicon Valley Gets Wrong about the Future of Transportation.” The book plots the path of how technology and transportation have intersected in the past, and, as you might have grasped from the subtitle, shows where those adventures have not produced universally positive outcomes. Speaking as someone who has embarked on several research trips to Silicon Valley, traveling on a combination of public transport and walking between appointments, it’s certainly a life choice for the determined (or foolhardy).
These days, however – since moving from east London to the east Kent coast – I spend the majority of my traveling time in my car. While it does have some forms of modern driver aids, it is controlled in a manner familiar to anyone who has ever trained for the task. As is explained in some detail here, expensive and publicly humiliating attempts towards full automation for road vehicles have yielded little progress (a subject also covered extensively in Marx’s work).
Stepping back, it’s clear that driving a road vehicle is a poor candidate for automation. The bare analytics of the historical progress will tell you that long before you think about the actual complexities of the task itself (or, rather, of the thousands of concurrently connected micro-tasks). The same goes for trains, unless the entire system is discrete and homogeneous. Those who like to maintain that all trains should be automated should not be allowed anywhere near a train set, let alone transport technology planning.
So why do our critical faculties desert us when deciding on a good candidate for automation? Why are we so bad at what should, on the face of it, be an easy decision to make? Do we need to automate the process of deciding what we should think about automating?
Well, the good news is that we’re better at this than those illustrations might suggest. But if you want an automated way of understanding what might be a good candidate for automation, process and task mining exists, and we’ve got an extensive report for you.
As Marx’s book points out, when the splashy attempts to introduce technology to transport go wrong, it’s often because the motivation behind the decision arises less from the nexus of technologies than from the desire to manage and control human labor. Most of the bluster in the UK about automated trains, for example, comes not from a passion for efficiency but from a perception that the people employed don’t behave as a machine would. Those factors are not unconnected: when labor is scarce or perceived as expensive, there’s a misapprehension that technology can fill that space. And if it doesn’t, we’ll force it to fit using as much capital as we can lay our hands on. Forcing technology into a space where it is not a good fit rarely brings positive outcomes.
Right now, the answer lies in smaller things.
I was reminded of this notion the other day when I was in the supermarket, scanning my groceries off the shelf and into my bags while employees did the same for baskets destined for home delivery. In those actions, we’ve got as good an illustration of human-aided computing as you’ll find. The humans select what should be picked and manage the actual physical picking and packaging; computing addresses the cost calculations and data augmentation for that order and the stock overall. Both are ideally suited for their roles in the process.
As a consumer, I choose the level of interaction I want in the process – I like to visit the physical store and look at the goods, but I also make the conscious choice to self-scan – based on my personal and cultural preferences. I also realize that there are labor-based implications in this human-computer interaction. For some, those preferences are shaped by how they interpret that state (just as I’ve studiously avoided ever using any ride-hailing apps).
It’s not just us. As we recently noted, Google chose to focus on very human-scale AI at its recent Google Cloud Next event, framing it as “human in the loop”: breaking a process into its requisite tasks and apportioning each to the most appropriate form of processing.
Back to our driving example: it is not a single process, but thousands of processes made up of thousands of nested tasks. Some are ripe for automation – overripe, in the sense that they are likely automatic in your vehicle already – while others should always be managed by the human in the loop. Automatic lighting, which adjusts to ambient conditions, is a good candidate for automation. Finding the correct gap to turn through traffic, not so much.
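One way to picture that apportionment is as a toy triage over a list of tasks. The sketch below is purely illustrative – the `Task` fields and the heuristic are my own assumptions, not anything described by Google or by Marx – but it captures the idea that only rule-based, non-critical tasks get handed to the machine:

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    rule_based: bool       # can the task be captured by fixed rules?
    safety_critical: bool  # does failure demand human judgment?

def apportion(tasks):
    """Split a process's tasks between automation and a human in the loop."""
    automated, human = [], []
    for t in tasks:
        # Illustrative heuristic: automate only rule-based, non-critical tasks.
        (automated if t.rule_based and not t.safety_critical else human).append(t)
    return automated, human

driving = [
    Task("adjust headlights to ambient light", rule_based=True, safety_critical=False),
    Task("pick a gap to turn through traffic", rule_based=False, safety_critical=True),
]

automated, human = apportion(driving)
```

Run on the two driving tasks above, the headlight adjustment lands in the automated pile and the gap selection stays with the human – which is roughly how your car already divides the work.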
It turns out that when we make the ultimate choice of whether a task or an entire process is a candidate for automation, the best arbiters are those who created and currently manage it. It’s nice to know we’ll always be good at something.