Is This Where the iPhone and AI Are Taking Us Next?
“I said, ‘Be careful, his bowtie is really a camera.’” America, Simon and Garfunkel (1968)
Eighteen years ago, Apple co-founder Steve Jobs took the stage at the Macworld Expo at the Moscone Center in San Francisco to introduce the iPhone to the world. The Expo was the brainchild of Peggy Kilburn, a conference planner, who knew the power of bringing people with like passions together in a large room. The Expo included a keynote address by Jobs that became a template for the technology industry. The basics of a great keynote? Proudly tout your successes; introduce new products and ideas with great fanfare; convey your incredibly clear and compelling vision for the future; and end with a “one more thing . . .” moment.
There is always a lot of hype surrounding these annual tech announcements, and this was certainly true in 2007. And there is intricate theater and choreography too. On that day in 2007, for Jobs, the aesthetic included John Mayer’s “Waiting on the World to Change” as a walk-up song. It included Jobs’ signature black turtleneck and blue jeans and just enough scruff so that everyone knew he needed a shave. It included thousands of cheering fans in the Moscone Center and a presentation screen at the back of the stage the size of a basketball court. Jobs said during that keynote, “Every once in a while, a revolutionary product comes along that changes everything.”
Yes, hyperbole is endemic to Northern California. So much so that Theranos founder Elizabeth Holmes convinced a federal judge there that she should be allowed to argue it as a defense to her fraud indictment. Just puffery, she said. Jobs’ 2007 keynote, though, wasn’t that. Eighteen years later, no one doubts that the iPhone profoundly impacted – and continues to impact – society in ways large and small. It made computing and the Internet accessible in a portable and user-friendly way. It spurred mobile app development and in doing so, transformed the way we communicate and consume media, and made our attention a commodity to be extracted by greedy capitalists. Many believe it led to the rewiring of our brains, an anxious generation, and an epidemic of mental illness.
Jobs’ 2007 keynote may have actually understated the revolution that was to come. But one understatement does not reshape an industry so fluent in exaggeration and hyperbole. Today, the hyperbole is artificial intelligence (AI). We are told with great confidence that it will upend nearly all aspects of civilization. Elon Musk believes so strongly that AI can totally remake our government – just as autonomous cars are coming any day now – that he is haphazardly taking a chainsaw to agencies and departments across the Executive Branch. We hear how AI will soon take all of our jobs, that the robots are coming for us, that they will end up as our overlords, and that we are all doomed.
Later this year, the Federal Sentencing Reporter will publish a double issue on AI and criminal justice, an issue I edited that includes some really interesting articles and authors. I’m generally not one to buy into the fear and the hype, at least not totally. But AI is certainly here now, and it is already impacting our justice system. It is a force to be reckoned with. Stay tuned for more on that.
- - -
When I ask lawyers who are about my age if they ever use any of the popular AI products, it turns out almost none have ever even tried one. When I speak with my students about AI – and I now routinely do on the first day of class – I get different responses. From the technology early adopters, there’s pride, defiance, and a declaration that it would be professional and educational malpractice not to run a work product through AI before finalizing it. For the other students, there is confusion about how best to use AI and often a fear that, if they use it, they will be accused of cheating or will risk inaccurate work product due to hallucinations or other problems with the technology.
We are still in the early innings of the AI (r)evolution. Even as artificial intelligence has exploded onto the scene, it is still unclear how it will integrate into our phones and other devices and if and how we will use it in our everyday lives.
But I did get a hint of what it all might look like a couple of weeks ago, when I attended a version of the tech expo: Axon Week. Axon is the company formerly known as TASER International. It develops technology – hardware and software – for the military and law enforcement, and besides Tasers, it is probably best known for its body-worn cameras. The revolutionary part of Axon’s product line, I believe, though, is not the company’s hardware but rather the cloud-based software platforms it is creating, which allow all the data we’re now collecting – including camera feeds – to be combined with artificial intelligence, building an ecosystem that provides law enforcement, in a euphemism for the ages, “unified awareness.” It’s intriguing and scary stuff at the same time. It will be the subject of a future essay.
Axon Week is a marketing and training conference where public and private sector safety professionals – mostly cops and capitalists – gather to learn about and discuss new technologies, best practices, ethical challenges (yes, there’s some of that too), and all kinds of strategies for improving public safety and making a buck at it. It follows the Macworld Expo model, with a keynote, a huge audience in a convention center – this year, Axon Week was held in the Phoenix Convention Center – lots of boasting about the company’s successes, and the introduction of new products and services together with company partners.
I was asked to come to the event to speak at the “Legal Track” and to discuss how AI is making its way into criminal justice practice and how prosecutors, defense attorneys, judges, and other criminal justice professionals can effectively deal with the change already underway.
But before my session, I attended the keynote, which was presented by Axon CEO Rick Smith. It was eye-opening, and for anyone interested in how technology is changing law enforcement and the implications of those changes for privacy and civil liberties, I recommend watching the entire keynote, which I’ve pasted in below. It included presentations by Jamie Siminoff, the founder of Ring, the doorbell company, on how its partnership with Axon will enable law enforcement to request video from Ring users during investigations; by Phil Thomson, the CEO of Auror, a company that makes software to help prevent retail crime, on streamlining crime reporting and analysis through the Axon ecosystem and connecting retail workers, in particular, directly to police; by Andrew Frame, the founder of Citizen, a company that makes an app for people worried about crime in their communities, on how its partnership with Axon will allow law enforcement to activate nearby Citizen app users for real-time alerts and live video; and by Ian Aaron, the CEO of ubicquia, on the installation of its smart cameras on city streetlights, providing city-wide camera and license plate reader coverage that integrates seamlessly into Axon’s platform. These are some of the steps Axon is taking to provide law enforcement with “unified awareness.” Like I said, it’s intriguing and scary stuff at the same time.
But it was when I saw the unveiling of Axon Assistant – an AI-powered voice assistant that runs on Axon’s body-worn camera – that I thought, for a moment anyway, that I saw the future, and not just for law enforcement. Axon’s Senior Vice President Ran Mokady demonstrated the product and talked about three specific features: Real-Time Translation, Policy Chat, and General Q&A. Real-Time Translation allows police officers to communicate in over 50 languages without looking down at their phone, using an app, or waiting for an interpreter. Policy Chat allows officers to ask about department policies and get a voice response immediately. And General Q&A lets officers tap into the Internet to ask almost anything, all by voice. Axon plans to add other features in the coming months, including searches of criminal history and other databases and lots more. Here’s an excerpt of the demonstration.
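To give a rough sense of how a feature set like this might be organized in software, here is a minimal, entirely hypothetical Python sketch: a transcribed voice request gets routed to one of the three capabilities described above. Nothing here comes from Axon; the names and keyword logic are placeholders for illustration only.

```python
# Hypothetical sketch only: a toy router that sends a transcribed voice request
# to one of three capabilities. This does not reflect Axon's actual software.

def real_time_translation(utterance: str, target_language: str = "Spanish") -> str:
    """Stand-in for translating speech into the target language."""
    return f"[{target_language} translation of: {utterance!r}]"

def policy_chat(question: str) -> str:
    """Stand-in for answering a question from the department policy manual."""
    return f"[policy answer for: {question!r}]"

def general_qa(question: str) -> str:
    """Stand-in for an internet-connected general question-and-answer model."""
    return f"[general answer for: {question!r}]"

def route(transcript: str) -> str:
    """Crude keyword-based intent routing over the transcribed request."""
    text = transcript.lower().strip()
    if text.startswith("translate"):
        return real_time_translation(transcript.strip()[len("translate"):].strip())
    if "policy" in text:
        return policy_chat(transcript)
    return general_qa(transcript)

if __name__ == "__main__":
    for spoken in [
        "translate please step out of the vehicle",
        "what is our policy on vehicle pursuits?",
        "how far is the nearest hospital?",
    ]:
        print(route(spoken))
```

In a real product each of these stubs would sit behind a speech pipeline and a model call; the point is only that the routing and the use cases are decided in advance, not by the officer standing in the street.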
For over a decade, the big tech companies have been trying to develop a product that puts the kind of information and applications you get on a smartphone into something wearable and more easily integrated into our lives as we go about living them. Staring down into our hands at a phone interrupts living, at least in the here and now. It is antisocial and isolating, in that moment and that place at least, and the tech companies have been trying to solve that problem with wearables for a long time.
Google Glass is probably the most well-known of these efforts. It was a failure: launched in 2014, it was pulled from the market in 2015, and later comeback attempts failed too. There were many reasons for the failures. The price of the product was certainly too high, and there was no serious marketing launch.
But it strikes me that there were three reasons Google Glass didn’t solve the antisocial/isolation problem. First, it did not have a well-defined set of functions and uses that could attract a large customer base. It had only two real stated purposes: it could quickly capture images and search the Internet for anything. There was no regular, defined, and practical application for the product. Second, the camera was front and center, directly in view of anyone who came close to the user. The camera drew harsh criticism, particularly about how it could be used in public spaces with users recording video and images at any time. It was just too creepy. I went to a farmers’ market recently, and there was a drone flying overhead. That drone turned what was supposed to be a peaceful, idyllic, early Sunday morning experience into one of surveillance. This is what Google Glass did to everyday human interaction. And third, Google Glass interfaced with a user’s vision, making multitasking difficult and dangerous. It didn’t really solve for the isolation brought on by the phone.
Axon Assistant seems fundamentally different in several respects. First, it is being built incrementally, for particular use cases and for a particular customer base. In that decision lies an important lesson, I think, for developers. The uptake of AI, among the older set at the very least, has been slow. Part of the reason is that foundational AI products – ChatGPT, Claude, Gemini – require the user to understand the product and its capabilities and then figure out how to use the tool to add value to their lives. The user must determine how the tool will help them do their work more efficiently or live their lives more happily.
This is basically the way most of the current AI assistant products, like Google Assistant, Amazon Alexa, and Apple’s Siri, work. Axon, on the other hand, in developing Axon Assistant, is serving as an intermediary between the foundational AI tool and the end user: it determines the best use cases for the AI assistant, builds those capabilities and use cases into the product, and gives the end user easy training and access to them. If the big tech companies do this type of engineering more and more, it strikes me that these AI assistants will be adopted at greater scale.
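The intermediary pattern is simple enough to sketch. Below is a small, purely hypothetical example of what it means to build the use cases into the product rather than leave them to the user: a thin wrapper that pairs each predefined task with its own instructions before handing the request to a foundational model. The model call is a stub; an actual product would swap in a real API and real prompts.

```python
# Hypothetical sketch of the "intermediary" pattern: the product, not the end
# user, decides the use cases and the instructions that go with them, then
# hands the request to a foundational model. The model call below is a stub.

from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    instructions: str  # the prompt the end user never has to see or write

USE_CASES = {
    "translate": UseCase("Real-Time Translation",
                         "Translate the following speech accurately and neutrally."),
    "policy": UseCase("Policy Chat",
                      "Answer only from the department's policy manual and cite the section."),
    "ask": UseCase("General Q&A",
                   "Answer concisely, so the response can be read aloud."),
}

def call_foundation_model(instructions: str, user_text: str) -> str:
    """Stand-in for a call to ChatGPT, Claude, Gemini, or another foundation model."""
    return f"[model response given instructions {instructions!r} and input {user_text!r}]"

def assistant(task: str, user_text: str) -> str:
    """The end user only picks a task by voice; everything else is pre-built."""
    use_case = USE_CASES[task]
    return call_foundation_model(use_case.instructions, user_text)

if __name__ == "__main__":
    print(assistant("policy", "When can I initiate a vehicle pursuit?"))
```

The design choice, in other words, is that the hard part – knowing what the tool is good for and how to ask it – ships inside the product.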
Second, Axon Assistant uses a fully voice-and-audio interface rather than a visual one, which seems far more conducive to socially healthy multitasking. It is designed to allow users to stay present – at least partially – in their immediate environment while engaging with the Internet. As you see in the demonstration, the user’s head is up while using the product, and he is engaging with those in his presence. This is a must for a cop on the street, and probably good for the rest of us and our society too. And because interrupting an AI bot and making it wait is not impolite, the priority can become the real human standing in front of the user and not what the Internet is serving up at that moment.
Third, the interface doesn’t require any new hardware, just AirPods or other earbuds with a built-in microphone. The phone can sit in your pocket – it doesn’t need to be sitting on your chest, as in the demonstration, or even in your hands. But what about the camera and the issues of privacy? The creepiness? For law enforcement, the camera can be on the officer’s chest, as body-worn cameras now seem to be a socially accepted part of law enforcement equipment. For the rest of us, I suspect the camera will need to be more discreet to be accepted. Maybe in a bowtie. Maybe in a lapel pin. Maybe we’ll just skip the camera.
But one way or another, it strikes me that this might very well be where the iPhone and AI are taking us next. Yes, we’ll still be partially engaged with the Internet most of the time, but with our heads up, our eyes active, and maybe a bit more ready to engage with the people in our presence. Is it revolutionary? I don’t think so. Not like the 2007 iPhone introduction, anyway. But can it be an improvement – allowing most of us, most of the time, to keep our heads out of our phones, our phones in our pockets, and ourselves at least a little more engaged with the people in front of us, all while still satisfying our desire to stay tied to the Internet? Maybe.