
Office of the Vice President for Research


Breakthrough Magazine

Q&A with Bryant Walker Smith, USC assistant law professor

As an internationally recognized expert on the law of self-driving vehicles, Bryant Walker Smith is frequently asked to weigh in on legal issues related to automated driving. But the USC assistant law professor’s expertise isn’t limited to self-driving cars. His insights into tort law and product liability, and his broader interest in what he terms “the law of the newly possible,” are helping prepare USC law students for an evolving legal landscape.

You describe your field as the “law of the newly possible.” What does that mean?

I look at the descriptive, the predictive and the normative. Descriptive is, “What is the world today?” “What is the state of technology?” “What is the state of law?” Predictive is, “How is law going to change?” “How are technologies going to change?” “How are society and society’s values going to change and what will that mean for technology and for the law?” The normative is, “What should the world of tomorrow look like, and what role should the law play in that?” It’s important to start with what we know before we decide where we want to go.

 

Let’s start with the descriptive. What areas of law and technology are on your radar?

Everyone is thinking about data and automation, and how the two combine. Specifically, I’m thinking about questions of trust and trustworthiness — earning trust and evaluating trustworthiness. I used to think that simply providing information was sufficient. That is, if a government agency released data, or a court released a judgment, or a company released information about the performance of a product, that was the extent of the obligation, and that in itself was sufficient for whatever public policy goals existed. I’ve realized that is not enough. Everybody needs to think about the way that information is used and received — and we need to understand that information as part of a narrative.

 

What specific challenges does your interdisciplinary approach present?

A field of engineering called human factors is concerned with how humans actually use the things that engineers design. So you’ve made a toaster, but are people going to electrocute themselves with it? You’ve made a road, but are people going to feel so comfortable on it that they drive too fast? For years, human factors experts have warned us, “You need to see the human in the system, you need to think about how what you do is going to work in the real world…”

So much of the dialogue between engineering and law, between technology and society, is incremental. We don’t ever have a chance to step back and say, “Well, what should be?” rather than just, “What is?” It’s useful to set out benchmarks, to say, “This is what we want the world to look like,” and then to evaluate those benchmarks in a year, five years, ten years, and say either, “Our vision of the world has changed, and that’s OK,” or, “We’ve gone astray.”

 

The human factor is a big variable, though.

It is, and it’s one that tort law deals with all the time — think about consumer misuse and abuse, warnings and instructions — but human factors have not been fully appreciated in either the engineering or the judicial realms. I think we’re coming to a greater appreciation for the point these specialists have been making for a half century — that the same expertise needs to be applied in the domain of data and information.

 

But humans are not all the same. How do you adjust for such a squishy variable?

Everyone from designers to regulators is puzzling over how to treat non-deterministic systems, when you have machines where you’re not sure what the outputs will be even when you know the input. With most machines, if you know the inputs, you know the outputs. Press on the gas, and you know how fast you’re going to go. Increasingly complex systems are non-deterministic: there are so many inputs that you can’t understand them, or the interaction between inputs within the system can’t be fully understood. So you say, “How do we possibly regulate these non-deterministic systems?” Well, the law has been doing that for millennia. The human is the ultimate non-deterministic system — and that may actually offer a useful analogy for how we regulate these new, complex non-human systems. This is where law and engineering so desperately need the insights of psychology and neurology and social science. This is one reason why the humanities are so important. We should really be pulling from those domains that don’t have perfect answers but probably have better answers.

 

You’re well known for your work relating to law and self-driving cars. We’ve already started down that road — we have cars that help us park, for example. At what point do we even call them self-driving?

People talk about the self-driving car as a technology, singular, but, really, it’s this diverse set of technologies, and applications of those technologies. When I got into automated driving in 2011, I joined SAE International (the Society of Automotive Engineers) and became very involved in their efforts to define levels of automation. As a technical matter, not a legal matter, once you can take your eyes off the road, we say the system is highly automated. People imagine sleeping in the backseat and our cars driving us. That’s fully automated driving, and that’s a ways off, but there is a lot of lower-hanging fruit.

The first path is increasing driver assistance. Those systems will do more and more of the real-time driving task. The human can take one hand off the wheel, then their feet off the pedals, then both hands off the wheel — and then maybe they can take their eyes off the road for a little bit. That takes us to the mushy middle of automation that engineers and lawyers have to deal with — imperfect people misusing these systems in all kinds of ways. Another path is increasing safety systems, introducing intervention systems that won’t act until a crash is imminent. These are going to get more sophisticated and start intervening earlier, so people might still think they’re driving, but actually they won’t be, because these systems will be intervening so often.

 

What fascinates you about the law as it relates to emerging technologies?

It’s an easy way to explain all the ways that life has changed and is changing, and to be a part of that excitement. Think about all of the innovation, for good and bad, and all of the people who have had the opportunity to shepherd or navigate those changes. That’s a privilege. I’m both intellectually fascinated by how that happens, and also, practically, as a lawyer and an engineer, very interested in being part of that process. Also, I’ve always been interested in complex relationships and understanding systems, recognizing that everything is ultimately part of one system, even though we draw artificial boundaries. New technologies give me a chance to see those relationships, the way that new ones emerge, or are strengthened, or are challenged.

