Published: Sept. 27, 2022
Margot E. Kaminski

Associate Professor Margot E. Kaminski teaches, researches, and writes on law and technology. Her groundbreaking work has focused on privacy, speech, and online civil liberties, in addition to international intellectual property law and legal issues raised by artificial intelligence (AI) and robotics. She also serves as director of the Privacy Initiative at Silicon Flatirons.

What initially attracted you to this area of the law?

I worked in publishing after college, right around the time that e-books were becoming popular. That, along with the rise of social media, got me interested in how we arrange existing legal rights around new technologies, from speech to privacy to intellectual property rights.

Later, while in law school, I interned for a summer at the Electronic Frontier Foundation (EFF), a nonprofit in San Francisco that describes itself as "defending civil liberties in the digital world." There was a team there working on national security surveillance litigation. That really opened my eyes to the central significance of human rights in the digital age. But I've always been interested in the nitty-gritty aspects of regulation, especially the role that transparency can play. I came back to law school and co-founded a clinic, the Media Freedom and Information Access (MFIA) clinic, which litigates government accountability and transparency cases, including cases involving new technologies. We found ourselves asking who should count as a journalist, and what role information technologies were playing in the face of existing power disparities, both as tools of surveillance and as tools of accountability.

By the time I became a professor, I had really homed in on privacy. It's such a fast-moving policy space, with big implications for democracy and individual freedom. I spent a few years working on the privacy issues raised by unmanned aerial vehicles (UAVs), or drones, and then got the opportunity to go to Europe through a Fulbright grant. That's when I turned to working increasingly on comparative data privacy law, with a focus on automated decision-making systems and AI.

Have legal issues raised by AI and robotics changed since you graduated from law school?

Gosh, yes. When I was in law school, many of these questions felt ahead of the curve or hypothetical. Now we have companies using facial recognition (a type of computer algorithm) to scan applicants' faces to try to determine emotions and extrapolate personality traits, for example. We have government agencies using algorithms, including AI systems, to try to allocate benefits or catch fraud. The use of and investment in AI systems is everywhere, and lawmakers are taking the potential harms of AI systems seriously. Colorado, for example, just enacted a new law on facial recognition.

You've written extensively about the role of AI algorithms in decision making. How do you see the future of balancing decisional authority between humans and machines?

This is a hard one. I don't see my role as predicting the future, necessarily. I'm more interested in trying to figure out what the law can do to ensure that human values stay on the table: that as we increasingly create and use new sociotechnical systems, we put in place whatever's necessary to make sure we don't lose sight of what matters. I definitely believe, for a number of reasons, that there are some decisional realms where we will always use humans. Legal decisions, for example, aren't just about correctness and efficiency. Legitimacy, justification, accountability, even a respect for the dignitary rights of the person affected by a decision. These are all reasons why law, at least as an ideal, isn't suited to automation.

The most interesting problems aren't about whether to use a machine or a human, but about how to get them to work together. For example, I've coauthored this recent article on "humans in the loop," or the people involved in automated decisions. Often, well-intentioned lawmakers will look at a decision made by an AI system and try to solve some set of perceived problems by requiring that a human be involved. It's not that humans are worse decision makers than machines; in fact, humans still do a lot of things, like crossing contexts or dealing with edge cases, much better. But putting a human in the loop thoughtlessly actually creates new problems. Hybrid human-machine systems have known weaknesses and can be subject to complex failure cascades. So if we're going to put a human in the loop of particularly significant automated decisions, we have to know why we're putting her there, and set her (and the system) up to succeed.

You recently worked with Colorado Law's Samuelson-Glushko Technology Law & Policy Clinic to develop comments responding to the Attorney General's Pre-Rulemaking Considerations for the Colorado Privacy Act. Tell us about that work.

I feel so lucky to be at a school with a tech law clinic! Professor [Blake] Reid '10 is a joy to work with, and his students (who are often also my students, from other classes) take their work very seriously and produce impressive and important output. This most recent project, responding to the Colorado AG's office on the Colorado Privacy Act, is a great example. Two clinic students worked to exhaustively identify aspects of the act that could benefit from focused rulemaking. They did an extraordinary amount of research, ranging from technical articles on how best to design an effective consent stream, to organizational literature on how to make an impact assessment successful. They also pointed the AG's office to resources on other privacy laws, both in Europe and in other states like California. This is particularly important as states like ours weigh the benefits of harmonization, which typically makes for lower compliance costs for businesses, against the appeal of being a policy leader in the consumer protection space.

Colorado Law's Tech Law and Policy program, along with the law school's Silicon Flatirons Center for Law, Technology, and Entrepreneurship, is nationally recognized. How would you like to see these programs evolve or grow?

Again, I feel very lucky to be at a school that has so many faculty members working in related spaces. Each of us does something slightly different (Professor [Kristelia] Garcia works on copyright law, Professor [Brad] Bernthal '01 on entrepreneurship, Professor Reid on telecommunications and platform law, Professor [Harry] Surden on patent law and a different area of AI), but we're able to collectively offer our students a depth of expertise and classes that aren't really available elsewhere, except at a few very top law schools. There is, however, always room for growth. I would love to see us be able to offer our students more privacy courses, in particular. It would be amazing to be able to offer data privacy for practitioners or an international privacy course. We also, despite the expertise on our faculty, have yet to offer a class on law and AI!

What research themes or projects are you most looking forward to digging into in the coming year?

I have a few projects I'm really excited about. This "humans in the loop" piece I already mentioned is a big one. So is a piece I've been revising this summer called Regulating the Risks of AI. Most laws targeting AI have been risk regulation, the kind of thing we use, for example, in environmental law, or that companies use to try to mitigate risks. Risk regulation comes with a particular set of policy baggage. It's been fun (and challenging!) to dig into how aspects of it do and don't work when it's applied to algorithms and associated practices.

I'm also really looking forward this fall to getting back into a piece I've been calling Data as Speech Infrastructure, where I'll be looking at data privacy laws through the lens of the First Amendment. And there's a good chance something will come of all of the discussions I've been having about the data privacy implications of the Supreme Court's decision revoking the right to abortion in Dobbs. In short, there's always something to do.