At the Center for Technological Responsibility, Re-imagination, and Redesign (CNTR), second-year PhD student Rui-Jie Yew is carving out a niche for herself at the intersection of law and computer science. Her ideas in this field were recently recognized at the 2024 Artificial Intelligence, Ethics, and Society (AIES) Conference.
“I’m really interested in the incentives surrounding technology law and how those systems are implemented,” says Yew. “The relationship between technology and law is often characterized as being riddled with natural gaps and tensions. I’m interested in how companies can harness these tensions to secure their own competitive advantages by lowering legal costs.”
Yew submitted her paper, “You Still See Me: How Data Protection Supports the Architecture of AI Surveillance,” co-written with Lucy Qin (a recent CNTR graduate) and Suresh Venkatasubramanian (CNTR Director), to the 2024 AIES Conference earlier this year. Yew was selected to present the paper at the conference, where it received a runner-up award for Best Student Paper. It was one of only four papers out of 458 submissions recognized with an award at AIES.
Yew’s work explores the relationship between AI and data protection laws. Across the tech industry, companies collect all sorts of data about their users, and that data forms the backbone of AI systems. Existing laws and regulations restrict what kinds of data companies are allowed to collect, and on the surface, data protection and privacy regulations are supposed to protect individuals. In her paper, however, Yew illustrates how those same rules can play a role in AI architectures that further surveillance infrastructure.
“The application of data protection and privacy laws typically hinges on the collection of ‘personal data,’ data that could render a person identifiable. The legal requirements associated with personal data present a huge obstacle because AI technologies require a lot of data, usually about people, to build,” says Yew.
“But, by using privacy-preserving techniques like private set intersection and federated learning, companies have argued that, since they’re not collecting personal data, these laws don’t apply to them. Then, they use these techniques to access and synthesize more data, new data, data that was otherwise unavailable to them, without surveillant consequences.”
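The dynamic Yew describes is easiest to see with a concrete example. Below is a toy sketch of private set intersection (PSI), one of the techniques she names: two companies learn which user identifiers they share without either side handing over its raw list. Everything here is hypothetical, the names and data are invented, the group parameters are deliberately toy-sized, and the code illustrates the concept only; it is not a secure protocol and not any company’s actual implementation.

```python
# Toy sketch of Diffie-Hellman-style private set intersection (PSI).
# Hypothetical illustration only: toy parameters, no real-world security.
import hashlib
import secrets

P = 2**127 - 1  # toy prime modulus; real PSI uses a vetted cryptographic group


def hash_to_group(item: str) -> int:
    """Map an identifier (e.g., an email address) to a group element."""
    digest = hashlib.sha256(item.encode()).digest()
    return int.from_bytes(digest, "big") % P


def blind(items, exponent):
    """A party raises the hash of each of its items to its secret exponent."""
    return [pow(hash_to_group(x), exponent, P) for x in items]


# Each party samples a secret exponent it never shares.
a_key = secrets.randbelow(P - 2) + 1
b_key = secrets.randbelow(P - 2) + 1

# Hypothetical customer lists held by two different companies.
company_a = ["alice@example.com", "bob@example.com", "carol@example.com"]
company_b = ["bob@example.com", "dave@example.com"]

# Round 1: each party blinds its own identifiers and sends only those values.
a_once = blind(company_a, a_key)
b_once = blind(company_b, b_key)

# Round 2: each party raises the *other* side's blinded values to its own
# secret, so every identifier ends up as H(x)^(a*b) mod P.
a_twice = {pow(v, b_key, P) for v in a_once}  # computed by company B
b_twice = {pow(v, a_key, P) for v in b_once}  # computed by company A

# Identifiers both companies hold collide in the double-blinded sets,
# even though neither side ever saw the other's raw list.
shared = a_twice & b_twice
print(f"{len(shared)} shared identifier(s) found")  # -> 1 (bob)
```

The legal argument Yew describes turns on exactly this property: because only blinded values cross the wire, a company can contend it never “collected” personal data, even as the matched intersection feeds downstream profiling and model-building.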
Citing her co-author Lucy Qin’s research on applied cryptography, Yew emphasizes that privacy-preserving techniques play an important role in the public interest and in confronting power structures. However, “when using privacy-preserving techniques as cover, those in power can further consolidate and use information about people,” says Yew.
Yew’s paper details how the use of specific privacy-preserving techniques at each stage of AI development and deployment “can enable companies to ‘see us,’ while flying under the radar of current data protection and privacy regimes, without ‘being seen’ themselves.”
The question remains: how can we draft and enforce technology policies to better protect people? At the CNTR, Yew is working on structural frameworks to chart the technological implications of emerging regulations. “We need to have an adversarial mindset when looking at laws and building systems. What might the flaws be? What is the least-cost path of compliance? How might burdensome requirements be avoided altogether? Given the enormous impact of technologies like AI on our lives, it is urgent that we grapple with the technological consequences of these paths.”
Read Rui-Jie Yew’s paper, “You Still See Me: How Data Protection Supports the Architecture of AI Surveillance,” on arXiv.