
Participatory design, human-AI systems, and imaginaries

Every now and then I read something and an entire post emerges from my brain, demanding to be written in full and not thrown away in a few Bluesky … skeets? snarks?

Technically, it was something I listened to: a talk by one-time coauthor1 and all-around impressive person Janet Vertesi, who manages to be both an STS scholar par excellence and a hacker beyond anything I can imagine.

Her talk, delivered at UMD on Nov 21, has a title longer than most tweets:
“How NASA’s robot teams reveal the future of human-AI collaboration (or, ten provocations for studying and crafting responsible human-AI teams)”.

[Video of the talk]

Participatory Design

The talk caught my eye because at the CNTR (the Center for Technological Responsibility, Reimagination and Redesign) we have been thinking about participatory design and the challenges of community-led research. I’m also the co-director of ARIA — a new NSF AI Institute focused on the science of intuitive and trustworthy assistants (so new we don’t have a website yet).

One of the use-inspired research directions we will be exploring at ARIA is the totally obscure and not-at-all-on-anyone’s-mind topic of AI use in mental health (I kid, I kid). More generally, we want to understand how AI tools (including chatbots, but not just them) might be used (or not!) for mental and behavioral health therapies.

ARIA is committed to a community-driven approach to thinking about these issues, and some of our early work will be to bring in stakeholders — patients, families, therapists, clinicians, policymakers, advocates, and others — to help understand what it is that people actually want (and don’t want) from any kind of automated assistance, and, more importantly, what it would mean to co-create the systems, evaluations, and practices that we need. We have a lot of expertise on the team to guide this effort, including but not limited to partners from Dartmouth, Data & Society, and UNM.

A thing I rant about a lot is how researchers constantly say we “should” do something and spend much less time saying HOW we should do something. To a degree I feel this has been true for community-driven research and evaluation. I don’t doubt that participatory design is important and is the best way to imagine alternate futures with AI (more on that in a bit), but I struggle to figure out in any specific instance how to proceed. And that’s going to be important for us in ARIA.

Lessons learned from NASA human-AI teams

This is where Janet’s talk piqued my interest. What indeed CAN we learn from NASA human-AI teams? I’m not going to go into the ten lessons that Janet presents in detail. Here’s a screenshot summary for your edification. But really, go watch the talk.

What stood out for me were three things.

1. There are interesting parallels between AI development and NASA work

The talk makes a pretty strong case that we can learn from the history of work at NASA when we think about effective human-AI interactions. In particular, we can understand how to work with custom-built technology to serve a specific mission purpose while embedded in a larger human-driven setting. It’s a concrete example of how to imagine a different arrangement of human and machine, something we severely lack right now in AI-land, and something that Janet calls out repeatedly. I appreciate this level of concreteness precisely because it helps us get beyond the “should” to the “how”, at least by example.

2. Organization drives our understanding of, and relation to, technology.

The stories of how the teams organized around the technology for different purposes, and how their approach to the machine depended on that organization, were really fascinating. They reinforced her point that we only seem to think of AI systems as interacting one-on-one with an operator, and that we should imagine (again, that word) what a team interaction might look like, and what we should build to support it. I don’t yet know the answer in the context of ARIA, but this helps center our thinking in a different place.

3. But is this just a problem of scarcity?

I couldn’t help but wonder if there was a limit to the NASA analogy, and if that limit was the extreme constraints presented by the Mars Rover or the spacecraft exploring Saturn. These were remote devices, with huge delays in communication that necessitated a certain degree of autonomy, and were very precious and rare resources. None of these things are true for (say) LLMs. Does this mean that some of these lessons are adaptations out of necessity, rather than principles that can be transferred to other human-tech interactions?

On imagination, AI discourse, and the pleasure of deep and careful thinking

At the ARIA launch event, we had a great keynote by Julian Jara-Ettinger. He talked about his work exploring how people establish understanding, connection, and communication with each other. It was a rich and nuanced talk that showed (with some fascinating experiments) how we build models of others in our head, how we communicate outside of language, and how social cues help us build an understanding and context for what’s being communicated.

In the same vein, Janet’s talk went deep into the details of team-machine interactions in two NASA projects, with nuance, fun anecdotes, and a lot of texture that illustrates the real creativity that goes into making these hybrid systems work in very distinct ways.

Current AI “Discourse” has none of this. It has no nuance, no science, and no imagination.

Anyone who listened to Julian’s talk would come away with a deep appreciation for the sophisticated way in which people communicate with each other: verbally, non-verbally, and in embodied form. And would be astonished at the presumption that current LLMs could capture any of that.

Anyone who listens to Janet’s talk would come away with a deep appreciation for the sophistication and subtlety with which teams of people organize in very different ways to use technology to carry out a shared mission and purpose. And would be astonished at the poverty of imagination in the shouts of “Agents!” “Chatbots!” “Assistants!” that echo across the tech landscape.

What gets me most excited about the moment we are in is the sense of openness and possibility: that we can take all these shiny new AI tools into a mental garage and tinker around to come up with all kinds of crazy ideas. What gets me most depressed is that in order to do this we have to rise above, or hunker down below (pick your favorite direction), the clamor of BS around specific products that specific companies need to be able to sell to justify even more investment. And that this clamor is also forcing misguided structural changes in how we organize (or refuse to organize) society.

Real innovation is not coming from those selling us pre-canned, shrink-wrapped products. It is coming, and will keep coming, from those who have the courage to imagine new ways of working with and using AI tools that let us – people – drive, rather than be driven.

1 Janet is the reason I can blithely go around saying “Latour” and “ANT” and almost know what I’m saying. It’s like a secret handshake that gets me into cool STS clubs (ok, not that cool, but definitely full of lots of complicated words).