New paper to share! @qveraliao and I lay out our vision of a human-centered research roadmap for “AI Transparency in the Age of LLMs.”
https://arxiv.org/abs/2306.01941
There's lots of talk about the responsible development and deployment of LLMs, but transparency (including model reporting, explanations, uncertainty communication, and more) is often missing from this discourse.
We hope this framing will spark more discussion and research.
Attempting my first Mastodon thread below...
We argue for developing and designing approaches to transparency by considering stakeholder needs, novel types of LLM-infused applications, and new usage patterns around LLMs—all while building on lessons learned from human-centered research.
We reflect on the challenges that arise in providing transparency for LLMs: complex model capabilities, massive opaque architectures, proprietary tech, complex applications, diverse stakeholders, rapidly evolving public perception, and pressure to move fast.
We synthesize lessons from HCI and RAI/FATE research: taking a goal-oriented perspective, supporting appropriate levels of trust, accounting for users' mental models, attending to how information is communicated, and supporting user control.
Finally, we lay out four common approaches to transparency (model reporting, publishing evaluation results, providing explanations, and communicating uncertainty) and open questions around how each might be applied to LLMs.
We hope this sparks discussion! We’ll continue to iterate on this roadmap and would love to hear your constructive feedback.