Welcome! I am Wesley Hanwen Deng, a fourth-year Ph.D. student at the Human-Computer Interaction Institute (HCII) at Carnegie Mellon University. At CMU, I am extremely fortunate to be advised by Ken Holstein and Motahhare Eslami, and to work closely with Jason I. Hong and Hoda Heidari. I also work with Jenn Wortman Vaughan and Solon Barocas from Microsoft Research on a set of exciting projects on AI impact assessment and regulation. Before coming to CMU, I graduated with the Highest Distinction and an EECS Research Honor in Computer Science from UC Berkeley, where I conducted human-centered AI research with Niloufar Salehi.
I work on responsible AI (RAI), AI safety, AI ethics, algorithmic fairness, and human-AI interaction. My current work aims to build tools and processes that support both AI practitioners and end users in designing and developing safer, more responsible AI systems, with a specific focus on AI auditing, red-teaming, and impact assessment. My work has been recognized through awards such as the Microsoft AI & Society Fellowship, the K&L Gates Presidential Fellowship, the CERES Fellowship, and Best Paper Awards at the AAAI HCOMP and AIES conferences. My Ph.D. has also been supported by funding from the CMU Block Center for Technology and Society, the Notre Dame-IBM Tech Ethics Lab, and a Google Research Scholar Award.
In my free time, I like to cook 🍲🥘, play pickleball 🥒🎾, do hot yoga 🧘♂️♨️, and read/write at a coffee shop ☕️📚. My long-term (non-academic) career goal is to open a coffee shop that transitions into an intimate bistro serving Chinese-fusion cuisine and cocktails at night.
[ Curriculum Vitae ● Google Scholar ● Full Publications Page ● LinkedIn ● Twitter ● BlueSky]
See the full list of papers here
*: These authors contributed equally to the work.
WeAudit: Scaffolding User Auditors and AI Practitioners in Auditing Generative AI.
arXiv ●
PRE-PRINT
* Invited talks at Apple Human-Centered Machine Learning team; Big Design Seminar at Zhejiang University.
Featured as educational material for more than 600 students across 10 classes at CMU.
Supporting Industry Computing Researchers in Assessing, Articulating, and Addressing the Potential Negative Societal Impact of Their Work
CSCW 2025 ●
PAPER ●
TEMPLATE ●
PROJECT PAGE (coming soon)
Red-Teaming for Generative AI: Silver Bullet or Security Theater?
AIES 2024 ●
Best Paper Award ●
PAPER
Investigating What Factors Influence Users’ Detection of Harmful Algorithmic Bias and Discrimination.
HCOMP 2024 ●
Best Paper Award ●
PAPER
Understanding Practices, Challenges, and Opportunities for User-Engaged Algorithm Auditing in Industry Practice
CHI 2023 ●
PAPER ●
VIDEO
* Invited talks at the Faculty of EEMCS and Industrial Design Engineering at TU Delft, the Algorithmic Fairness and Opacity (AFOG) Group at UC Berkeley, Tencent Shanghai, Google Shanghai, Salesforce, and Capital One.
Investigating Practices and Opportunities for Cross-functional Collaboration around AI Fairness in Industry Practice
FAccT 2023 ●
PAPER ●
VIDEO
Exploring How Machine Learning Practitioners (Try To) Use Fairness Toolkits
FAccT 2022 ●
PAPER ●
VIDEO
* Invited talks at the Fairlearn developer team at Microsoft Research, the AIF360 developer monthly meeting at IBM, the People+AI Research team at Google, and the AI Governance group at PwC.
The 2nd HEAL (Human-centered Evaluation and Auditing of Language Models) Workshop
CHI 2025 Workshop ●
WEBSITE
CoDE RAI (Collaboratively Designing and Evaluating Responsible AI Interventions) Special Interest Group
CSCW 2024 Special Interest Group ●
WEBSITE
RAI-CrowdAudit: Responsible Crowdsourcing for Responsible Generative AI: Engaging Crowds in AI Auditing and Evaluation
HCOMP 2024 Workshop ●
WEBSITE
The 1st HEAL (Human-centered Evaluation and Auditing of Language Models) Workshop
CHI 2024 Workshop ●
WEBSITE
Supporting NIST’s Development of Guidelines on Red-teaming for Generative AI
CMU 2024 Workshop ●
WEBSITE ●
REPORT
CoDE RAI (Collaboratively Designing and Evaluating Responsible AI Interventions) Special Interest Group
CSCW 2023 Workshop ●
WEBSITE
User Engagement in Algorithm Testing and Auditing: Exploring Opportunities and Tensions between Practitioners and End Users.
FAccT 2023 CRAFT ●
WEBSITE
Academic Service
Mentoring (with research projects that led to publications)
Teaching