Hua Shen, Ph.D.

Hello There! 🤗

Google Scholar        Codes        Twitter        LinkedIn        CV        huashen@uw.edu       

I am currently a Postdoctoral Scholar at the iSchool and the RAISE Center at the University of Washington, working closely with my amazing advisor Prof. Tanu Mitra, who co-founded the center.

Starting in Fall 2025, I will be a tenure-track Assistant Professor of Computer Science at NYU Shanghai, affiliated with the NYU Tandon CSE Department. I will be recruiting students through the NYU Courant CS and NYU Tandon CSE Departments. See my Research Overview below and Openings on this page.

I completed my Ph.D. at Penn State from September 2019 to July 2023, fortunately advised by my awesome advisor Prof. Kenneth Huang, with amazing committee members Profs. Mary Beth Rosson, C. Lee Giles, and S. Shyam Sundar (Penn State) and Sherry Wu (CMU), and external mentor Dr. Andreas Stolcke (Distinguished AI Scientist at Uniphore). Throughout my Ph.D., I interned with several amazing teams at Google Research (now Google DeepMind) and Amazon Alexa AI (now Amazon AGI). I also spent one wonderful year as a Postdoc Research Fellow at the University of Michigan with many great professors and students. Deeply grateful for my collaborators and mentored students 🧡!

Research Overview



I'm an HCI+AI researcher leading research efforts on Bidirectional Human-AI Alignment (a systematic review), studying how AI agents collaborate with both individuals and society, to:

Maximize Co-performance and Minimize Harms in Human-AI Alignment


If you are interested in joining us to explore these topics, I'll be looking for undergraduate, master's, and PhD students! Find our openings on this page.

I'm also leading our BiAlign ICLR 2025 Workshop and CHI 2025 SIG. I'd love to chat in person!

Research Highlights

See details of the listed publications on this page.

Fundamentals in Human-AI Alignment
(Study fundamental topics in human-AI alignment research.)
  • Position: Bidirectional Human-AI Alignment: The Systematic Survey, BiAlign @ 2025 ICLR & CHI
  • Value Alignment of Human-AI (Agents): ValueCompass, Value-Action Gap;
  • Epistemic Alignment: Human-LLM Knowledge Delivery;

Aligning Humans with AI (HCI/CSCW/Design-oriented)
(Empower humans to collaborate with deployed AI through explanations and interactions.)
  • Human-Centered AI Explanation: ConvXAI (Interactive and Conversational XAI), XAI Not Useful;
  • Evaluating & Auditing LLMs and AI Agents: Parachute, PromptAuditor;
  • Multi-Agent Learning & Collaboration: Hypocompass, ScatterShot;

Aligning AI with Humans (NLP/Speech/ML-oriented)
(Integrate human feedback and values into developing and customizing AI.)
  • Values in LLMs & Spoken LLMs: Improving Fairness in Spoken LMs, SpeechPrompt;
  • LLM Benchmarks with Human-in-the-Loop: MultiTurnCleanup;
  • Human-Agent Interaction & Trustworthiness: DeepFake Identification, Gentopia.AI