Hi. I'm Yiwei Yang.
I am a PhD candidate at the University of Washington, advised by Bill Howe. I received my undergraduate degree in Computer Science in 2019 from the University of Michigan.
I am interested in making models more reliable and trustworthy. Currently, I am working on benchmarking and mitigating spurious correlations in Large Multi-modal Models (LMMs). Recently, I worked on improving robustness to spurious correlations using out-of-distribution examples.
Selected Publications
- Y. Yang, A. Liu, R. Wolfe, A. Caliskan, B. Howe. Label-Efficient Group Robustness via Out-of-Distribution Concept Curation. CVPR 2024.
- B. Han, Y. Yang, A. Caspi, B. Howe. Towards Zero-shot Annotation of the Built Environment with Vision-Language Models. SIGSPATIAL 2024.
- R. Wolfe, S. Issac, B. Han, B. Wen, Y. Yang, L. Rosenblatt, B. Herman, E. Brown, Z. Qu, N. Weber, B. Howe. Laboratory-scale AI: Open-Weight Models are Competitive with ChatGPT Even in Low-Resource Settings. FAccT 2024.
- Y. Yang, B. Howe. Does a Fair Model Produce Fair Explanations? Relating Distributive and Procedural Fairness. HICSS 2024.
- Y. Yang, A. Liu, R. Wolfe, A. Caliskan, B. Howe. Regularizing Model Gradients with Concepts to Improve Robustness to Spurious Correlations. ICML SCIS 2023.
- R. Wolfe, Y. Yang, B. Howe, A. Caliskan. Contrastive Language-Vision AI Models Pretrained on Web-Scraped Multimodal Data Exhibit Sexual Objectification Bias. FAccT 2023.