Pinar Yanardag

E-mail/Google Scholar/Twitter/GitHub

Featuring a dress and a necklace generated by GANs and made by human designers, at Conference of the Future '19, Chile.

I am a tenure-track assistant professor in the Department of Computer Science at Virginia Tech, where I lead 💎 GEMLAB. I am also a member of the Sanghani Center for Artificial Intelligence and Data Analytics.

Prior to Virginia Tech, I was a postdoc at MIT. I received my Ph.D. in Computer Science from Purdue. During my graduate studies, I also worked at Amazon (P13N team) and VMware. I am a Fulbright Ph.D. Fellow and a Google Anita Borg Memorial Scholar.

My research is published at top computer science conferences such as CVPR, ICCV, and NeurIPS, and has been featured in mainstream media (e.g., The Washington Post, BBC, CNN) and magazines (e.g., Motherboard, Rolling Stone). For more information, see the Publications page.

My research centers on the development of generative AI methods, targeting three key aspects:

  • Image generation and editing: I design diffusion- and flow-based text-to-image models that support fine-grained, controllable edits (CONFORM, CVPR’24; FluxSpace, CVPR’25) and advance methods for interpretability (NoiseCLR, CVPR’24) and explainability (DiffEx, CVPR’25).

  • Video and motion editing: I design video and motion editing methods that carry T2I controls into the temporal domain (RAVE, CVPR’24), video diffusion models (MotionFlow), and dynamic view synthesis (Inverse DVS) to provide high-level control over motion and camera views.

  • Personalization: I design methods (LoRACLR, CVPR’25) to generate (CLoRA) and edit (LoRAShop) multi-subject personalized compositions, and explore personalized artistic styles (Stylebreeder, NeurIPS’24).

Prior to joining Virginia Tech, I was the CEO of AI Fiction, a creative design studio specializing in AI. Our work includes generative AI for HBO’s Westworld, for which I was a Creative Director nominee at the 72nd Primetime Emmy Awards. I also co-founded GLITCH, the world’s first generative AI clothing line.

I am deeply passionate about promoting the understanding and appreciation of generative models among the general public. At MIT, I started the How to Generate (Almost) Anything project, which demystifies generative AI through collaborations with artists and artisans, encouraging a broader dialogue about the future of generative AI and its everyday implications. I also taught “AI & Fashion” at the London College of Fashion, further contributing to this effort. See the Courses section for more information.

Please contact me via pinary at vt.edu.

news

Feb 3, 2025 Three papers accepted to CVPR 2025!
Oct 3, 2024 One paper accepted to NeurIPS 2024!
Apr 3, 2024 Two papers selected as Oral and Highlight at CVPR 2024!
Mar 3, 2024 Three papers accepted to CVPR 2024!

selected publications

  1. CVPR 2025
    FluxSpace: Disentangled Semantic Editing in Rectified Flow Transformers
    Yusuf Dalva, Kavana Venkatesh, and Pinar Yanardag
    Conference on Computer Vision and Pattern Recognition
  2. CVPR 2025
    LoRACLR: Contrastive Adaptation for Customization of Diffusion Models
    Enis Simsar, Thomas Hofmann, Federico Tombari, and Pinar Yanardag
    Conference on Computer Vision and Pattern Recognition
  3. CVPR 2025
    Explaining in Diffusion: Explaining a Classifier Through Hierarchical Semantics with Text-to-Image Diffusion Models
    Tahira Kazimi*, Ritika Allada*, and Pinar Yanardag
    Conference on Computer Vision and Pattern Recognition
  4. CVPR 2024
    [ORAL] NoiseCLR: A Contrastive Learning Approach for Unsupervised Discovery of Interpretable Directions in Diffusion Models
    Yusuf Dalva and Pinar Yanardag
    Conference on Computer Vision and Pattern Recognition
  5. CVPR 2024
    CONFORM: Contrast is All You Need For High-Fidelity Text-to-Image Diffusion Models
    Tuna Han Salih Meral, Enis Simsar, Federico Tombari, and Pinar Yanardag
    Conference on Computer Vision and Pattern Recognition
  6. CVPR 2024
    [Highlight] RAVE: Randomized Noise Shuffling for Fast and Consistent Video Editing with Diffusion Models
    Ozgur Kara*, Bariscan Kurtkaya*, Hidir Yesiltepe, James M. Rehg, and Pinar Yanardag
    Conference on Computer Vision and Pattern Recognition