I am a fourth-year Computer Science Ph.D. student at UT Austin, supervised by Prof. Alexandros Dimakis. Before starting my Ph.D., I received my undergraduate degree in Electrical and Computer Engineering (ECE) from the National Technical University of Athens (NTUA).
My research focuses on generative modeling. I am particularly interested in learning generative models from corrupted data.
Internships
- NVIDIA Research (2023) with Arash Vahdat.
- Google Research (2022) with Peyman Milanfar, Mauricio Delbracio and Hossein Talebi.
- Google Research (2022) with Wen-Sheng Chu and Abhishek Kumar.
News
- [Paper Acceptance]: Consistent Diffusion Meets Tweedie got accepted to ICML 2024.
- [Paper Acceptance]: Ambient Diffusion: Learning Clean Distributions from Corrupted Data got accepted to NeurIPS 2023.
- [Paper Acceptance]: Consistent Diffusion Models: Mitigating Sampling Drift by Learning to be Consistent got accepted to NeurIPS 2023.
- [Paper Acceptance]: Solving Linear Inverse Problems Provably via Posterior Sampling with Latent Diffusion Models got accepted to NeurIPS 2023.
- [Paper Acceptance]: DataComp: In search of the next generation of multimodal datasets got accepted to NeurIPS 2023 (Oral).
- [Paper Acceptance]: Restoration-Degradation Beyond Linear Diffusions: A Non-Asymptotic Analysis for DDIM-Type Samplers got accepted to ICML 2023.
Publications
Consistent Diffusion Meets Tweedie: Training Exact Ambient Diffusion Models with Noisy Data
Published in ICML 2024 [Paper] [Code]
Citation: Giannis Daras, Alexandros G. Dimakis, Constantinos Daskalakis, "Consistent Diffusion Meets Tweedie: Training Exact Ambient Diffusion Models with Noisy Data", ICML 2024
Solving Linear Inverse Problems Provably via Posterior Sampling with Latent Diffusion Models
Published in NeurIPS 2023 [Paper] [Code]
Citation: Litu Rout, Negin Raoof, Giannis Daras, Constantine Caramanis, Alexandros G. Dimakis, Sanjay Shakkottai, "Solving Linear Inverse Problems Provably via Posterior Sampling with Latent Diffusion Models", NeurIPS 2023
Ambient Diffusion: Learning Clean Distributions from Corrupted Data
Published in NeurIPS 2023 [Paper] [Code]
Citation: Giannis Daras, Kulin Shah, Yuval Dagan, Aravind Gollakota, Alexandros G. Dimakis, Adam Klivans, "Ambient Diffusion: Learning Clean Distributions from Corrupted Data", NeurIPS 2023
DataComp: In search of the next generation of multimodal datasets
Published as an Oral in NeurIPS 2023 [Paper] [Code]
Citation: Samir Yitzhak Gadre, Gabriel Ilharco, Alex Fang, Jonathan Hayase, Georgios Smyrnis, Thao Nguyen, Ryan Marten, Mitchell Wortsman, Dhruba Ghosh, Jieyu Zhang, Eyal Orgad, Rahim Entezari, Giannis Daras, Sarah Pratt, Vivek Ramanujan, Yonatan Bitton, Kalyani Marathe, Stephen Mussmann, Richard Vencu, Mehdi Cherti, Ranjay Krishna, Pang Wei Koh, Olga Saukh, Alexander Ratner, Shuran Song, Hannaneh Hajishirzi, Ali Farhadi, Romain Beaumont, Sewoong Oh, Alex Dimakis, Jenia Jitsev, Yair Carmon, Vaishaal Shankar, Ludwig Schmidt, "DataComp: In search of the next generation of multimodal datasets", NeurIPS 2023
Restoration-Degradation Beyond Linear Diffusions: A Non-Asymptotic Analysis for DDIM-Type Samplers
Published in ICML 2023 [Paper]
Citation: Sitan Chen, Giannis Daras, Alexandros G. Dimakis, "Restoration-Degradation Beyond Linear Diffusions: A Non-Asymptotic Analysis for DDIM-Type Samplers", ICML 2023
Consistent Diffusion Models: Mitigating Sampling Drift by Learning to be Consistent
Published in NeurIPS 2023 [Paper] [Code]
Citation: Giannis Daras, Yuval Dagan, Alexandros G. Dimakis, Constantinos Daskalakis, "Consistent Diffusion Models: Mitigating Sampling Drift by Learning to be Consistent", NeurIPS 2023
Multiresolution Textual Inversion
Published as an Oral in the NeurIPS 2022 Workshop on Score-Based Methods [Paper] [Code]
Citation: Giannis Daras, Alexandros G. Dimakis, "Multiresolution Textual Inversion", NeurIPS 2022 Workshop on Score-Based Methods
Soft Diffusion: Score Matching for General Corruptions
Published in TMLR 2023 [Paper]
Citation: Giannis Daras, Mauricio Delbracio, Hossein Talebi, Alexandros G. Dimakis, Peyman Milanfar, "Soft Diffusion: Score Matching for General Corruptions", TMLR 2023
Score-Guided Intermediate Layer Optimization: Fast Langevin Mixing for Inverse Problems
Published in ICML 2022 [Paper] [Code]
Citation: Giannis Daras (*), Yuval Dagan (*), Alexandros G. Dimakis, Constantinos Daskalakis, "Score-Guided Intermediate Layer Optimization: Fast Langevin Mixing for Inverse Problems", ICML 2022
Multitasking Models are Robust to Structural Failure: A Neural Model for Bilingual Cognitive Reserve
Published in NeurIPS 2022 [Paper] [Code]
Citation: Giannis Daras (*), Negin Raoof (*), Zoi Gkalitsiou, Alexandros G. Dimakis, "Multitasking Models are Robust to Structural Failure: A Neural Model for Bilingual Cognitive Reserve", NeurIPS 2022
Discovering the Hidden Vocabulary of DALLE-2
Published in NeurIPS 2022 Workshop on Score-Based Methods [Paper]
Citation: Giannis Daras, Alexandros G. Dimakis, "Discovering the Hidden Vocabulary of DALLE-2", NeurIPS 2022 Workshop on Score-Based Methods
Solving Inverse Problems with NerfGANs
Preprint [Paper]
Citation: Giannis Daras, Wen-Sheng Chu, Abhishek Kumar, Dmitry Lagun, Alexandros G. Dimakis, "Solving Inverse Problems with NerfGANs"
Robust Compressed Sensing MRI with Deep Generative Priors
Published in NeurIPS 2021 [Paper] [Code]
Citation: Ajil Jalal, Marius Arvinte, Giannis Daras, Eric Price, Alexandros G. Dimakis, Jonathan I. Tamir, "Robust Compressed Sensing MRI with Deep Generative Priors", NeurIPS 2021
Intermediate Layer Optimization for Inverse Problems using Deep Generative Models
Published in ICML 2021 [Paper] [Code]
Citation: Giannis Daras, Joseph Dean, Ajil Jalal, Alexandros G. Dimakis, "Intermediate Layer Optimization for Inverse Problems using Deep Generative Models", ICML 2021
SMYRF: Efficient Attention using Asymmetric Clustering
Published in NeurIPS 2020 [Paper] [Code]
Citation: Giannis Daras, Nikita Kitaev, Augustus Odena, Alexandros G. Dimakis, "SMYRF: Efficient Attention using Asymmetric Clustering", NeurIPS 2020
Your Local GAN: Designing Two Dimensional Local Attention Mechanisms for Generative Models
Published in CVPR 2020 [Paper] [Code]
Citation: Giannis Daras, Augustus Odena, Han Zhang, Alexandros G. Dimakis, "Your Local GAN: Designing Two Dimensional Local Attention Mechanisms for Generative Models", CVPR 2020