DPM-Solver: A fast ODE solver for diffusion probabilistic model sampling in around 10 steps. C Lu, Y Zhou, F Bao, J Chen, C Li, J Zhu. Advances in Neural Information Processing Systems 35, 5775-5787, 2022. Cited by 356.
Analytic-DPM: An analytic estimate of the optimal reverse variance in diffusion probabilistic models. F Bao, C Li, J Zhu, B Zhang. arXiv preprint arXiv:2201.06503, 2022. Cited by 168.
DPM-Solver++: Fast solver for guided sampling of diffusion probabilistic models. C Lu, Y Zhou, F Bao, J Chen, C Li, J Zhu. arXiv preprint arXiv:2211.01095, 2022. Cited by 103.
EGSDE: Unpaired image-to-image translation via energy-guided stochastic differential equations. M Zhao, F Bao, C Li, J Zhu. Advances in Neural Information Processing Systems 35, 3609-3623, 2022. Cited by 85.
ProlificDreamer: High-fidelity and diverse text-to-3D generation with variational score distillation. Z Wang, C Lu, Y Wang, F Bao, C Li, H Su, J Zhu. arXiv preprint arXiv:2305.16213, 2023. Cited by 84.
All are worth words: A ViT backbone for diffusion models. F Bao, S Nie, K Xue, Y Cao, C Li, H Su, J Zhu. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023. Cited by 58*.
Estimating the optimal covariance with imperfect mean in diffusion probabilistic models. F Bao, C Li, J Sun, J Zhu, B Zhang. arXiv preprint arXiv:2206.07309, 2022. Cited by 44.
One transformer fits all distributions in multi-modal diffusion at scale. F Bao, S Nie, K Xue, C Li, S Pu, Y Wang, G Yue, Y Cao, H Su, J Zhu. arXiv preprint arXiv:2303.06555, 2023. Cited by 35.
Maximum likelihood training for score-based diffusion ODEs by high order denoising score matching. C Lu, K Zheng, F Bao, J Chen, C Li, J Zhu. International Conference on Machine Learning, 14429-14460, 2022. Cited by 31.
Equivariant energy-guided SDE for inverse molecular design. F Bao, M Zhao, Z Hao, P Li, C Li, J Zhu. arXiv preprint arXiv:2209.15408, 2022. Cited by 21.
Stability and generalization of bilevel programming in hyperparameter optimization. F Bao, G Wu, C Li, J Zhu, B Zhang. Advances in Neural Information Processing Systems 34, 4529-4541, 2021. Cited by 17.
Bi-level score matching for learning energy-based latent variable models. F Bao, C Li, K Xu, H Su, J Zhu, B Zhang. Advances in Neural Information Processing Systems 33, 18110-18122, 2020. Cited by 14.
Diffusion models and semi-supervised learners benefit mutually with few labels. Z You, Y Zhong, F Bao, J Sun, C Li, J Zhu. arXiv preprint arXiv:2302.10586, 2023. Cited by 11.
Revisiting discriminative vs. generative classifiers: Theory and implications. C Zheng, G Wu, F Bao, Y Cao, C Li, J Zhu. arXiv preprint arXiv:2302.02334, 2023. Cited by 10.
Variational (gradient) estimate of the score function in energy-based latent variable models. F Bao, K Xu, C Li, L Hong, J Zhu, B Zhang. International Conference on Machine Learning, 651-661, 2021. Cited by 8.
Why are conditional generative models better than unconditional ones? F Bao, C Li, J Sun, J Zhu. arXiv preprint arXiv:2212.00362, 2022. Cited by 6.
A closer look at parameter-efficient tuning in diffusion models. C Xiang, F Bao, C Li, H Su, J Zhu. arXiv preprint arXiv:2303.18181, 2023. Cited by 3.
Gaussian mixture solvers for diffusion models. H Guo, C Lu, F Bao, T Pang, S Yan, C Du, C Li. arXiv preprint arXiv:2311.00941, 2023.
Boosting generative models by leveraging cascaded meta-models. F Bao, H Su, J Zhu. arXiv preprint arXiv:1905.04534, 2019.
Appendix for "All are worth words: A ViT backbone for diffusion models". F Bao, S Nie, K Xue, Y Cao, C Li, H Su, J Zhu.