T. Gao, H. Liu, J. Liu, H. Rajan, H. Gao. "A Global Convergence Theory for Deep ReLU Implicit Networks via Over-Parameterization." International Conference on Learning Representations (ICLR 2022), 2022. Cited by 16.
T. Gao, S. Lu, J. Liu, C. Chu. "Randomized Bregman Coordinate Descent Methods for Non-Lipschitz Optimization." arXiv preprint arXiv:2001.05202, 2020. Cited by 16.
T. Gao, C. Chu. "DID: Distributed Incremental Block Coordinate Descent for Nonnegative Matrix Factorization." Proceedings of the AAAI Conference on Artificial Intelligence 32 (1), 2018. Cited by 11.
T. Gao, S. Olofsson, S. Lu. "Minimum-Volume-Regularized Weighted Symmetric Nonnegative Matrix Factorization for Clustering." 2016 IEEE Global Conference on Signal and Information Processing (GlobalSIP …), 2016. Cited by 10.
T. Gao. "Hybrid Classification Approach of SMOTE and Instance Selection for Imbalanced Datasets." Iowa State University, 2015. Cited by 9.
T. Gao, S. Lu, J. Liu, C. Chu. "Leveraging Two Reference Functions in Block Bregman Proximal Gradient Descent for Non-Convex and Non-Lipschitz Problems." arXiv preprint arXiv:1912.07527, 2019. Cited by 4.
T. Gao, S. Lu, J. Liu, C. Chu. "On the Convergence of Randomized Bregman Coordinate Descent for Non-Lipschitz Composite Problems." ICASSP 2021 - IEEE International Conference on Acoustics, Speech and …, 2021. Cited by 3.
T. Gao, H. Gao. "Gradient Descent Optimizes Infinite-Depth ReLU Implicit Networks with Linear Widths." arXiv preprint arXiv:2205.07463, 2022. Cited by 2.
T. Gao, X. Huo, H. Liu, H. Gao. "Wide Neural Networks as Gaussian Processes: Lessons from Deep Equilibrium Models." Neural Information Processing Systems (NeurIPS 2023), 2023. Cited by 1.
T. Gao, H. Gao. "On the Optimization and Generalization of Overparameterized Implicit Neural Networks." arXiv preprint arXiv:2209.15562, 2022. Cited by 1.
T. Gao, X. Huo, H. Liu, H. Gao. "Infinitely Deep Residual Networks: Unveiling Wide Neural ODEs as Gaussian Processes." 2023.