Enayat Ullah
Research Scientist, Meta
Verified email at jhu.edu - Homepage
Title
Cited by
Year
FetchSGD: Communication-efficient federated learning with sketching
D Rothchild, A Panda, E Ullah, N Ivkin, I Stoica, V Braverman, J Gonzalez, ...
International Conference on Machine Learning, 8253-8265, 2020
Cited by 395 · 2020
Communication-efficient distributed SGD with sketching
N Ivkin, D Rothchild, E Ullah, I Stoica, R Arora
Advances in Neural Information Processing Systems 32, 2019
Cited by 216 · 2019
Convergence guarantees for RMSProp and ADAM in non-convex optimization and an empirical comparison to Nesterov acceleration
S De, A Mukherjee, E Ullah
arXiv preprint arXiv:1807.06766, 2018
Cited by 104 · 2018
Machine unlearning via algorithmic stability
E Ullah, T Mai, A Rao, RA Rossi, R Arora
Conference on Learning Theory, 4126-4142, 2021
Cited by 85 · 2021
Streaming Kernel PCA with Random Features
E Ullah, P Mianjy, TV Marinov, R Arora
Advances in Neural Information Processing Systems 31, 2018
Cited by 40 · 2018
Convergence guarantees for RMSProp and ADAM in non-convex optimization and their comparison to Nesterov acceleration on autoencoders
A Basu, S De, A Mukherjee, E Ullah
arXiv preprint arXiv:1807.06766, 28, 2018
Cited by 36 · 2018
Faster rates of convergence to stationary points in differentially private optimization
R Arora, R Bassily, T González, CA Guzmán, M Menart, E Ullah
International Conference on Machine Learning, 1060-1092, 2023
Cited by 21 · 2023
Differentially private generalized linear models revisited
R Arora, R Bassily, C Guzmán, M Menart, E Ullah
Advances in Neural Information Processing Systems 35, 22505-22517, 2022
Cited by 18 · 2022
Adversarial robustness is at odds with lazy training
Y Wang, E Ullah, P Mianjy, R Arora
Advances in Neural Information Processing Systems 35, 6505-6516, 2022
Cited by 10 · 2022
Convergence guarantees for RMSProp and ADAM in non-convex optimization and an empirical comparison to Nesterov acceleration. arXiv 2018
S De, A Mukherjee, E Ullah
arXiv preprint arXiv:1807.06766, 2018
Cited by 10 · 2018
Improved algorithms for time decay streams
V Braverman, H Lang, E Ullah, S Zhou
arXiv preprint arXiv:1907.07574, 2019
Cited by 8 · 2019
Convergence guarantees for RMSProp and ADAM in non-convex optimization and an empirical comparison to Nesterov acceleration, 2018
S De, A Mukherjee, E Ullah
arXiv preprint arXiv:1807.06766, 2018
Cited by 8
Optimistic rates for multi-task representation learning
A Watkins, E Ullah, T Nguyen-Tang, R Arora
Advances in Neural Information Processing Systems 36, 2207-2251, 2023
Cited by 5 · 2023
Private federated learning with autotuned compression
E Ullah, CA Choquette-Choo, P Kairouz, S Oh
International Conference on Machine Learning, 34668-34708, 2023
Cited by 5 · 2023
Private stochastic convex optimization: efficient algorithms for non-smooth objectives
R Arora, TV Marinov, E Ullah
arXiv preprint arXiv:2002.09609, 2020
Cited by 3 · 2020
From adaptive query release to machine unlearning
E Ullah, R Arora
International Conference on Machine Learning, 34642-34667, 2023
Cited by 2 · 2023
Generalization bounds for kernel canonical correlation analysis
E Ullah, R Arora
Transactions on Machine Learning Research, 2023
Cited by 2 · 2023
Clustering using Approximate Nearest Neighbour Oracles
E Ullah, H Lang, R Arora, V Braverman
Transactions on Machine Learning Research, 2022
Cited by 2 · 2022
Differentially Private Non-Convex Optimization under the KL Condition with Optimal Rates
M Menart, E Ullah, R Arora, R Bassily, C Guzmán
International Conference on Algorithmic Learning Theory, 868-906, 2024
Cited by 1 · 2024
Machine unlearning and retraining of a machine learning model based on a modified training dataset
E Ullah, AB Rao, T Mai, RA Rossi
US Patent App. 17/451,260, 2023
Cited by 1 · 2023
Articles 1–20