Adrian Riekert
Title · Cited by · Year
A proof of convergence for gradient descent in the training of artificial neural networks for constant target functions
P Cheridito, A Jentzen, A Riekert, F Rossmannek
Journal of Complexity 72, 101646, 2022
Cited by 25 · 2022
A proof of convergence for stochastic gradient descent in the training of artificial neural networks with ReLU activation for constant target functions
A Jentzen, A Riekert
Zeitschrift für angewandte Mathematik und Physik 73 (5), 188, 2022
Cited by 20 · 2022
Convergence analysis for gradient flows in the training of artificial neural networks with ReLU activation
A Jentzen, A Riekert
Journal of Mathematical Analysis and Applications 517 (2), 126601, 2023
Cited by 19 · 2023
On the existence of global minima and convergence analyses for gradient descent methods in the training of deep neural networks
A Jentzen, A Riekert
arXiv preprint arXiv:2112.09684, 2021
Cited by 17 · 2021
Existence, uniqueness, and convergence rates for gradient flows in the training of artificial neural networks with ReLU activation
S Eberle, A Jentzen, A Riekert, GS Weiss
arXiv preprint arXiv:2108.08106, 2021
Cited by 17 · 2021
A proof of convergence for the gradient descent optimization method with random initializations in the training of neural networks with ReLU activation for piecewise linear …
A Jentzen, A Riekert
Journal of Machine Learning Research 23 (260), 1-50, 2022
Cited by 15 · 2022
Convergence proof for stochastic gradient descent in the training of deep neural networks with ReLU activation for constant target functions
M Hutzenthaler, A Jentzen, K Pohl, A Riekert, L Scarpa
arXiv preprint arXiv:2112.07369, 2021
Cited by 10 · 2021
Convergence rates for empirical measures of Markov chains in dual and Wasserstein distances
A Riekert
Statistics & Probability Letters 189, 109605, 2022
Cited by 7* · 2022
On the existence of infinitely many realization functions of non-global local minima in the training of artificial neural networks with ReLU activation
S Ibragimov, A Jentzen, T Kröger, A Riekert
arXiv preprint arXiv:2202.11481, 2022
Cited by 6 · 2022
Convergence to good non-optimal critical points in the training of neural networks: Gradient descent optimization with one random initialization overcomes all bad non-global …
S Ibragimov, A Jentzen, A Riekert
arXiv preprint arXiv:2212.13111, 2022
Cited by 5 · 2022
Strong overall error analysis for the training of artificial neural networks via random initializations
A Jentzen, A Riekert
Communications in Mathematics and Statistics, 1-50, 2023
Cited by 3 · 2023
Algorithmically Designed Artificial Neural Networks (ADANNs): Higher order deep operator learning for parametric partial differential equations
A Jentzen, A Riekert, P von Wurstemberger
arXiv preprint arXiv:2302.03286, 2023
Cited by 2 · 2023
Normalized gradient flow optimization in the training of ReLU artificial neural networks
S Eberle, A Jentzen, A Riekert, G Weiss
arXiv preprint arXiv:2207.06246, 2022
Cited by 2 · 2022
Deep neural network approximation of composite functions without the curse of dimensionality
A Riekert
arXiv preprint arXiv:2304.05790, 2023
Cited by 1 · 2023
Non-convergence to global minimizers for Adam and stochastic gradient descent optimization and constructions of local minimizers in the training of artificial neural networks
A Jentzen, A Riekert
arXiv preprint arXiv:2402.05155, 2024
2024
A proof of the corrected Sister Beiter cyclotomic coefficient conjecture inspired by Zhao and Zhang
B Juran, P Moree, A Riekert, D Schmitz, J Völlmecke
arXiv preprint arXiv:2304.09250, 2023
2023