Is Physics Informed Loss Always Suitable for Training Physics Informed Neural Network? C Wang, S Li, D He, L Wang. Advances in Neural Information Processing Systems 35, 8278–8290, 2022.
Your Transformer May Not Be as Powerful as You Expect. S Luo, S Li, S Zheng, TY Liu, L Wang, D He. Advances in Neural Information Processing Systems 35, 4301–4315, 2022.
Stable, Fast and Accurate: Kernelized Attention with Relative Positional Encoding. S Luo, S Li, T Cai, D He, D Peng, S Zheng, G Ke, L Wang, TY Liu. NeurIPS, 2021.
Can Vision Transformers Perform Convolution? S Li, X Chen, D He, CJ Hsieh. arXiv preprint arXiv:2111.01353, 2021.
Learning Physics-Informed Neural Networks without Stacked Back-Propagation. D He, S Li, W Shi, X Gao, J Zhang, J Bian, L Wang, TY Liu. International Conference on Artificial Intelligence and Statistics, 3034–3047, 2023.
Functional Interpolation for Relative Positions Improves Long Context Transformers. S Li, C You, G Guruganesh, J Ainslie, S Ontanon, M Zaheer, S Sanghai, ... arXiv preprint arXiv:2310.04418, 2023.
Learning a Fourier Transform for Linear Relative Positional Encodings in Transformers. K Choromanski, S Li, V Likhosherstov, KA Dubey, S Luo, D He, Y Yang, ... International Conference on Artificial Intelligence and Statistics, 2278–2286, 2024.