Shengjie Luo
PhD Student, Peking University
Verified email at stu.pku.edu.cn - Homepage
Title
Cited by
Year
Do Transformers Really Perform Bad for Graph Representation?
C Ying, T Cai, S Luo, S Zheng, G Ke, D He, Y Shen, TY Liu
NeurIPS 2021, 2021
1056 · 2021
GraphNorm: A principled approach to accelerating graph neural network training
T Cai*, S Luo*, K Xu, D He, T Liu, L Wang
ICML 2021, 2021
163 · 2021
Rethinking the Expressive Power of GNNs via Graph Biconnectivity
B Zhang*, S Luo*, L Wang, D He
ICLR 2023 Outstanding Paper Award, 2023
91 · 2023
One Transformer Can Understand Both 2D & 3D Molecular Data
S Luo, T Chen, Y Xu, S Zheng, TY Liu, L Wang, D He
ICLR 2023, 2022
69 · 2022
Benchmarking Graphormer on Large-Scale Molecular Modeling Datasets
Y Shi, S Zheng, G Ke, Y Shen, J You, J He, S Luo, C Liu, D He, TY Liu
arXiv preprint arXiv:2203.04810, 2022
51 · 2022
Stable, Fast and Accurate: Kernelized Attention with Relative Positional Encoding
S Luo, S Li, T Cai, D He, D Peng, S Zheng, G Ke, L Wang, TY Liu
NeurIPS 2021, 2021
44 · 2021
Your Transformer May Not be as Powerful as You Expect
S Luo, S Li, S Zheng, TY Liu, L Wang, D He
NeurIPS 2022, 2022
40 · 2022
First Place Solution of KDD Cup 2021 & OGB Large-Scale Challenge Graph Prediction Track
C Ying, M Yang, S Zheng, G Ke, S Luo, T Cai, C Wu, Y Wang, Y Shen, ...
KDD CUP 2021, 2021
12 · 2021
Learning a fourier transform for linear relative positional encodings in transformers
K Choromanski, S Li, V Likhosherstov, KA Dubey, S Luo, D He, Y Yang, ...
International Conference on Artificial Intelligence and Statistics, 2278-2286, 2024
5 · 2024
Masked Molecule Modeling: A New Paradigm of Molecular Representation Learning for Chemistry Understanding
J He, K Tian, S Luo, Y Min, S Zheng, Y Shi, D He, H Liu, N Yu, L Wang, ...
4 · 2022
Enabling Efficient Equivariant Operations in the Fourier Basis via Gaunt Tensor Products
S Luo, T Chen, AS Krishnapriyan
ICLR 2024 Spotlight Presentation, 2024
3 · 2024
GeoMFormer: A General Architecture for Geometric Molecular Representation Learning
T Chen*, S Luo*, D He, S Zheng, TY Liu, L Wang
ICML 2024, 2023
1 · 2023
Revisiting Language Encoding in Learning Multilingual Representations
S Luo, K Gao, S Zheng, G Ke, D He, L Wang, TY Liu
arXiv preprint arXiv:2102.08357, 2021
1 · 2021
Let the Code LLM Edit Itself When You Edit the Code
Z He, J Zhang, S Luo, J Xu, Z Zhang, D He
arXiv preprint arXiv:2407.03157, 2024
2024
Two Stones Hit One Bird: Bilevel Positional Encoding for Better Length Extrapolation
Z He*, G Feng*, S Luo*, K Yang, D He, J Xu, Z Zhang, H Yang, L Wang
ICML 2024, 2024
2024
Articles 1–15