Xue Bin Peng
Assistant Professor at Simon Fraser University, Research Scientist at NVIDIA
Verified email at sfu.ca
Title · Cited by · Year
Sim-to-real transfer of robotic control with dynamics randomization
XB Peng, M Andrychowicz, W Zaremba, P Abbeel
2018 IEEE International Conference on Robotics and Automation (ICRA), 3803-3810, 2018
Cited by 1617 · 2018
DeepMimic: Example-guided deep reinforcement learning of physics-based character skills
XB Peng, P Abbeel, S Levine, M Van de Panne
ACM Transactions on Graphics (TOG) 37 (4), 1-14, 2018
Cited by 1192 · 2018
DeepLoco: Dynamic locomotion skills using hierarchical deep reinforcement learning
XB Peng, G Berseth, KK Yin, M Van de Panne
ACM Transactions on Graphics (TOG) 36 (4), 1-13, 2017
Cited by 673 · 2017
Learning agile robotic locomotion skills by imitating animals
XB Peng, E Coumans, T Zhang, TW Lee, J Tan, S Levine
arXiv preprint arXiv:2004.00784, 2020
Cited by 534 · 2020
Advantage-weighted regression: Simple and scalable off-policy reinforcement learning
XB Peng, A Kumar, G Zhang, S Levine
arXiv preprint arXiv:1910.00177, 2019
Cited by 522 · 2019
AMP: Adversarial Motion Priors for Stylized Physics-Based Character Control
XB Peng, Z Ma, P Abbeel, S Levine, A Kanazawa
ACM Transactions on Graphics (TOG) 40 (4), 1-20, 2021
Cited by 343 · 2021
Terrain-adaptive locomotion skills using deep reinforcement learning
XB Peng, G Berseth, M Van de Panne
ACM Transactions on Graphics (TOG) 35 (4), 1-12, 2016
Cited by 336 · 2016
SFV: Reinforcement learning of physical skills from videos
XB Peng, A Kanazawa, J Malik, P Abbeel, S Levine
ACM Transactions on Graphics (TOG) 37 (6), 1-14, 2018
Cited by 270 · 2018
Variational discriminator bottleneck: Improving imitation learning, inverse RL, and GANs by constraining information flow
XB Peng, A Kanazawa, S Toyer, P Abbeel, S Levine
arXiv preprint arXiv:1810.00821, 2018
Cited by 256 · 2018
Reinforcement learning for robust parameterized locomotion control of bipedal robots
Z Li, X Cheng, XB Peng, P Abbeel, S Levine, G Berseth, K Sreenath
2021 IEEE International Conference on Robotics and Automation (ICRA), 2811-2817, 2021
Cited by 246 · 2021
MCP: Learning composable hierarchical control with multiplicative compositional policies
XB Peng, M Chang, G Zhang, P Abbeel, S Levine
Advances in Neural Information Processing Systems 32, 2019
Cited by 225 · 2019
Learning locomotion skills using DeepRL: Does the choice of action space matter?
XB Peng, M Van de Panne
Proceedings of the ACM SIGGRAPH/Eurographics Symposium on Computer Animation …, 2017
Cited by 213 · 2017
ASE: Large-scale reusable adversarial skill embeddings for physically simulated characters
XB Peng, Y Guo, L Halper, S Levine, S Fidler
ACM Transactions on Graphics (TOG) 41 (4), 1-17, 2022
Cited by 210 · 2022
Deep reinforcement learning for modeling human locomotion control in neuromechanical simulation
S Song, Ł Kidziński, XB Peng, C Ong, J Hicks, S Levine, CG Atkeson, ...
Journal of NeuroEngineering and Rehabilitation 18, 1-17, 2021
Cited by 123 · 2021
Offline meta-reinforcement learning with advantage weighting
E Mitchell, R Rafailov, XB Peng, S Levine, C Finn
International Conference on Machine Learning, 7780-7791, 2021
Cited by 110 · 2021
Legged robots that keep on learning: Fine-tuning locomotion policies in the real world
L Smith, JC Kew, XB Peng, S Ha, J Tan, S Levine
2022 International Conference on Robotics and Automation (ICRA), 1593-1599, 2022
Cited by 105 · 2022
Dynamic terrain traversal skills using reinforcement learning
XB Peng, G Berseth, M Van de Panne
ACM Transactions on Graphics (TOG) 34 (4), 1-11, 2015
Cited by 98 · 2015
Adversarial motion priors make good substitutes for complex reward functions
A Escontrela, XB Peng, W Yu, T Zhang, A Iscen, K Goldberg, P Abbeel
2022 IEEE/RSJ International Conference on Intelligent Robots and Systems …, 2022
Cited by 95 · 2022
Reward-conditioned policies
A Kumar, XB Peng, S Levine
arXiv preprint arXiv:1912.13465, 2019
Cited by 94 · 2019
Trace and pace: Controllable pedestrian animation via guided trajectory diffusion
D Rempe, Z Luo, XB Peng, Y Yuan, K Kitani, K Kreis, S Fidler, O Litany
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2023
Cited by 69 · 2023