Pinxin Long
pinxinlong at gmail dot com

I am working with Prof. Huazhe Xu on learning for robotic manipulation. Previously, I was a Staff Software Engineer at Baidu Apollo working on the Apollo Navigation Pilot (ANP, i.e., Baidu's Full Self-Driving, an advanced driver-assistance system), which was successfully deployed on Jiyue-01 cars and delivered to customers by the end of 2023. Before moving into self-driving, I was a Researcher and Senior Software Engineer at the Robotics and Auto-driving Lab (RAL), Baidu Research, under the mentorship of Prof. Ruigang Yang and Prof. Dinesh Manocha, focusing on the design and development of the Autonomous Excavator System (AES). Prior to that, I was a Research Scientist at Dorabot Inc. and a Research Assistant working with Prof. Jia Pan on robotic navigation and multi-robot collision avoidance.

Google Scholar

News

Research

I am interested in artificial intelligence, machine learning and robotics. Representative papers are highlighted (* denotes equal contribution).

An autonomous excavator system for material loading tasks
Liangjun Zhang, Jinxin Zhao, Pinxin Long, Liyang Wang, Lingfeng Qian, Feixiang Lu, Xibin Song, Dinesh Manocha
Science Robotics, 2021.
video

We present an autonomous excavator system (AES) for material loading tasks. Our system can handle different environments and uses an architecture that combines perception and planning. AES has been deployed for real-world operations for long periods and can operate robustly in challenging scenarios.

Optimization-based framework for excavation trajectory generation
Yajue Yang, Pinxin Long, Xibin Song, Jia Pan, Liangjun Zhang
IEEE Robotics and Automation Letters (RAL), 2021.
arXiv

We present a novel optimization-based framework for autonomous excavator trajectory generation under various objectives, including minimum joint displacement and minimum time.

Distributed multi-robot collision avoidance via deep reinforcement learning for navigation in complex scenarios
Tingxiang Fan*, Pinxin Long*, Wenxi Liu, Jia Pan
The International Journal of Robotics Research (IJRR), 2020.
project / video / arXiv

We present a decentralized sensor-level collision avoidance policy for multi-robot systems. The learned policy is also integrated into a hybrid control framework to further improve the policy's robustness and effectiveness. Our learned policy enables a robot to make effective progress in a crowd without getting stuck.

Learning resilient behaviors for navigation under uncertainty
Tingxiang Fan, Pinxin Long, Wenxi Liu, Jia Pan, Ruigang Yang, Dinesh Manocha
International Conference on Robotics and Automation (ICRA), 2020.
video / arXiv

We present a novel approach for uncertainty-aware navigation: an uncertainty-aware predictor models the environmental uncertainty, and an uncertainty-aware navigation network learns resilient behaviors in previously unknown environments.

Getting Robots Unfrozen and Unlost in Dense Pedestrian Crowds
Tingxiang Fan*, Xinjing Chen*, Jia Pan, Pinxin Long, Wenxi Liu, Ruigang Yang, Dinesh Manocha
IEEE Robotics and Automation Letters (RAL), 2019.
project / video / arXiv

We aim to enable a mobile robot to navigate through environments with dense crowds, e.g., shopping malls, canteens, train stations, or airport terminals. Here we propose a navigation framework that handles the robot-freezing and robot-lost problems simultaneously.

Towards Optimally Decentralized Multi-Robot Collision Avoidance via Deep Reinforcement Learning
Pinxin Long*, Tingxiang Fan*, Xinyi Liao, Wenxi Liu, Hao Zhang, Jia Pan
International Conference on Robotics and Automation (ICRA), 2018.
project / video (youtube), video (bilibili) / arXiv

As a first step toward reducing the performance gap between decentralized and centralized multi-robot collision avoidance, we present a multi-scenario multi-stage training framework to find an optimal policy that is trained over a large number of robots in rich, complex environments simultaneously using a policy-gradient-based reinforcement learning algorithm.

Deep-Learned Collision Avoidance Policy for Distributed Multi-Agent Navigation
Pinxin Long, Wenxi Liu, Jia Pan
IEEE Robotics and Automation Letters (RAL), 2017.
project (video) / arXiv

This paper is our first step toward learning a reactive policy for multi-agent collision avoidance. By carefully designing the data collection process and leveraging an end-to-end learning framework, our method learns a deep-neural-network-based collision avoidance policy that demonstrates an advantage over the state-of-the-art ORCA policy in terms of ease of use, success rate, and navigation performance.

DoraPicker: An Autonomous Picking System for General Objects
Hao Zhang, Pinxin Long, Dandan Zhou, Zhongfeng Qian, Zheng Wang, Weiwei Wan, Dinesh Manocha, Chonhyon Park, Tommy Hu, Chao Cao, Yibo Chen, Marco Chow, Jia Pan
International Conference on Automation Science and Engineering (CASE), 2016.
video / arXiv

We present our pick-and-place system in detail and highlight our design principles for warehouse settings, including a perception method that leverages knowledge about the workspace, three grippers designed to handle a large variety of objects differing in shape, weight, and material, and grasp planning in cluttered scenarios.

Data-Driven Contextual Modeling for 3D Scene Understanding
Yifei Shi, Pinxin Long, Kai Xu, Hui Huang, Yueshan Xiong
Computers & Graphics (C&G), 2016.
project

We propose a data-driven approach to modeling contextual information covering both intra-object part relations and inter-object layouts. Our method combines the detection of individual objects and object groups within the same framework, enabling contextual analysis without knowing the objects in the scene a priori.

Full 3D Plant Reconstruction via Intrusive Acquisition
Kangxue Yin, Hui Huang, Pinxin Long, Alexei Gaissinski, Minglun Gong, Andrei Sharf
Computer Graphics Forum (CGF), 2016.
project / data

We present an intrusive acquisition approach for acquiring and modeling of plants and foliage, which disassembles the plant into disjoint parts that can be accurately scanned and reconstructed offline.

Autoscanning for Coupled Scene Reconstruction and Proactive Object Analysis
Kai Xu, Hui Huang, Yifei Shi, Hao Li, Pinxin Long, Jianong Caichen, Wei Sun, Baoquan Chen
ACM Transactions on Graphics (SIGGRAPH Asia 2015), 2015.
project / slides / video (youtube), video (youku) / code

We propose autonomous scene scanning by a robot to relieve humans from this tedious task. The presented algorithm interleaves scene analysis for extracting objects with robot-conducted validation for improving the segmentation and object-aware reconstruction.

Quality-driven Poisson-guided Autoscanning
Shihao Wu, Wei Sun, Pinxin Long, Hui Huang, Daniel Cohen-Or, Minglun Gong, Oliver Deussen, Baoquan Chen
ACM Transactions on Graphics (SIGGRAPH Asia 2014), 2014.
project / slides / video (youtube), video (youku) / live show / code

We propose a quality-driven, Poisson-guided autonomous scanning method to ensure high-quality scanning of the model. This goal is achieved by placing the scanner at strategically selected Next-Best-Views (NBVs) to progressively capture the geometric details of the object until both completeness and high fidelity are reached.

Old Projects

Dr. Tea
Pinxin Long, Zhe Hu, Wei Li, Ruigang Yang. 2019.06

We built a mobile manipulator to perform traditional Sichuan tea acrobatics, where the robot pours tea from a long-spouted teapot into a teabowl.

Quadruped Robots
Pinxin Long, Hongjin Yu, Haoxing Guo, Ke Zhao. 2010 - 2011

We designed several quadruped robots from scratch and implemented discrete reaching movements and rhythmic movements (four different gaits) on these robots using Central Pattern Generator (CPG) based locomotion control methods.

Speech Controlled Mobile Robot
Pinxin Long, Ke Zhao, Jian Zheng. 2010

We built a robot based on STM32 and LD3320 chips running a Chinese speech recognition application, which enables the robot to perform various movements (e.g., move forward, turn left, stop) in response to spoken commands.



This nice webpage is "stolen" from here.