I am working on perception, decision making and motion planning for autonomous driving.
Before digging into self-driving cars, I was a robotics engineer/researcher at
Dorabot Inc., where I worked on multi-robot systems.
Before that, I spent half a year at the City University of Hong Kong (CityU),
supervised by Prof. Jia Pan, working on multi-agent collision avoidance.
Since then, I have been collaborating with Prof. Jia Pan on machine learning
for robotic perception, planning, and control.
My current interests lie at the intersection of autonomous driving, robotics, reinforcement learning, and deep learning.
In particular, I am interested in designing machine learning algorithms that learn a driving policy enabling robotic cars
to drive safely, reliably, and efficiently in complex environments.
As a first step toward closing the performance gap between decentralized and centralized multi-robot collision avoidance,
we present a multi-scenario multi-stage training framework that
finds an optimal policy trained simultaneously over a large number
of robots in rich, complex environments using
a policy-gradient-based reinforcement learning algorithm.
This paper is our first step toward learning a reactive
policy for multi-agent collision avoidance. By carefully designing the data collection process
and leveraging an end-to-end learning framework, our method
learns a deep-neural-network-based collision avoidance
policy that demonstrates an advantage over the state-of-the-art ORCA policy in terms of ease of use,
success rate, and navigation performance.
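The decentralized setup above can be sketched as a per-robot mapping from local observations to a velocity command. This is a minimal illustration only, with hypothetical input sizes and random placeholder weights; in the training framework described above, such a network would be optimized with a policy-gradient reinforcement learning algorithm over many robots at once.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical observation sizes: 2D laser scan, relative goal, current velocity.
SCAN_DIM, GOAL_DIM, VEL_DIM, HIDDEN = 512, 2, 2, 64

# Random placeholder weights; these would be learned, not hand-set.
W1 = rng.normal(scale=0.01, size=(SCAN_DIM + GOAL_DIM + VEL_DIM, HIDDEN))
W2 = rng.normal(scale=0.01, size=(HIDDEN, 2))  # output: (linear v, angular w)

def policy(scan, rel_goal, cur_vel):
    """Map one robot's local observation to a velocity command (v, w)."""
    x = np.concatenate([scan, rel_goal, cur_vel])
    h = np.maximum(x @ W1, 0.0)      # ReLU hidden layer
    v, w = np.tanh(h @ W2)           # squash both outputs to [-1, 1]
    return 0.5 * (v + 1.0), w        # v in [0, 1] m/s, w in [-1, 1] rad/s

cmd = policy(rng.random(SCAN_DIM), np.array([3.0, 1.0]), np.zeros(VEL_DIM))
```

Because each robot runs the same policy on its own sensor data, no central coordinator or communication channel is needed at execution time.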
We present our pick-and-place system in detail while highlighting
our design principles for warehouse settings, including
a perception method that leverages knowledge of the
workspace, three grippers designed to handle objects that vary widely
in shape, weight, and material, and
grasp planning in cluttered scenarios.
We propose a data-driven approach to modeling contextual information
that covers both intra-object part relations and inter-object layouts.
Our method combines the detection of individual objects and object groups within the same framework,
enabling contextual analysis without knowing the objects in the scene a priori.
We present an intrusive acquisition approach for acquiring and modeling plants and foliage,
which disassembles the plant into disjoint parts that can be accurately scanned and reconstructed offline.
We propose autonomous scene scanning by a robot to relieve humans of this tedious task.
The presented algorithm interleaves scene analysis, which extracts objects, with robot-conducted validation,
which improves the segmentation and object-aware reconstruction.
We propose a quality-driven, Poisson-guided autonomous scanning method to ensure high-quality scanning of the model.
This goal is achieved by placing the scanner at strategically selected Next-Best-Views (NBVs) so as to progressively capture the
geometric details of the object until both completeness and high fidelity are reached.
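The NBV selection loop can be illustrated with a toy greedy scorer. This sketch assumes a hypothetical voxel confidence grid and precomputed visibility masks; the actual method is quality-driven and Poisson-guided, whereas this only shows the idea of ranking candidate scanner poses by how much poorly reconstructed surface they would cover.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-voxel reconstruction quality in [0, 1]; low = needs scanning.
confidence = rng.random((10, 10, 10))

# Hypothetical candidate views, each with a precomputed boolean visibility mask.
candidates = {f"view{i}": rng.random(confidence.shape) < 0.5 for i in range(6)}

def gain(mask, conf, thresh=0.3):
    """Score a view by how many poorly reconstructed voxels it would cover."""
    return int(np.count_nonzero(mask & (conf < thresh)))

# Greedy NBV choice: the pose with the largest expected quality gain.
best = max(candidates, key=lambda v: gain(candidates[v], confidence))
```

After scanning from the chosen view, the confidence grid would be updated and the loop repeated until completeness and fidelity criteria are met.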
We designed several quadruped robots from scratch and implemented discrete reaching movements
and rhythmic movements (four different gaits) on these robots using Central Pattern Generator (CPG) based
locomotion control methods.
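A common way to realize CPG-based rhythmic movement is with one limit-cycle oscillator per leg, with gait determined by the phase offsets between legs. The sketch below uses Hopf oscillators and illustrative walk-gait offsets; the offset pattern, frequencies, and joint mapping here are assumptions for illustration, not the specific controllers used on our robots.

```python
import numpy as np

def hopf_step(x, y, mu=1.0, omega=2 * np.pi, dt=0.01):
    """One Euler step of a Hopf oscillator converging to a unit limit cycle."""
    r2 = x * x + y * y
    dx = (mu - r2) * x - omega * y
    dy = (mu - r2) * y + omega * x
    return x + dx * dt, y + dy * dt

# Illustrative per-leg phase offsets for a walk gait; a trot, pace, or bound
# gait would use a different offset pattern.
phases = np.array([0.0, 0.5, 0.25, 0.75]) * 2 * np.pi
x, y = np.cos(phases), np.sin(phases)

traj = []
for _ in range(500):
    x, y = hopf_step(x, y)
    traj.append(x.copy())  # x could drive each leg's hip joint angle
traj = np.array(traj)      # (timesteps, legs) rhythmic joint trajectories
```

Because the oscillators converge to a stable limit cycle, the generated rhythm recovers smoothly from perturbations, which is one reason CPG controllers are popular for legged locomotion.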
Speech Controlled Mobile Robot
Pinxin Long, Ke Zhao, Jian Zheng, 2010
We built a robot using STM32 and LD3320 chips running a Chinese speech-recognition application.
The application enables the robot to perform various movements (e.g., move forward, turn left, stop)
in response to spoken user commands.
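The control logic for such a robot reduces to mapping recognized phrases to motion commands. This is a schematic sketch with hypothetical command names; on the actual STM32 firmware the dispatch would drive motor GPIO/PWM outputs rather than return strings.

```python
# Hypothetical mapping from recognized Chinese phrases (pinyin) to actions.
COMMANDS = {
    "qian jin": "move_forward",  # 前进
    "zuo zhuan": "turn_left",    # 左转
    "ting zhi": "stop",          # 停止
}

def dispatch(recognized: str) -> str:
    """Map a recognized phrase to a motion command; stop on unknown input."""
    return COMMANDS.get(recognized, "stop")
```

Defaulting to "stop" on unrecognized input is a simple safety choice for a mobile robot driven by imperfect speech recognition.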