"Learning visual servo policies via planner cloning", Ulrich Viereck, Kate Saenko, and Robert Platt. 2020 International Symposium on Experimental Robotics (ISER 2020) [paper, slides].
A longer version of the paper with more details can be found on arXiv.
Learning control policies for visual servoing in novel environments is an important problem. However, standard model-free policy learning methods are slow to learn. This paper explores planner cloning: using behavior cloning to learn policies that mimic the behavior of a full-state motion planner in simulation. We propose Penalized Q Cloning (PQC), a new behavior cloning algorithm. We show that it outperforms several baselines and ablations on challenging problems involving visual servoing in novel environments while avoiding obstacles. Finally, we demonstrate that these policies can be transferred effectively onto a real robotic platform, achieving an approximately 87% success rate both in simulation and on a real robot.
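The core idea of cloning a planner into a Q-function can be sketched as supervising Q-values so that the planner's chosen action scores highest while all other actions are penalized below it. The following is an illustrative simplification, not the paper's exact loss; the function name and fixed-penalty scheme are assumptions:

```python
import numpy as np

def pqc_targets(q_values, expert_action, expert_value, penalty=0.1):
    """Illustrative penalized-cloning targets: the planner's (expert's)
    action regresses toward the planner's value estimate, while every
    other action is pushed to a target below it by a fixed penalty.
    A Q-network would then be trained toward these targets."""
    targets = np.full_like(q_values, expert_value - penalty)
    targets[expert_action] = expert_value
    return targets

# Example: 3 discrete actions, planner chose action 1 with value 1.0.
targets = pqc_targets(np.zeros(3), expert_action=1, expert_value=1.0)
```

The penalty term is what distinguishes this from plain regression to the expert's value: it forces a margin between the cloned action and its alternatives, so the greedy policy reproduces the planner's choice.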
"Adapting control policies from simulation to reality using a pairwise loss", Ulrich Viereck, Xingchao Peng, Kate Saenko, and Robert Platt. 2018 International Symposium on Experimental Robotics (ISER 2018) [paper]
This paper proposes an approach to domain transfer based on a pairwise loss function that helps transfer control policies learned in simulation onto a real robot. We explore the idea in the context of a "category-level" manipulation task where a control policy is learned that enables a robot to perform a mating task involving novel objects. We explore the case where depth images are used as the main form of sensor input. Our experimental results demonstrate that the proposed method consistently outperforms baseline methods that train only in simulation or that combine real and simulated data in a naive way.
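A pairwise loss of this kind typically penalizes the distance between feature embeddings of paired simulated and real images, so the network learns representations that look the same across domains. A minimal sketch; the squared-distance form and function name here are assumptions for illustration, not necessarily the paper's exact formulation:

```python
import numpy as np

def pairwise_loss(feat_sim, feat_real):
    """Mean squared Euclidean distance between paired sim/real feature
    embeddings (rows are paired examples). Minimizing this term pulls
    the two domains together in feature space; in training it would be
    added to the task loss computed on the simulated data."""
    diff = feat_sim - feat_real
    return float(np.mean(np.sum(diff * diff, axis=1)))
```

With perfectly aligned embeddings the term vanishes, leaving only the task loss; misaligned pairs contribute a gradient that moves both encoders toward a shared representation.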
"Learning a visuomotor controller for real world robotic grasping using easily simulated depth images", Ulrich Viereck, Andreas ten Pas, Kate Saenko, and Robert Platt. 2017 1st Annual Conference on Robot Learning (CoRL 2017) [paper, youtube (presentation), arXiv]
We want to build robots that are useful in unstructured real world applications, such as doing work in the household. Grasping in particular is an important skill in this domain, yet it remains a challenge. One of the key hurdles is handling unexpected changes or motion in the objects being grasped and kinematic noise or other errors in the robot. This paper proposes an approach to learning a closed-loop controller for robotic grasping that dynamically guides the gripper to the object. We use a wrist-mounted sensor to acquire depth images in front of the gripper and train a convolutional neural network to learn a distance function to true grasps for grasp configurations over an image. The training sensor data is generated in simulation, a major advantage over previous work that uses real robot experience, which is costly to obtain. Despite being trained in simulation, our approach works well on real noisy sensor images. We compare our controller in simulated and real robot experiments to a strong baseline for grasp pose detection, and find that our approach significantly outperforms the baseline in the presence of kinematic noise, perceptual errors and disturbances of the object during grasping.
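A closed-loop controller built on such a learned distance function can act greedily: at each step, evaluate a set of candidate gripper motions and execute the one with the smallest predicted distance-to-grasp. The sketch below uses a hypothetical `distance_fn` standing in for the trained convolutional network:

```python
import numpy as np

def greedy_step(candidate_offsets, distance_fn):
    """One step of a greedy visuomotor controller: score each candidate
    gripper offset with the learned distance-to-grasp function and
    return the best one. Repeating this each time a new depth image
    arrives yields a closed-loop policy that can react to object
    motion and kinematic noise."""
    scores = [distance_fn(offset) for offset in candidate_offsets]
    return candidate_offsets[int(np.argmin(scores))]

# Example with a stand-in distance function (true grasp at x=1, y=0):
goal = np.array([1.0, 0.0])
best = greedy_step([np.array([0.0, 0.0]), np.array([0.9, 0.1])],
                   lambda o: float(np.linalg.norm(o - goal)))
```

Because the distance function is re-evaluated on every new image, the controller does not commit to a single grasp pose up front, which is what provides robustness to disturbances during the reach.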
"An arm for a leg: Adapting a robotic arm for gait rehabilitation", Giulia Franchi, Ulrich Viereck, Robert Platt, Sheng-Che Yen, Christopher J Hasson. Engineering in Medicine and Biology Society (EMBC), 2015 37th Annual International Conference of the IEEE [paper]
The purpose of this study was to adapt a multipurpose robotic arm for gait rehabilitation. An advantage of this approach is versatility: a robotic arm can be attached to almost any point on the body to assist with lower- and upper-extremity rehabilitation. This may be more cost-effective than purchasing and training rehabilitation staff to use several specialized rehabilitation robots. Robotic arms also have a more human-like morphology, which may make them less intimidating or alien to patients. In this study a mechanical interface was developed that allows a fast, secure, and safe attachment between a robotic arm and a human limb. The effectiveness of this interface was assessed by having two healthy subjects walk on a treadmill with and without a robotic arm attached to their legs. The robot's ability to follow the subjects' swinging legs was evaluated at slow and fast walking speeds. Two different control schemes were evaluated: one using the standard manufacturer-provided control algorithm, and another using a custom algorithm that actively compensated for robot-human interaction forces. The results showed that both robot control schemes performed well for slow walking. There were negligible differences between subjects' gait kinematics with and without the robot. During fast walking with the robot, similar results were obtained for one subject; however, the second subject demonstrated noticeable gait modifications. Together, these results show the feasibility of adapting a multipurpose robotic arm for gait rehabilitation.
"Exploitation of the External JTAG Interface for Internally Controlled Configuration Readback and Self-Reconfiguration of Spartan 3 FPGAs", Katarina Paulsson, Ulrich Viereck, Michael Hübner, Jürgen Becker. IEEE Computer Society Annual Symposium on VLSI, pp. 304-309, 2008 [paper]
Field Programmable Gate Arrays (FPGAs) are increasingly applied in various industrial applications as well as investigated in different research projects. Due to the possibility of performing parallel computations, this kind of hardware architecture is especially interesting for high-performance applications. Dynamic and partial hardware reconfiguration, which is provided by several FPGA families such as the Xilinx Spartan 3 and Virtex 2/4 families, further increases the flexibility of these architectures. The Spartan 3 family was a less attractive choice for performing dynamic and partial reconfiguration due to the lack of an internal configuration port. However, a virtual internal configuration port, JCAP, has previously been realized by using the external JTAG interface. This paper presents an approach for internal configuration readback for failure detection and task migration by extending the JCAP core functionality. The paper also presents the first results from implementing self-reconfiguration over JCAP.
Transferring robotic visuomotor manipulation skills trained on simulated depth images to the real world
We want to build robots that are useful in unstructured real world applications, such as doing work in the household. In our recent work [arXiv] (published at CoRL 2017) we propose an approach to learning a closed-loop controller for robotic grasping that dynamically guides the gripper to the object. Despite being trained in simulation, our approach works well on real noisy sensor images without fine-tuning the trained model on real images.
Now we are extending our approach to a more challenging manipulation task, specifically an assembly task such as inserting a peg into a hole. For this task, the sensor is mounted at an angle and offset to the gripper such that the peg (with part of the gripper) and the hole are visible in the sensor image. The network needs to learn the relative pose between the peg and the hole in order to predict the distance of the peg to the hole after moving the gripper.
Training the same network for this task appears to be more challenging for the following reasons:
The coordinate frame for the motion of the gripper and the coordinate frame of the sensor are not the same (there is now a rotation in addition to the translation). If the angle or offset between camera and gripper on the real robot differs slightly from simulation, then predictions are off.
The model needs to perceive both the pose of the peg and the pose of the hole, not only the objects on the table as was the case for grasping.
The predictions of the distance between the peg and the hole are more sensitive to perceptual noise, and areas with missing depth information in the image might appear like a hole.
The hole might become occluded by clutter objects. Occlusion was not an issue in the grasping task, since the network predicted the distance to the closest good grasp (no matter on which object). If an object was occluded, the network would not predict a grasp on it, since the gripper would collide with the occluding object.
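The first difficulty above (mismatched coordinate frames) comes down to a rigid transform between camera and gripper. A minimal sketch of why calibration error matters; the rotation `R` and offset `t` here are assumed known from calibration, and any small error in them shifts every prediction by the same amount:

```python
import numpy as np

def camera_to_gripper(point_cam, R, t):
    """Map a 3D point from the camera frame into the gripper frame
    using the camera-to-gripper rotation R (3x3) and offset t (3,).
    If R or t on the real robot differ slightly from the values used
    in simulation, every predicted peg-to-hole distance is biased."""
    return R @ point_cam + t

# Example: camera rotated 10 degrees about the z-axis relative to the gripper.
a = np.deg2rad(10.0)
R = np.array([[np.cos(a), -np.sin(a), 0.0],
              [np.sin(a),  np.cos(a), 0.0],
              [0.0,        0.0,       1.0]])
hole_in_gripper = camera_to_gripper(np.array([0.1, 0.0, 0.3]),
                                    R, np.array([0.0, 0.0, 0.05]))
```

For grasping, the sensor looked along the gripper approach axis, so this transform was nearly trivial; the angled mount for peg insertion makes the policy sensitive to it.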
Our experiments confirm that the peg insertion task is more challenging. Preliminary results indicate that the network trained only on simulated images does not transfer well to real images for the peg insertion task, and requires some method of domain adaptation between the simulated and real image domains, or fine-tuning with real images.
The following video shows trials of simulated peg insertion in the presence of clutter: