Human thumb next to our OmniTact sensor, and a US penny for scale.
Touch has been shown to be important for dexterous manipulation in
robotics. Recently, the GelSight sensor has attracted significant interest
in learning-based robotics due to its low cost and rich signal. For example,
GelSight sensors have been used for learning to insert USB cables (Li et al.,
2014), roll a die (Tian et al., 2019), or grasp objects (Calandra et al.,
2017).
Learning-based methods work well with GelSight sensors because these sensors
output high-resolution tactile images from which a variety of features, such
as object geometry, surface texture, and normal and shear forces, can be
estimated, and these features often prove critical to robotic control. The
tactile images can be fed into standard CNN-based computer vision pipelines,
allowing the use of a variety of learning-based techniques: in Calandra et al.
2017, a grasp-success classifier is trained on GelSight data collected in a
self-supervised manner; in Tian et al. 2019, Visual Foresight, a
video-prediction-based control algorithm, is used to make a robot roll a die
purely based on tactile images; and in Lambeta et al. 2020, a model-based
RL algorithm is applied to in-hand manipulation using GelSight images.
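To make this concrete, here is a minimal sketch, assuming PyTorch, of the kind of grasp-success classifier described above: a small CNN that maps a GelSight-style tactile image to a probability that a grasp will succeed. The class name, layer sizes, and 128×128 input resolution are illustrative assumptions, not taken from any of the cited papers.

```python
# Minimal sketch (not the authors' code): a small CNN that maps a tactile
# image to a grasp-success probability, trained as a binary classifier on
# self-supervised grasp outcomes. Architecture and input size are assumptions.
import torch
import torch.nn as nn

class TactileGraspClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2, padding=2),  # tactile RGB image
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2, padding=2),
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global average pooling -> 64-dim feature
        )
        self.head = nn.Linear(64, 1)  # logit for grasp success

    def forward(self, tactile_image):
        x = self.features(tactile_image).flatten(1)
        return self.head(x)

# Example: a batch of 8 tactile images (assumed 3x128x128) -> success probabilities.
model = TactileGraspClassifier()
logits = model(torch.randn(8, 3, 128, 128))
probs = torch.sigmoid(logits)  # probability that each grasp will succeed
```

The same pattern carries over to the other examples: because the sensor output is just an image, the tactile stream can be plugged into video-prediction or model-based RL pipelines with little modification.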
Unfortunately, applying GelSight sensors in practical real-world scenarios is
still challenging due to their large size and the fact that they are only
sensitive on one side. Here we introduce a new, more compact tactile sensor
design based on GelSight that allows for omnidirectional sensing, i.e., the
sensor is sensitive on all sides like a human finger, and show how this opens up new
possibilities for sensorimotor learning. We demonstrate this by teaching a
robot to pick up electrical plugs and insert them purely based on tactile
feedback.