
Postdoctoral position - Mobile robot learning with minimal supervision

LabEx IMobS3, Université Clermont Auvergne (France)

Deadline: 19 Aug 2021


In the framework of the research project “Innovative systems and services for transport and production” IDEX/I-SITE CAP 20-25 (Challenge 2) and the LabEx IMobS3, and thanks to a FrenchTech chair program, a postdoctoral position is offered to highly motivated candidates interested in computer vision and mobile robots.

Subject

Mobile robot learning with minimal supervision

Context of the project

This postdoc is funded through the FrenchTech/I-SITE CAP 20-25 Chaire d’Excellence program. The candidate will join the Image, Perception Systems and Robotics group of Institut Pascal, which has long-standing experience in computer vision and mobile robotics. This research will be conducted in the context of an ongoing collaboration between Institut Pascal and Prof. Jochen Triesch from the Frankfurt Institute for Advanced Studies (FIAS).

Scientific project and objectives

The combination of reinforcement learning and deep neural networks has led to impressive results in the past few years, such as computers outperforming humans in certain games (Mnih et al., 2015; Silver et al., 2016), and has shown promising results in robotic tasks when some priors are available (Lillicrap et al., 2015; Finn et al., 2016). However, learning complex tasks for mobile manipulation robots without strong priors remains a challenge, since only very specific behaviors lead to any reward. Discovering these behaviors by exploring the consequences of random movements is extremely improbable. This suggests that the learning process cannot rely on random exploration but must be structured intelligently.
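As a rough numerical illustration of this sparse-reward problem (a toy sketch, not part of the project code), the following Python snippet estimates how often purely random joint motions of a simulated planar arm end up touching a target; the link lengths, target position and touch threshold are made-up values.

import numpy as np

rng = np.random.default_rng(0)
LINKS = np.array([0.3, 0.25, 0.15])   # hypothetical link lengths (m)
TARGET = np.array([0.45, 0.35])       # hypothetical target position (m)
TOUCH_RADIUS = 0.02                   # "touch" threshold (m)

def end_effector(joint_angles):
    # Forward kinematics of a planar 3-link arm.
    angles = np.cumsum(joint_angles)
    return np.array([np.sum(LINKS * np.cos(angles)),
                     np.sum(LINKS * np.sin(angles))])

def random_episode(steps=50, step_size=0.1):
    # Apply random joint increments; reward 1 only if the final pose touches the target.
    q = rng.uniform(-np.pi, np.pi, size=3)
    for _ in range(steps):
        q = q + rng.normal(0.0, step_size, size=3)
    return float(np.linalg.norm(end_effector(q) - TARGET) < TOUCH_RADIUS)

episodes = 100_000
hits = sum(random_episode() for _ in range(episodes))
print(f"touch reward in {hits} of {episodes} random episodes")

With these (arbitrary) numbers the touch event occurs in only a tiny fraction of episodes, which is exactly the signal-starved regime that motivates a more structured, stage-wise learning process.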

In a recent PhD thesis, we proposed a new deep reinforcement learning framework to address this problem in the context of learning visuo-motor tasks (De La Bourdonnaye et al., 2017, 2018, 2019). We considered an object-reaching task on a robotic platform comprising an active binocular vision system and a robot arm, and demonstrated stage-wise sensori-motor learning using only minimal supervision. In particular, no forward/inverse kinematics, pre-trained visual modules, expert knowledge, or calibration parameters were used. Nevertheless, by following a stage-wise learning regime, in which difficult skills are learned on top of simpler ones, the complex touching skill was learned quickly. In this postdoc project, we propose to address more challenging tasks in the context of mobility by integrating a mobile robot platform.
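The following toy Python sketch (an illustrative assumption, not the thesis code) shows the stage-wise idea in one dimension: a simple "fixation" stage is learned first, and its output is then reused as a dense shaping term when learning the harder, sparsely rewarded "touching" stage.

import numpy as np

rng = np.random.default_rng(0)

# Stage 1: learn to estimate the object position from noisy observations
# (a 1-D stand-in for learning binocular fixation without calibration).
true_pos = rng.uniform(-1.0, 1.0, size=(500, 1))
noisy_obs = true_pos + rng.normal(0.0, 0.05, size=true_pos.shape)
w, *_ = np.linalg.lstsq(noisy_obs, true_pos, rcond=None)   # least-squares "fixation" model
predict_pos = lambda o: (np.atleast_1d(o) @ w).item()

# Stage 2: learn to touch; the sparse reward is complemented by a dense
# shaping term derived from the stage-1 estimate.
def episode_return(policy_gain, object_pos, use_shaping=True):
    hand, total = 0.0, 0.0
    est = predict_pos(object_pos + rng.normal(0.0, 0.05))   # stage-1 output
    for _ in range(20):
        hand += policy_gain * (est - hand)                  # simple proportional policy
        sparse = 1.0 if abs(hand - object_pos) < 0.02 else 0.0   # rare "touch" reward
        shaped = -abs(est - hand) if use_shaping else 0.0        # dense stage-1 shaping
        total += sparse + 0.1 * shaped
    return total

# Crude hill climbing over a single policy parameter (a stand-in for deep RL).
gain, best = 0.0, -np.inf
for _ in range(200):
    cand = gain + rng.normal(0.0, 0.1)
    score = np.mean([episode_return(cand, rng.uniform(-1, 1)) for _ in range(20)])
    if score > best:
        gain, best = cand, score
print(f"learned policy gain: {gain:.2f}")

Without the shaping term from stage 1, the second stage would only ever see the rare sparse reward; with it, every step carries a learning signal, which is the essence of building difficult skills on top of simpler ones.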

First, our binocular vision framework will be adapted to deal with the varying backgrounds and scenes that arise when the robot is moving. Second, using the additional degrees of freedom, the robot will autonomously learn to approach objects so that it can touch and manipulate them. In our previous work, we have already shown how the robot can autonomously learn whether an object is within reach; now the robot will also learn to move towards the object so that it can be reliably touched and manipulated. In the final step, the robot will autonomously learn to follow objects (or people) through the environment. We will first restrict ourselves to simulations and consider real-robot experiments depending on progress and results.
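As a purely hypothetical example of the kind of simulated approach-and-touch setting meant here (class and parameter names are ours, not those of an existing simulator), the sketch below defines a toy 2-D environment in which a mobile base must reach a randomly placed object and only receives a reward for the touch event itself.

import numpy as np

class ApproachTouchEnv:
    # Toy 2-D world: the agent controls (dx, dy) of a mobile base; an episode
    # succeeds when the base comes within touch range of a randomly placed object.
    def __init__(self, touch_radius=0.1, max_steps=100, seed=0):
        self.touch_radius = touch_radius
        self.max_steps = max_steps
        self.rng = np.random.default_rng(seed)

    def reset(self):
        self.base = np.zeros(2)
        self.obj = self.rng.uniform(-2.0, 2.0, size=2)
        self.t = 0
        return np.concatenate([self.base, self.obj])      # observation

    def step(self, action):
        self.base = self.base + np.clip(action, -0.1, 0.1)
        self.t += 1
        touched = np.linalg.norm(self.base - self.obj) < self.touch_radius
        reward = 1.0 if touched else 0.0                   # sparse touch reward only
        done = touched or self.t >= self.max_steps
        return np.concatenate([self.base, self.obj]), reward, done

# Sanity check with a hand-coded "move toward the object" policy;
# a learned policy would take its place in the actual study.
env = ApproachTouchEnv()
obs, done, total = env.reset(), False, 0.0
while not done:
    direction = obs[2:] - obs[:2]
    obs, r, done = env.step(0.1 * direction / (np.linalg.norm(direction) + 1e-8))
    total += r
print("episode return:", total)

The object-following stage would replace the static object by a moving one and lengthen the horizon, but the same interface (observation, action, sparse reward) would carry over.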

Requirements

References

De La Bourdonnaye, F., Teulière, C., Triesch, J., & Chateau, T. (2019). Within Reach? Learning to touch objects without prior models. In ICDL-EpiRob 2019.

De La Bourdonnaye, F., Teulière, C., Triesch, J., & Chateau, T. (2018). Stage-wise learning of reaching using little prior knowledge. Frontiers in Robotics and AI, 5.

De La Bourdonnaye, F., Teulière, C., Chateau, T., & Triesch, J. (2017, May). Learning of binocular fixations using anomaly detection with deep reinforcement learning. In IJCNN 2017 (pp. 760-767).

Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A. A., Veness, J., Bellemare, M. G., ... & Petersen, S. (2015). Human-level control through deep reinforcement learning. Nature, 518(7540), 529.

Silver, D., Huang, A., Maddison, C. J., Guez, A., Sifre, L., Van Den Driessche, G., ... & Dieleman, S. (2016). Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587).

Lillicrap, T. P., Hunt, J. J., Pritzel, A., Heess, N., Erez, T., Tassa, Y., ... & Wierstra, D. (2015). Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971.

Finn, C., Tan, X. Y., Duan, Y., Darrell, T., Levine, S., & Abbeel, P. (2016, May). Deep spatial autoencoders for visuomotor learning. In ICRA 2016.

Information and Contact

Advisors:

- Prof. Jochen Triesch (Frankfurt Institute for Advanced Studies)
- Dr. Céline Teulière (Institut Pascal, UCA)

Duration: one-year contract

Starting date: June 2021

Research Group: Institut Pascal

University: Université Clermont Auvergne (UCA) – Clermont-Ferrand, France

Contact: celine.teuliere@uca.fr
