Searching-Eye
DFRL-E: Initiated as part of ECAN and generalized for planners from April 2011 onwards.
Target: Autonomous navigation with zero prior knowledge of the environment.

Description: DFRL-E is a coordinated RL-agent framework. One RL agent (A1) decides the waypoints in the field of view (FOV) of the robot, and the other (A2) decides how many waypoints A1 should locate. A1 operates independently of the path planner, whereas A2 depends on it. The goal of A2 is to make planning more efficient, beyond the capabilities of the path planner alone. For example, if a path planner (say, unconstrained spline interpolation) has no inherent collision avoidance, A2 can make it avoid obstacles by choosing the right number of waypoints, which A1 then locates (experimentally tested). A1 learns a policy that is independent of both the path planner and the environment. Once learned, A1 does not need to re-learn its policy, even if the space dimensionality increases: for example, A1's policy for planning in 2D can be used directly for planning in 3D without learning from a single 3D sample. A2 learns a specific policy for every path planner, which is environment-independent. This unique property of the RL agents, i.e. learning domain-, environment-, and space-independent policies, makes the DFRL-E architecture applicable in completely unknown and unseen environments. The robot uses no information beyond its FOV.
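The two-agent coordination described above can be sketched as follows. This is a minimal illustration only: the class and method names (A1, A2, place_waypoint, num_waypoints) are hypothetical stand-ins, and the trained policies are replaced with trivial placeholders.

```python
import random

class A1:
    """Places a single waypoint inside the current FOV (planner-independent)."""
    def place_waypoint(self, fov):
        # A trained policy would pick the best FOV cell; random stand-in here.
        return random.choice(fov)

class A2:
    """Chooses how many waypoints A1 should locate (learned per path planner)."""
    def num_waypoints(self, fov, planner_name):
        # A trained policy would adapt this count to the planner's weaknesses.
        return 3

def step(robot_pos, fov, planner, a1, a2):
    """One planning step: A2 picks a count, A1 picks waypoints,
    and the external path planner only connects them."""
    n = a2.num_waypoints(fov, planner.name)
    waypoints = [a1.place_waypoint(fov) for _ in range(n)]
    return planner.plan(robot_pos, waypoints)
```

Note that the path planner sees only the waypoints, which is how the framework stays agnostic to the planner's internals.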

DFRL-E effectively breaks the planning problem into smaller subproblems. This property lets DFRL-E use NP-hard, MILP-based motion planners efficiently. In experiments, a domain that required roughly 14 hours of planning time with the MILP planner alone took only 78 seconds with DFRL-E + the MILP planner.
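A toy cost model illustrates why this decomposition pays off so dramatically (the exponential model below is an assumption for illustration, not a measurement from the experiments): if a MILP planner's running time grows exponentially with the number of decision steps, solving many short FOV-sized subproblems is far cheaper than solving one long horizon at once.

```python
def milp_cost(steps, base=2.0):
    """Toy model: planning cost exponential in the number of steps."""
    return base ** steps

def decomposed_cost(total_steps, segment):
    """Cost of covering the same horizon as FOV-sized segments."""
    full, rem = divmod(total_steps, segment)
    return full * milp_cost(segment) + (milp_cost(rem) if rem else 0.0)

whole = milp_cost(40)           # 2**40, about 1.1e12 cost units
split = decomposed_cost(40, 5)  # 8 * 2**5 = 256 cost units
```

The actual speed-up depends on the planner and domain, but the shape of the argument is the same: exponential cost in problem size makes many small problems vastly cheaper than one large one.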
DFRL-i: Delta Formation of RL Agents for Navigation in Unknown Environments

Features:
- Autonomous Waypoint Generation For Navigation In Unknown & Unseen Environments With Any Existing Path or Motion Planner
- Intra- and Inter-Space Transfer of Value Function: Learning in 2D-space but using the same policy or value function for navigation in 3D-space
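One way such 2D-to-3D transfer can work is if the policy's state features are dimension-agnostic, i.e. built from vector norms rather than raw coordinates. The sketch below is an illustrative assumption about how this kind of transfer is possible, not the actual DFRL-E feature set.

```python
def state_features(robot, goal, obstacles):
    """Dimension-agnostic features: the same function handles 2D and 3D
    positions because it only uses distances, never raw coordinates."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return (dist(robot, goal),                      # distance to goal
            min(dist(robot, o) for o in obstacles)) # nearest-obstacle clearance
```

A value function learned over such features in 2D remains well-defined when the inputs become 3D points, which is the precondition for reusing it without any 3D samples.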

Recent Breakthrough
- December 27, 2012: Robust Waypoint Generation with 2-RL Agents and Quadratic Programming based Oscillation Controller for Motion Planning of a Car-Like Robot in Unknown and Unseen 2D-Environments
Development: For the step-by-step development of the DFRL-E (AWSF) framework, please see the Old Page

Most Recent Developments:
  • Safe and Robust Waypoint Generation for On-line Motion Planning of a Car Like Robot (December 27, 2012)
  • Non-Holonomic Online Motion Planning in Unknown Environments: Video (August 1, 2012)

Ongoing Research: Robust Waypoint Generation for Safe Motion Planning in Unknown Environments (Breakthrough - December 27, 2012)
Previous work in the DFRL-E project focused on path planning in unknown environments. However, once the robot's dynamic constraints are incorporated into the framework, a better strategy is needed: one that is collision-free and guarantees that, no matter what obstacles the robot encounters during navigation, it will never collide with them. The ongoing research may use machine learning techniques to classify the safe and unsafe waypoints generated by the RL agent. This safety information can then be fed back to the RL agent, which decides whether to continue pursuing the current waypoint or to do something else.
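The classify-then-feed-back idea can be sketched as below. Everything here is hypothetical: a real system would learn the classifier from recorded collision outcomes, whereas this stand-in simply thresholds the clearance feature.

```python
def waypoint_features(waypoint, robot_pos, obstacles):
    """Illustrative features for a safety classifier: distance from the
    robot to the waypoint, and the waypoint's clearance to the nearest
    known obstacle."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return dist(waypoint, robot_pos), min(dist(waypoint, o) for o in obstacles)

def is_safe(waypoint, robot_pos, obstacles, min_clearance=1.0):
    """Stand-in for a learned safe/unsafe classifier: waypoints closer
    than min_clearance to an obstacle are flagged so the RL agent can
    abandon them and re-plan."""
    _, clearance = waypoint_features(waypoint, robot_pos, obstacles)
    return clearance >= min_clearance
```

In the envisioned loop, an "unsafe" label would be the signal for the RL agent to stop pursuing the current waypoint and generate a new one.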


Software:
A Python 3.2 package for the DFRL-E framework is under development. This package will implement the AWGS framework discussed in the IROS 2012 workshop paper. It will also contain the extended framework, AWSF, which uses two RL agents for autonomous navigation in unknown 2D and 3D environments. Integrated planners will include RRTs, A*, a holonomic motion planner, unconstrained spline interpolation, and ECAN.

Expected Date: February 1st, 2013


Publications: