
Modeling and Control of Robots

Motion Planning

Motion planning is the problem of finding a robot motion from a start state to a goal state that avoids obstacles in the environment (and also satisfies other constraints).

Fig. 42 Robot arm motion planning

Configuration Space

Configuration of a robot: a representation of a robot pose, typically using the joint vector \(q \in \mathbb{R}^n\).

Configuration space (C-space): the space that contains all possible robot configurations.

Free space \(C_{free}\): the subset of configurations where the robot does not contact any obstacle.

Obstacle space \(C_{obs}\): the subset of configurations where the robot is in collision with an obstacle.

The robot configuration space \(C\) is the union of the free space \(C_{free}\) and the obstacle space \(C_{obs}\): \(C = C_{free} \cup C_{obs}\).

State of a robot: defined as \(x=(q,v)\), combining the robot configuration \(q\) and its velocity \(v=\dot{q}\).

Equation of motion (dynamics) of a robot: an ordinary differential equation (ODE) defined on the robot state \(x\) and the control input \(u\) (e.g., the input force).
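For example (a standard form, not specific to this course), for the 2R arm in Fig. 44 the manipulator dynamics \(M(q)\ddot{q} + C(q,\dot{q})\dot{q} + g(q) = u\) can be written as a first-order ODE on the state, \(\dot{x} = f(x,u)\), with \(x=(q,\dot{q})\) and \(f(x,u) = \big(\dot{q},\; M(q)^{-1}(u - C(q,\dot{q})\dot{q} - g(q))\big)\).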

Some examples:

Fig. 43 (a) A circular mobile robot (open circle) and a workspace obstacle (gray triangle). The configuration of the robot is represented by \((x, y)\) at its center. (b) In C-space, the obstacle is “grown” by the radius of the robot and the robot is treated as a point. Any \((x, y)\) configuration outside the bold line is collision-free.

Fig. 44 (Left) The joint angles of a 2R robot arm. (Middle) The arm navigating among obstacles A, B, and C. (Right) The same motion in C-space. Three intermediate points, 4, 7, and 10, along the path are labeled.

Optimization-based motion planning (NOT covered)

Robot motion is generated by solving an optimization problem that minimizes (or maximizes) a given cost function (such as jerk or control energy) while satisfying various constraints, such as obstacle avoidance and robot dynamics. Mathematically,

Fig. 45 General formulation for optimization-based robot motion planning
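A generic version of this formulation (a sketch; the exact cost and constraint terms shown in Fig. 45 may differ) is: find a state trajectory \(x(t)\) and control input \(u(t)\) over a horizon \(T\) that minimize a cost such as \(\int_0^T \ell(x(t),u(t))\,dt\), subject to the robot dynamics \(\dot{x}=f(x,u)\), obstacle-avoidance constraints \(q(t)\in C_{free}\), control limits, and boundary conditions \(x(0)=x_{start}\) and \(x(T)=x_{goal}\).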

Some methods (please read the original papers if you are interested):

Covariant Hamiltonian Optimization for Motion Planning (CHOMP)

Stochastic Trajectory Optimization for Motion Planning (STOMP)

Trajectory Optimization (Direct Collocation)

Differential Dynamic Programming (DDP)

Iterative Linear Quadratic Regulator (iLQR)

Pontryagin Differentiable Programming (PDP)

Sampling-based motion planning (the focus here)

Probabilistic Roadmap

Rapidly-exploring Random Trees (RRTs)

Probabilistic Roadmap (PRM)

Python example code (© IRIS Lab): https://colab.research.google.com/drive/1TvEIeUeZnZjJOjU33IVWLnP43DrFXxVY?usp=sharing

The basic PRM algorithm in the robot configuration space (Fig. 46) is as follows (a minimal Python sketch follows the figures below):

Step 1: Sample \(N\) configurations at random from the C-space, as shown in Fig. 47. Check all sampled configurations and remove those in the obstacle space. Add only the samples in free space, together with the start and goal, to the roadmap (also known as milestones), as shown in Fig. 48.

Step 2: Each milestone is linked by straight paths to its nearest neighbors. The collision-free links are retained as local paths to form the PRM, as shown in Fig. 49.

Step 3: Search for a path from the start to the goal using A* or Dijkstra’s search algorithm, as shown in Fig. 50.

Fig. 46 Configuration space: black blocks show the obstacle space and the white regions show the free space. The start is shown in green and the goal in red.

Fig. 47 Sample \(N\) configurations at random from the C-space.

Fig. 48 Only add the samples in free space and the start and goal to the roadmap (also known as milestones).

Fig. 49 Each milestone is linked by straight paths to its nearest neighbors. The collision-free links are retained as local paths to form the PRM.

Fig. 50 Search for a path from the start to the goal using the A* search algorithm.
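The following is a minimal Python sketch of the three steps above for a 2D point robot. The circular-obstacle world, sample count, and neighbor count are illustrative assumptions and are not taken from the linked Colab code.

```python
import heapq
import numpy as np

# Illustrative world: unit-square C-space with circular obstacles (center, radius).
OBSTACLES = [((0.5, 0.5), 0.15), ((0.2, 0.7), 0.10)]

def in_free_space(q, obstacles=OBSTACLES):
    """True if configuration q lies outside every obstacle."""
    return all(np.linalg.norm(q - np.array(c)) > r for c, r in obstacles)

def edge_is_free(q1, q2, n_checks=20):
    """Collision-check a straight segment by sampling points along it."""
    return all(in_free_space(q1 + t * (q2 - q1)) for t in np.linspace(0, 1, n_checks))

def build_prm(start, goal, n_samples=200, k=10, rng=np.random.default_rng(0)):
    # Step 1: sample random configurations and keep only the collision-free ones.
    samples = [q for q in rng.uniform(0, 1, size=(n_samples, 2)) if in_free_space(q)]
    milestones = [np.array(start), np.array(goal)] + samples
    # Step 2: link each milestone to its k nearest neighbors with collision-free edges.
    edges = {i: [] for i in range(len(milestones))}
    for i, q in enumerate(milestones):
        dists = [np.linalg.norm(q - m) for m in milestones]
        for j in np.argsort(dists)[1:k + 1]:          # skip index 0 (the point itself)
            if edge_is_free(q, milestones[j]):
                edges[i].append((int(j), dists[j]))
                edges[int(j)].append((i, dists[j]))
    return milestones, edges

def dijkstra(edges, start_idx=0, goal_idx=1):
    # Step 3: graph search (Dijkstra; A* would add a heuristic term to the priority).
    dist, parent = {start_idx: 0.0}, {start_idx: None}
    pq = [(0.0, start_idx)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal_idx:
            path = []
            while u is not None:
                path.append(u)
                u = parent[u]
            return path[::-1]
        for v, w in edges[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v], parent[v] = d + w, u
                heapq.heappush(pq, (d + w, v))
    return None  # no path found

milestones, edges = build_prm(start=(0.05, 0.05), goal=(0.95, 0.95))
path = dijkstra(edges)
print([tuple(np.round(milestones[i], 2)) for i in path] if path else "no path")
```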

Rapidly-exploring Random Trees (RRTs)

Python example code (© IRIS Lab): https://colab.research.google.com/drive/1fxMlKzIWIWr6Qq49wTmTsIAJz5pp0ing?usp=sharing

The basic RRT algorithm is as follows (from https://arxiv.org/pdf/1105.1186):

Fig. 51 Basic RRT algorithm

A step-by-step visualization of how RRT works in 2D space is shown below, followed by a minimal Python sketch of the algorithm.

CREDIT: all the figures are taken from Rey’s blog ( https://rrwiyatn.github.io/blog/robotik/2020/05/16/rrt.html ).

Fig. 52 The 1st iteration of the RRT algorithm.

Fig. 53 The 2nd through n-th iterations of the RRT algorithm.
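Below is a minimal Python sketch of the basic RRT loop for the same illustrative 2D point-robot world used in the PRM sketch; the step size, iteration budget, and goal tolerance are assumptions for illustration, not values from the original paper.

```python
import numpy as np

OBSTACLES = [((0.5, 0.5), 0.15), ((0.2, 0.7), 0.10)]  # (center, radius), illustrative

def collision_free(q):
    return all(np.linalg.norm(q - np.array(c)) > r for c, r in OBSTACLES)

def rrt(start, goal, step=0.05, max_iters=5000, goal_tol=0.05,
        rng=np.random.default_rng(0)):
    nodes = [np.array(start, dtype=float)]
    parents = [-1]                                    # parents[i]: index of node i's parent
    for _ in range(max_iters):
        q_rand = rng.uniform(0.0, 1.0, size=2)        # sample a random configuration
        near = int(np.argmin([np.linalg.norm(q_rand - q) for q in nodes]))
        direction = q_rand - nodes[near]
        q_new = nodes[near] + step * direction / (np.linalg.norm(direction) + 1e-12)
        if not collision_free(q_new):                 # extend toward the sample by one step
            continue
        nodes.append(q_new)
        parents.append(near)
        if np.linalg.norm(q_new - np.array(goal)) < goal_tol:   # reached the goal region
            path, i = [], len(nodes) - 1
            while i != -1:                            # walk back to the root
                path.append(nodes[i])
                i = parents[i]
            return path[::-1]
    return None

path = rrt(start=(0.05, 0.05), goal=(0.95, 0.95))
print(len(path) if path else "no path found")
```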

Variants of RRT

Bidirectional or multi-directional RRT

RRT*: after a new point is added, nearby vertices that can be reached along a shorter path through the new point than through their original (current) parent are rewired to use the new point as their parent.

Fig. 54 (Left) The tree generated by an RRT after 5,000 nodes. The goal region is the square at the top right corner, and the shortest path is indicated. (Right) The tree generated by RRT* after 5,000 nodes. Figure from the original paper ( https://arxiv.org/abs/1105.1186 ).
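As a minimal sketch of the rewiring step only (not the full RRT* algorithm), assume the tree is stored as in the RRT sketch above, extended with a costs list holding each node's cost-to-come, and reusing an edge_is_free collision check:

```python
import numpy as np

def rewire(new_idx, nodes, parents, costs, edge_is_free, radius=0.1):
    """Re-parent nearby vertices through the newly added node when that is cheaper."""
    q_new = nodes[new_idx]
    for i, q in enumerate(nodes):
        if i == new_idx:
            continue
        d = np.linalg.norm(q - q_new)
        # Shorter to reach q via the new node, and the connecting edge is collision-free?
        if d < radius and costs[new_idx] + d < costs[i] and edge_is_free(q_new, q):
            parents[i] = new_idx
            costs[i] = costs[new_idx] + d
            # Note: a full RRT* would also propagate this cost change to i's descendants.
```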

Smoothing

Randomized motion planners tend to produce paths that are not well suited for execution: they are often very jagged and dynamically infeasible. A smoothing step is therefore usually applied before the planned path is used.

Subdividing and Reconnecting: A local planner can be used to attempt a connection between two distant points on a path. If this new connection is collision-free, it replaces the original path segment. Since the local planner is designed to produce short, smooth paths, the new path is likely to be shorter and smoother than the original. This test-and-replace procedure can be applied iteratively to randomly chosen points on the path. Another possibility is to use a recursive procedure that subdivides the path first into two pieces and attempts to replace each piece with a shorter path; then, if either portion cannot be replaced by a shorter path, it subdivides again; and so on.
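A minimal Python sketch of this test-and-replace (shortcutting) idea, assuming the path is a list of 2D waypoints and reusing a straight-line collision check such as edge_is_free from the PRM sketch:

```python
import numpy as np

def shortcut_smooth(path, edge_is_free, n_attempts=200, rng=np.random.default_rng(0)):
    """Repeatedly try to replace a randomly chosen sub-path with a straight segment."""
    path = [np.asarray(q, dtype=float) for q in path]
    for _ in range(n_attempts):
        if len(path) < 3:
            break
        i, j = sorted(rng.choice(len(path), size=2, replace=False))
        if j - i < 2:
            continue                          # nothing to shortcut between adjacent points
        if edge_is_free(path[i], path[j]):
            path = path[:i + 1] + path[j:]    # drop the intermediate waypoints
    return path
```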

Nonlinear Optimization: With the obtained path as the initial guess, one can define an objective function that rewards smoothness in state and control (e.g., penalizing large control inputs), and optimize this objective to obtain a smoother path.
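A minimal sketch of this optimization view, using the planner's waypoints as the initial guess: keep the endpoints fixed and minimize the sum of squared second differences (a discrete smoothness cost) with SciPy. Collision-avoidance and dynamics terms, which a real implementation would need, are omitted for brevity.

```python
import numpy as np
from scipy.optimize import minimize

def optimize_path(path):
    """Smooth a waypoint path by minimizing squared second differences,
    keeping the start and goal fixed."""
    path = np.asarray(path, dtype=float)               # shape (N, dim)
    if len(path) <= 2:
        return path
    start, goal = path[0], path[-1]

    def cost(flat_interior):
        interior = flat_interior.reshape(-1, path.shape[1])
        full = np.vstack([start, interior, goal])
        second_diff = full[2:] - 2 * full[1:-1] + full[:-2]   # discrete "acceleration"
        return np.sum(second_diff ** 2)

    x0 = path[1:-1].ravel()                             # optimize interior waypoints only
    res = minimize(cost, x0, method="L-BFGS-B")
    return np.vstack([start, res.x.reshape(-1, path.shape[1]), goal])
```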

REVIEW article

Combining task and motion planning: challenges and guidelines.

Masoumeh Mansouri

  • 1 Intelligent Robotics Lab, School of Computer Science, University of Birmingham, Birmingham, United Kingdom
  • 2 Multi-Robot Planning and Control Lab, Center for Applied Autonomous Sensor Systems, Örebro University, Örebro, Sweden
  • 3 Knowledge-Based Systems Group, TU Wien, Vienna, Austria

Combined Task and Motion Planning (TAMP) is an area where no one-size-fits-all solution can exist. Many aspects of the domain, as well as operational requirements, have an effect on how algorithms and representations are designed. Frequently, trade-offs have to be made to build a system that is effective. We propose five research questions that we believe need to be answered to solve real-world problems that involve combined TAMP. We show which decisions and trade-offs should be made with respect to these research questions, and illustrate these with examples from existing application domains. By doing so, this article aims to provide a guideline for designing combined TAMP solutions that are adequate and effective in the target scenario.

1 Introduction

This paper addresses a known problem in planning for robots, namely, that of combining Task And Motion Planning (TAMP). As robots have been increasingly deployed in challenging, unstructured object- and interaction-rich environments, combined TAMP has received extensive attention from the robotics community. Examples include TAMP solutions for autonomous excavators pushing gravel in construction sites, or autonomous mining robots drilling the ground to extract materials. To make a robot operate competently in such environments, researchers have combined methods from different sub-fields of Artificial Intelligence, including task planning to compute appropriate actions, motion planning to generate motions using geometric models, and control to compute feasible trajectories. The numerous efforts dedicated to combining task and motion planning highlight a common scientific challenge, namely, that the level of abstraction varies across planning models: discrete domain representations for task planning and continuous models for motion planning and control. Figure 1 illustrates several challenges that commonly occur in combining TAMP: finding solutions in discrete search spaces that are infeasible in continuous space, abstraction of continuous space, and uncertainty (e.g., due to obstructed view). Currently, solutions to combined TAMP vary in the way in which they explore the (joint) continuous-discrete search space.

FIGURE 1. Example of combining TAMP using an abstract and a motion representation with uncertainty (question marks) and two abstract solutions of which one is infeasible due to a tight spot.

When designing a TAMP method, the application requirements pose scientific questions beyond how we explore the search space. This paper attempts to lay down relevant challenges and research questions that can be used as guidelines for solving real-world TAMP problems. We consider relevance with respect to two criteria: 1) addressing the challenge is technically difficult and requires significant research and innovation; 2) advances in addressing the challenge have a large impact in current industrial applications. These research questions are the following:

• Q1: How can a domain be divided into multiple levels of abstraction, and what are effective methods for finding a globally feasible solution that obeys all constraints in all abstraction levels?

• Q2: How should symbolic and continuous knowledge representations be reasoned upon jointly?

• Q3: What classes of methods exist for learning models, specifying models, and performing the two together? In particular, what are the options for combining existing task and motion planning methods with machine learning?

• Q4: How to enable online decision making in combined task and motion planning? How can we guarantee the consistency of decision-making in such settings?

• Q5: Which methods should be used to deal with uncertain perception in combined task and motion planning? Should uncertainty be considered in one of the decision making processes or both?

We will show how these five aspects reflect the major gaps/open questions in the current state of advancement in planning for robots. We will also show that these questions include the key choices that need to be made when combining task and motion planning in real applications. As our analysis makes evident, alternative solutions are proposed to similar questions. This often depends on aspects of the application context, and on the different assumptions and trade-offs that can be made. This paper serves as a guideline for navigating the landscape of existing solutions and their caveats when designing combined task and motion planning methods. As opposed to the latest survey on this topic ( Garrett et al., 2020 ), we here analyze a wider scope of concerns within TAMP and their combination; in particular, we discuss several orthogonal aspects, including uncertainty handling and online planning. Also, the discussion is centered on five open research questions and how answering them matters in a selection of industrially-relevant application contexts.

2 Discussion Over Five Research Questions

Task and motion planning methods are categorised based on the different classes of algorithms used in planning tasks and motions, as well as the way in which these two types of methods are combined. In their widely referenced textbook on Task Planning, Ghallab et al. (2016) organised their discussion based on available representation and reasoning choices in task planning for the purpose of acting. Similarly, for the motion planning reference book, LaValle (2006) centered his discussion around how a robot world (via a geometric and configuration space) can be represented and the efficient algorithms to explore those spaces (reasoning about motion). Our Q1 concerns the representational choices that need to be made when combining these two types of representations, each belonging to a different level of abstraction. Q2 follows the logical next step of addressing issues related to reasoning about these representations in combination. Q3 discusses using learning algorithms in lieu of or in support of both representation and reasoning. Q4 concerns applications where TAMP techniques should be integrated online with acting, and Q5 addresses the all-important question in robotics of planning with uncertain knowledge. There are, of course, other open issues in combined TAMP — however, we believe that these are either subsumed by these questions (e.g., how to discretize the environment for task planning), or are of lesser general interest because they are relevant to very specific application settings. We also claim that it is not possible to answer any of the five questions above in isolation. The issues around representation, reasoning, how we obtain a joint model, online reasoning, and uncertain or incomplete knowledge of an environment, are all interdependent. For example, consider an application where we have a TAMP problem that should be solved in an online manner. For the TAMP solution to be appropriate for this purpose (addressing Q4), the chosen representation for the task and motion (addressing Q1) should be computationally adequate for enabling fast joint reasoning (addressing Q2). In Section 4, we provide four applications requiring combined TAMP and discuss a possible order in which these questions can be addressed.

3 Analysis of the State-of-the-Art

We categorise the existing TAMP methods around our five research questions. Note that the categorisation is not crisp, i.e., one paper can belong to more than one category.

3.1 Abstraction

In this section, we mainly discuss how different levels of abstraction interact by means of a shared abstraction (partially addressing Q1), and leave the discussion about the choice of knowledge representations at each level to the next section. A shared abstraction must have the capacity to represent knowledge at different levels. Shared abstractions can be realized by the mechanism of an already existing logic [e.g., Satisfiability Modulo Theories (SMT) ( Nieuwenhuis et al., 2006 )], various forms of constraint-based approaches (e.g., meta-constraint reasoning), or a novel formalism designed specifically for enabling interaction between these levels.

SMT is built upon the notion of augmenting the Boolean Satisfiability Problem (SAT) with the ability to reason about several diverse background theories. For instance, Nedunuri et al. (2014) encode high-level robot requirements in a SAT formulation, and the background theories are linear arithmetic and functions which relate to the physical configuration of the robot and objects in the environment. Another example of reasoning with a shared abstraction is meta constraint reasoning. In this problem formulation, task and motion planning problems are modeled as different instances of Constraint Satisfaction Problems (CSPs) at different levels of abstraction. So-called meta-constraints capture the dependencies between task and motion CSPs ( Mansouri, 2016 ). Instead of adapting known knowledge representations like SMT or CSP, Dantam et al. (2018) propose a flexible framework that employs a uniform interface (called scene graph) as a shared abstraction to connect motion and environment models with task states. Similarly, Gaschler et al. (2013) treat volumes as a shared abstraction.

In addition to shared representations, formal methods are used to provide behavioral guarantees at all levels of abstraction. In these methods, formal synthesis provides a framework for specifying tasks in a mathematically precise language, and automatically transforming these specifications into correct-by-construction robot controllers ( Kress-Gazit et al., 2018 ). Linear Temporal Logic (LTL) is a formal language that is commonly used in TAMP formulations (e.g., Plaku, 2012 ). Signal Temporal Logic (STL) also exists to relate logic predicates to continuous-time signals (e.g., Maler and Nickovic, 2004 ).

The TAMP domains in the instances described above were divided based on the capacity of the shared knowledge representations, which ensures that globally feasible solutions can be found for all levels of abstraction. Choosing a shared representation capable of maintaining such global consistency is an effective method to divide an overall TAMP problem into a set of sub-problems embedded in a shared representation. In the following section, we discuss other ways to enable interactions between levels of abstraction.

3.2 Symbolic Versus Continuous Models

In TAMP, symbolic knowledge representations are often relevant in most variants of task planning; by contrast, most models that are relevant for motion planning are expressed in terms of variables with continuous domains. Furthermore, different types of models (and, hence, different forms of automated planning) may be relevant in a given application, e.g., continuous time and events, metric maps and qualitative spatial relations, kinodynamic motion models and symbolic preconditions for acting. In the following, we analyze various approaches for combining symbolic and continuous models.

To enable joint reasoning across symbolic and continuous domains (addressing Q2), Procedural Attachment is a common approach. In procedural attachment, feasibility of actions in terms of kinematics, dynamics, and geometric constraints is assessed through a procedure, e.g., an external motion planner, that is attached to the symbol(s) representing that action at a high level of abstraction. The approaches differ in the reasoning techniques used for high level action and task planning, e.g., Boolean satisfiability ( Havur et al., 2013 ), PDDL planning ( Srivastava et al., 2014 ), Hierarchical Task Network (HTN) planning ( Kaelbling and Lozano-Pérez, 2011 ), or Answer Set Programming (ASP) ( Erdem et al., 2016 ); as well as the attached procedures, e.g., simulation-based verification ( Mosenlechner and Beetz, 2011 ), geometric reasoning ( Lagriffoul et al., 2012 ) or motion planning ( Havur et al., 2013 ). Another common approach to enable joint reasoning is to use sampling-based methods, where both task and motion solutions are combined in one common space for a probabilistic search to navigate in. Such methods can use conditional samplers that are provided as part of a domain specification, hence domain knowledge improves sampling in the (usually large) solution spaces ( Garrett et al., 2018 ).

Sampling-based methods incorporate discrete sampling for task planning into the (usually non-discrete) sampling process used in many approaches to motion planning. Procedural attachment is exactly the opposite strategy: motion planning is attached to certain logical predicates that are processed by the algorithm of the task planner. A notable difference is that motion planning methods are directly used as sub-procedures in procedural attachment, while sampling-based methods do not use a task planner; they recast task planning as sampling. Therefore, procedural attachment and sampling-based methods are two extremes in a potential continuum of integrating task and motion planning along the aspect of ‘what is the leading formalism’ of the integration — the task level or the motion planning level.

Historically, combining task and motion planning via procedural attachment has been very successful in promoting the role of Symbolic AI reasoning in robotics. However, procedural attachment fails to provide a scalable, general technique for integrating very diverse forms of reasoning. There are reasons behind this shortcoming. One is that procedural attachments often do not capture inter-dependencies among sub-problems, i.e., each sub-problem solver is not aware of requirements of the domain that pertain to other sub-problems. These approaches lack a clear and transparent means to specify inter-dependencies between sub-problems; this is due to the fact that such a specification would have to combine notions/concepts that are expressed in different KR formalisms. Finally, note that some flavors of procedural attachment permit limited inter-dependencies between low-level sub-problems, e.g., HEX-programs ( Erdem et al., 2016 ).

Regarding the integration of symbolic and continuous reasoning, several properties of the application area need to be considered. If it is sufficient to find solutions that are close to the optimal, sampling-based methods are a better choice. However, procedural attachment or more powerful symbolic task-level methods are preferable for cases when the task-level search is highly complex so that only a few solutions exist, and sampling would potentially yield no solution. Procedural attachment can also be used when low-level reasoning itself is required to be split into many small sub-problems so that they can be solved independently.

3.3 Specifying Versus Learning

Q1 and Q2 discussed so far concerned the choices TAMP methods make regarding the representation of planning models. Q3, on the other hand, has to do with how to obtain these models, and is particularly in focus today thanks to the rise in popularity of machine learning techniques, in particular deep (reinforcement) learning.

The recent AlphaGo breakthrough has had a great impact in many areas of AI, including TAMP. Kim et al. (2019a) propose an actor-critic algorithm that learns from planning experience to guide a planner. They have also investigated how to predict global constraints on the solution for generic TAMP problems using a scoring function to represent planning problem instances ( Kim et al., 2019b ), and have developed an algorithm that learns a stochastic policy from past search trees using generative adversarial nets, for problems with fixed numbers of objects ( Kim et al., 2018 ).

The use of learning methods for planning does not originate from AlphaGo. In a review paper from 2012 ( Jiménez et al., 2012 ), learning for planning is described on the task level and with respect to discrete planning actions and states; the purpose of learning is to acquire knowledge about 1) action conditions and effects, i.e., the domain; or 2) heuristic knowledge for guiding the search process faster to a goal state. A later review from 2018 ( Arora et al., 2018 ) focuses only on 1) and refers to the integration of task-level and motion planning as part of a list of “guidelines that a robotic system can follow in order to be proclaimed autonomous”. This review also mentions Surprise Based Learning (SBL) ( Ranasinghe and Shen, 2008 ), which learns a domain description from execution monitoring and interleaves prediction of future states and monitoring of actually reached states to improve that domain theory. SBL is the main work that embraces the idea of an incorrect domain theory. We argue that Motion Planning also embraces that idea but in an orthogonal way: TAMP is based on the use of a domain description which is correct on its level of abstraction and coarse enough to permit efficient planning; however, due to its coarseness, planning with this domain description requires an integration with motion planning to ensure that the task plan can be realized in the concrete domain. Contrary to SBL, TAMP does not attempt to repair the task-level domain description, because any repair that would ensure full correctness would at the same time make the domain description useless for efficient planning algorithms.

Balac et al. (2000) employ regression tree learning to predict the influence of terrain on the efficiency of high-level actions to be used in the task planner. In other words, their system performs learning of low-level action cost to be used as a parameter in the high-level domain for task and motion planning. In a similar vein, imitation learning has been used to learn motion primitives corresponding to a manually specified high-level task structure from one-shot demonstrations of a human (kinesthetic teaching) who also gives verbal cues about the task at hand ( Caccavale et al., 2019 ). An attention mechanism automatically segments motion tracking data from the human and assigns recorded motion primitives to sub-tasks. In general, learning is a better choice for designing levels of abstraction within TAMP when the specification is difficult to achieve or imprecise due to the complexity of the domain or lack of knowledge about the environment. We will see concrete examples of learning parts of domains in our illustrated use cases in Section 4.

3.4 Online Planning

In real-world applications, automated planning systems are often required to make decisions online, while previous plans are already under execution. When planning for motions, this is known as Receding Horizon Control, or Model Predictive Control. In task planning, methods ensuring the ability to update plans online are often referred to as continuous planning. Moreover, incomplete knowledge about the domain requires assumptions to be made for planning, and in online planning these assumptions may be revised multiple times during physical execution of the plan. This concern can be summarised as Q4, around which we analyze the current methods.

Many approaches to online task and motion planning can also be relevant to plan-based robot control (see the next subsection). Online planning is about radical changes of plans due to contingencies, whereas control is more about small disturbances that the controller can compensate in its local environment without affecting the overall plan. Also, we assume that a plan obtained by an online planner is only preliminary until it has been executed. This is because the environment is highly non-deterministic, described probabilistically, partially unknown, or a considerable fraction of actions is bound to fail. As a case in point, a human-robot collaborative manipulation system has to adapt its cooperative behavior during execution due to the continuous human intervention ( Cacace et al., 2018 ).

Contingency planning methods do not compute plans in an online fashion; rather, they prepare plans in advance for dealing with foreseeable failures. These planners work by inserting sensing and repair actions into an original plan, sometimes conditionally, where certain action failures are likely to happen. In this way, contingency planners ‘program’ replanning already into the initial plan. An example is the HCP-ASP hybrid conditional planner, where conditional actuation and sensing actions are modeled in ASP ( Yalciner et al., 2017 ). An ASP solver computes feasible branches of a conditional plan using external atoms that account for continuous feasibility checks (e.g., collision checks).

In summary, with respect to online planning we have to first identify which classes of possible action failures or modified environment situations we want the robot to be robust against. It is important to determine in the beginning whether the goal of the plan is subject to change. Also, it is important to know how fast new knowledge needs to be integrated into the plan, in other words, how long we can afford to execute the ‘old’ plan before computing and switching to the ‘new’ plan.

3.5 Planning With Uncertainty

Robots epitomize the need for automated planning methods, which provide them with the means to achieve goals. Yet the physical nature of robot systems, as well as the uncertainty connected to robot behaviors and perception, destroy many of the assumptions made by current methods for planning. Q5 focuses on this aspect of TAMP.

Many pioneers in using automated task planning for deriving robot behaviors use the term plan-based robot control to distinguish planning for robots from planning for other systems. The focus here has been on aligning the belief-state of the robot with the symbolic planning process. The latter can employ one of the many approaches to task planning, from Hierarchical Task Networks for TAMPs in partially observable environments (e.g., Weser et al., 2010 ) to inferring the most appropriate plan from a pre-defined plan library using probabilistic representations (e.g., Beetz et al., 2001 ). Decision-theoretic task planning methods, and specifically Markov Decision Processes (MDPs) and Partially Observable Markov Decision Processes (POMDPs) are the most prevalent approaches for tackling various types of uncertainty in TAMP formulations (e.g., Şucan and Kavraki, 2012 ; Kaelbling and Lozano-Pérez, 2013 ; Hadfield-Menell et al., 2015 ). A recent work includes an anytime algorithm of TAMP generating policies for handling multiple execution-time contingencies using MDP-based modeling of actions which corresponds to an infinite set of motion planning problems ( Shah et al., 2020 ).

A reactive type of formal methods has also been used to provide behavioral guarantees in typically uncertain or adversarial environments [e.g., within a receding horizon paradigm ( Raman et al., 2015 )]. LTL-based abstractions can be used to account for delays and measurement errors as a form of uncertainty modeling in TAMP ( Liu and Ozay, 2016 ). Also, recent theoretical advances in synthesis methods for uncertain MDPs, which provide better noise modeling than classical MDPs, have shown promise for uncertainty handling in TAMP (e.g., Lahijanian et al., 2015 ).

With respect to uncertainty, it is essential to identify whether uncertainties on the task level can be foreseen so that we can create robust plans regarding the uncertainty models. For uncertainties on the motion-planning level, we have to validate whether it is sufficient to use local control methods to tackle uncertainties, or whether there is a need to defer to the task level. In general, the more uncertainty we have to deal with, the more likely it is that some type of Online Planning (see the previous section) will be necessary.

4 Applications

Now that we have outlined the key questions underlying the realization of combined TAMP, we illustrate how some of these aspects have been considered in concrete industrial applications. There are many industries where combined TAMP is required for long-term autonomy. These range from automation of heavy-duty machines operating in unstructured environments such as mines or construction sites, to that of robots in controlled and unobstructed environments such as factories. The challenges these domains bring about differ in nature, but fall in the range of the questions we have outlined above. Specifically, we identify four broad issues posed by such industrial applications.

First, in some domains, tasks carried out by the robot(s) in the environment affect the motions that can be carried out in realizing other tasks. This is true in mining, for instance, where operations like drilling have a permanent effect on navigability. In general, decisions over the order of the tasks not only affect the subsequent motions but also the environment. We illustrate an example of such applications in Section 4.1.

Second, an industrial application has crucial qualitative requirements to guarantee safe operation, e.g., there should always be a machine in a certain station whenever one is in another station. These seemingly simple constraints have ramifications beyond the task level, as they affect all levels of abstraction including the low-level control. For instance, a machine may need to accelerate to fill the place of another machine leaving an active station. We will examine such an instance of TAMP in an application of electric haulers in a quarry in Section 4.2.

Third, peculiarities of industrial applications and their consequences in their TAMP formulations are not limited to those that derive from unstructured, outdoor environments. Even in more structured environments, like factories, a relevant issue is how to design the environment for efficient task and motion planning. In manufacturing, for instance, motion planning can often be greatly simplified at the cost of limiting the flexibility of the robotic solution. We address the relation between specifications for task planning and their implication on the resulting TAMP formulation in Section 4.3.

Fourth, qualitative specifications may be relevant to ensure task achievement under extreme uncertainty. This is the case in mission planning for Autonomous Underwater Vehicles, where task plans and motions are affected by currents and other complex environmental phenomena that are difficult to model. We discuss the use of learning predictive models for use in combined TAMP in Section 4.4.

Although these problems are relevant in many more applications than those cited below, we have made a selection of a few concrete examples in each of these four categories in order to underscore the impact that innovation in task and motion planning can have in the real world. We conclude the section by indicating some good practices derived from these examples.

4.1 Drill Planning for Open-Pit Mines

In this section we analyze the drill planning problem within an application involving a fleet of surface drill rigs operating in a common area of an open-pit mine, called a bench.

4.1.1 Problem and Requirements

A set of drill targets in a bench is given; at each target, a blast hole is to be drilled and filled with explosive material, which will then be detonated to produce rubble that will be processed into ore. The drill planning problem consists of computing a plan that involves machines reaching each drill target in a bench and performing the necessary operations to drill the blast hole. Drilling produces piles of excess material around the hole. These piles constitute obstacles for the machine itself and other machines, hence no machine can drive over them. A solution to the drill planning problem should take into account the emerging obstacles as well as all other common TAMP requirements (e.g., avoiding machine-machine collisions) to be executable by the drill rigs.

4.1.2 Integration Challenges

The drill planning problem can be seen as a combination of several sub-problems: task planning (consisting of deciding the sequencing of the targets to be drilled), motion planning, and coordination. These problems cannot be treated separately, as the solutions of each problem depend on each other. For instance, task planning must lead to a sequence of drill targets that accounts for the piles generated after drilling (which become obstacles that must be taken into account in motion planning). In other words, the order in which the targets are drilled will affect the ability of the machine itself and other machines to traverse on the bench. Hence, it is necessary to subject the possible choices made to solve one problem to the choices made in resolving the other problems, e.g., verifying through motion planning that a chosen sequence of targets to drill will be kinematically feasible and will avoid the piles of material produced by drilling. There are two approaches in the literature addressing the drill planning problem. One approach is based on meta constraint reasoning ( Mansouri et al., 2016 ) in which the task planning, motion planning and coordination problems are modeled as CSPs at different levels of abstraction, and the meta-constraints capture inter-dependencies between the tasks and motions as a shared abstraction. The second approach implements a multi-abstraction search where an abstract solution is refined incrementally with different types of search at different levels of abstraction ( Mansouri et al., 2017 ).

4.1.3 Considerations

This application allows us to make several statements pertaining to questions Q1–Q3.

Q1: One common way to deal with multiple levels of abstraction is to abstract away the continuous (geometric) representation – the motions and the piles in this problem – in order to obtain a fully discrete (graph) representation. The graph representation of the drill planning problem forms a variant of the Traveling Salesperson Problem (TSP) where nodes are the drill targets, and edges represent abstracted motion between the nodes. Then, the problem is to find a shortest closed path (tour) in the graph such that every node is visited only once. In a TSP, regions to be visited are associated with nodes in a graph, and each node should be traversed exactly once. Roughly speaking, each region along a tour acts as an “obstacle” that appears dynamically once the node is visited, and which must be avoided while visiting other nodes. However, the TSP employs the abstract notion of a graph to represent locations and their connectivity, thus ignoring the geometrical extent of the locations. Ignoring the geometric reality of the nodes in the TSP, and the fact that paths between them are affected by this spatial extent, leads to solutions that may not be feasible in practice, as they ignore the further constraints to the motion space that derive from the drilling tasks. This points to a rather general observation: abstracting away certain aspects of the problem representation preserves correctness only if we do not lose information regarding the dependencies between different aspects of the problem (in this application, the geometrical extent of the drill targets). An alternative way is to keep each representation at its own level of abstraction, and to leverage a common language to combine relevant knowledge among different levels of abstraction. To enable the use of a common language, we should first identify sub-problems of the overall problem. Furthermore, we need to identify dedicated solvers, each of which focuses on a subset of aspects of the overall problem, e.g., a motion planner verifies kinematic feasibility and absence of collisions, while a scheduler verifies that coordination choices are temporally and spatially feasible. Validated solutions for each sub-problem can be seen as constraints that account for particular aspects of the overall problem. As remarked below, constraints can play the role of a common language to facilitate joint reasoning.
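As a purely illustrative sketch of this graph abstraction (not the method of Mansouri et al.), the sequencing sub-problem can be posed as a TSP over drill-target coordinates; a greedy nearest-neighbor heuristic produces a candidate tour whose legs would then still have to be validated by a motion planner against the piles created by earlier drilling:

```python
import numpy as np

# Illustrative drill-target coordinates on a bench (metres); purely hypothetical values.
targets = np.array([[0, 0], [5, 1], [2, 6], [8, 4], [6, 8]], dtype=float)

def nearest_neighbor_tour(points, start_idx=0):
    """Greedy TSP heuristic: always drive to the closest unvisited target."""
    unvisited = set(range(len(points))) - {start_idx}
    tour = [start_idx]
    while unvisited:
        last = points[tour[-1]]
        nxt = min(unvisited, key=lambda i: np.linalg.norm(points[i] - last))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

tour = nearest_neighbor_tour(targets)
print(tour)  # candidate drilling sequence; each leg still needs a motion-feasibility check
```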

Q2: Where we discard the continuous (geometric) representation of motion and piles in favor of a fully discrete (graph) representation, we effectively disable joint reasoning for the drill planning problem. Nevertheless, one might solve a TSP over the graph representation as a proxy to solve the drill planning problem, and use a post-processing step to filter out TSP solutions that are infeasible with respect to motion and pile constraints. To enable joint reasoning in the second alternative of dealing with multiple levels of abstraction, we can use the common language, in this case constraints, in a common constraint network ( Dechter, 2003 ) to model the search space of all problems jointly. In this way, each dedicated solver only operates in the relevant level of abstraction, and validating solutions to sub-problems is reduced to posting constraints in the common constraint network and verifying its consistency. This approach is realized in several different applications, including drill planning ( Mansouri et al., 2016 ) and integrated task and motion planning for warehouse management ( Mansouri et al., 2015 ).

Q3: Machine learning can be useful not only for generating planning models, but also for generating heuristics for efficiently exploring search spaces. In the drill planning problem, we can learn patterns from examples provided by human experts for sequencing decisions in regions where machines have limited space to manoeuvre ( Mansouri et al., 2016 ). In general, learning from humans is important for uptake by the industry, as end users want machines to adhere to the best practices of humans while expending little effort in specification/knowledge engineering. We can also use clustering methods for analyzing the topology of a bench, which will allow targets to be clustered into groups for which there are only a few reasonable sequencing possibilities and that are easy to navigate in sequence. This again will alleviate the computational burden of finding sequences in a joint search space, which is strongly affected by constraints on motion ( Mansouri et al., 2016 ; Mansouri et al., 2017 ).

4.2 Multi-Hauler Planning for Quarrying

In this section, we focus on an instance of TAMP for multiple electric haulers operating in a quarry. The important challenge in this application is to provide team-level guarantees over team behaviors in the presence of high uncertainty over the durations of navigation actions. The problem requirements presented in this application can be found in other real-world robotics applications, such as mining, construction, and warehouse automation.

4.2.1 Problem and Requirements

A team of autonomous electric haulers transport material between stations in a quarry. At the unloading station, a hauler can unload gravel obtained from two crushers. The primary crusher (PC) constantly produces gravel, which is continuously output via a conveyor belt. The production of gravel at the PC cannot be stopped under normal circumstances, hence, there should always be a robot under the PC so that gravel does not accumulate on the ground, obstructing access to the PC and halting the entire process. The secondary crusher (SC) does not have this constraint, as the gravel produced there is loaded onto haulers manually. Also, robots are required to leave the PC when full. The aim is to maximize the throughput of the overall system, i.e., the amount of gravel dumped at the unloading point. The important constraint of this application is to guarantee that there is always a hauler under the PC. In this instance of TAMP, we need to solve task planning for the team-wide decisions of which robot visits which station in what order, and motion planning and coordination for the decisions of how, when and where the robots move. Henceforth, we refer to this instance as a multi-hauler planning problem.

4.2.2 Integration Challenges

In order to respect the constraint of “there should always be a robot under the PC”, we have to be able to compute exactly how long it takes for a hauler to go from one station to another station to make sure that we dispatch the robot at an appropriate time. Being too conservative and sending as many robots as available to the PC makes the SC useless, and negatively affects the throughput by wasting robots in a queue to reach the PC. However, the durations of navigation actions of the haulers in the quarry are very uncertain. This uncertainty stems from many sources, e.g., the dynamics of individual robots are typically only partially known; robots may navigate differently in different parts of the environment (e.g., skidding over a sandy patch of terrain, proceeding more slowly in the vicinity of pedestrians); task-dependent factors may affect how robots navigate (e.g., slow movement due to a heavy load); and interactions between robots jointly navigating in a shared space introduce further unmodelled dynamics (e.g., robots yielding to, or avoiding, each other). The multi-hauler planning problem was addressed in the literature by a hierarchical approach based on Generalised Stochastic Petri nets (GSPN) for modeling team behavior, where accurate probabilistic models of path durations are obtained via integration with a lower-level team controller ( Mansouri et al., 2019 ). The GSPN is then interpreted as an MDP for which policies can be generated so that team performance is optimized whilst avoiding the exponential blow-up associated with the construction of a full joint model.

4.2.3 Considerations

This application is most relevant to two of our original questions.

Q3: Today’s commercial solutions for multi-robot path planning remove many sources of uncertainty by engineering the environment. Such assumptions are not applicable in many real-world applications including multi-hauler planning. For this reason, current industrial practice relies on fixed, hand-crafted policies for selecting tasks for robots and dispatching them to their destinations. We should instead replace the current practice with an automated planning system that does not make assumptions on the map, the robot geometries, the paths followed by robots, or their kinematics and dynamics. The system should provide a means to easily specify high-level requirements on team behavior, including safety constraints, and it should scale to realistically-sized teams. In order to robustly maintain the safety specification for the PC, models of navigation task duration can be learned. In the absence of real data due to the difficulties of deploying real experiments, we can learn from simulations of the team navigating in the target environment, in this case a quarry ( Mansouri et al., 2019 ). To explore the range of multi-robot navigation experiences relevant for the target environment, the robot team must operate in a way that is as similar to the desired behavior as possible. To achieve this, the team should be controlled in simulation using a controller which integrates coordination, motion planning and robot control (e.g., Pecora et al., 2018 ), and supports the injection of external navigation choices for robots. Given these choices, the controller generates multi-robot paths that take into account the kino-dynamic constraints of individual robots. These paths are jointly executed and supervised by the controller. When generating data for learning, a randomised policy can be used to provide navigation choices.

Q5: The main source of uncertainty in this problem is the duration of navigation actions. We require a method for multi-hauler planning that accounts for this uncertainty. A popular approach to planning with uncertainty is to use MDPs, where uncertainty is modeled in the outcomes of actions. However, it is not accurate to directly model the learned probabilistic model of duration as an action outcome in an MDP. Instead, we can use a stochastic extension to Petri nets to model team behaviors with probabilistic models of path durations. This then yields an MDP which can be solved to generate policies that optimise team behavior against the team requirements and performance objective.

4.3 Assembly Planning for Industrial Manipulators

In this section, we analyze an application of dual-arm manipulators in assembly tasks.

4.3.1 Problem and Requirements

A modern lightweight dual-arm robot, e.g., the ABB YuMi, is deployed to assemble pieces of wiper motors. The workstation is depicted in Figure 2C, where the rotors are already inserted into workpiece holders (A) on a conveyor system, arriving in groups of five. The stators with the brushes and the electric interfaces are supplied in transport containers (B). Mounting a stator on a rotor requires temporarily placing a cone-shaped tool on the motor shaft. (C) marks the position where the robot picks up a tool. Such flexible production requires fast methods to specify new tasks for these robots, and classical teach-in by means of fixed poses and paths is not appropriate. Flexible assembly planning involves three aspects: task planning of the necessary steps and actions to achieve the overall goal/task; scheduling of these steps and actions; and motion planning for each step and action. Dual-arm manipulation further requires deciding on the allocation of task steps and actions to the individual arms. Moreover, the complexity of scheduling and motion planning is increased heavily, due to the necessity to closely coordinate the manipulators to prevent self-collisions of the robot. All four aspects – task planning, scheduling, allocation and motion planning – are closely interrelated and must be combined to achieve optimal plans with regard to some objective, e.g., makespan. Henceforth, we refer to this instance of combined planning as an assembly planning problem.

FIGURE 2. Drilling machines and the resulting holes in an open-pit mine (a, b), autonomous construction machines in a quarry (c); a dual-arm robot assembling wiper motors (d).

4.3.2 Integration Challenges

Obtaining an optimal solution to the assembly planning problem depends not only on the motion of the manipulators but also on the orders in which a workpiece is assembled, the components are taken from boxes or conveyor belts, processed by other machines, etc. These dependencies are all the more complex if connected systems or machines impose further temporal constraints. In addition, different assignments of sub-tasks to arms, while taking the individual working ranges into account as well as task steps in which the arms have to cooperate, lead to a further combinatorial complexity. The assembly planning problem was addressed in the literature via different methods, including prioritized TAMP ( Kurosu et al., 2017 ), fixed-path planning ( O’Donnell and Lozano-Pérez, 1989 ), and fixed-roadmap planning. For this paper, we analyze the latter approach, which uses a flexible model and solver for simultaneous task allocation and motion scheduling that is based on constraint programming (CP) and constraint optimization ( Behrens et al., 2019a ). The core modeling concept was Ordered Visiting Constraints, which describe routine sequences of actions in production and time-scalable motion series. These are linked by so-called Connection Variables that act as the shared abstraction between the task and the motion models.

4.3.3 Considerations

Four of our original questions are relevant in this application.

Q1: Similarly to the drill planning problem, we can keep each representation at its own level of abstraction, and employ a common language to pass relevant knowledge among those levels. Assembly planning for a large number of items can possibly lead to a massive search space of mutually feasible solutions. However, industrial workplaces often have several characteristic properties that we can leverage to simplify the problem. For example, in task modeling, it is safe to assume that many production routines can be described concisely by sequences of actions (e.g., drilling, picking, welding or joining) to perform with one of the robot arms at given locations, with temporal constraints and dependencies between them. This can be easily specified using Constraint Processing (CP) languages ( Behrens et al., 2019a ). Also, many industrial workplaces provide a controlled and unobstructed environment in which motions can be pre-computed in the form of time-scalable roadmaps. The obtained representation of motion is then discrete and ready to be connected to the high-level CP-based task model by some auxiliary variables so that it can be directly used by a constraint optimisation solver.

Q2: When we flatten out all levels of abstraction into one uniform level, or use an interface representation to manage interactions among abstraction levels, it then becomes straightforward to employ a dedicated solver that can read the uniform or the interface representation. In assembly planning, we can follow this logic, and employ a dedicated constraint optimisation solver for CP languages, e.g., Google Operations Research tools, to obtain an executable optimal assembly plan. The resulting plan is effectively a mutually feasible solution for all sub-problems: task planning, scheduling, allocation and motion planning.

Q3: Instead of directly specifying a sequence of production routines in a planning domain language (e.g., a constraint problem), a multi-modal learning method can be used for robot programming. In particular, a combination of learning from demonstration and requirements specification through natural language has been shown to be effective in preparing robot assembly planning domains for flexible manufacturing ( Behrens et al., 2019b ).

Q5: Industrial workplaces provide by design a controlled and unobstructed environment. Therefore, it can be assumed that all object locations and possible placements are known in advance, which allows for offline pre-calculation of motion roadmaps and a profile of potential collisions of the arms in motion. Furthermore, depending on the industrial setting, we may be able to assume the absence of external interference, e.g., from humans.

4.4 Navigation Planning for Autonomous Underwater Vehicles

In this section, we focus on an instance of TAMP for Autonomous Underwater Vehicles (AUV) operating in spatially and temporally complex environments such as oceans. The problem analyzed in this section will be referred to as the AUV mission planning problem.

4.4.1 Problem and Requirements

AUVs are required to autonomously accomplish missions such as coverage or inspection of a sequence of regions in the ocean or sea. To perform such missions, an AUV must employ a mission planner that can reason about both high-level sequencing of the regions to be visited and low-level motions for navigating through them. While an AUV executes a series of tasks that can span over a period of several hours, the environment could change drastically due to the presence of tide and currents. An AUV mission planner should generate a combined task and motion plan that takes into account not only the nonlinear dynamics of AUVs, natural obstacles in the water, and kinematic constraints, but also drift caused by the time-varying ocean currents. If these requirements, especially those imposed by the dynamically changing environment, are not met, AUVs would attempt to carry out highly costly missions that are no longer feasible.

4.4.2 Integration Challenges

In order to generate a feasible motion plan for an AUV, the intertwined dependencies between the tasks and dynamics of the AUVs and the environment should be considered in the initial planning phases. If the interactions with the environment are overlooked, it may be difficult or impossible to reach the regions of interest that the high-level task planner prescribes. Also, drift is usually modeled via a function whose inputs are position, depth, and time. Therefore, a particular ordering of visitation and position, for instance, could push the AUV further away from its goal because of drift. The AUV mission planning problem has been addressed in the literature by several different approaches. The one we analyze here builds a high-level navigation roadmap by sampling waypoints over the operational area and connecting neighboring waypoints to construct a network of navigation routes. This network avoids known obstacles, areas that are deemed too dangerous for the AUV, or other forbidden regions ( McMahon and Plaku, 2016 ). The navigation roadmap is then combined with a Deterministic Finite Automaton (DFA) representing a regular language to compute sequences of waypoints that are compatible with the mission specification. This combined representation is then used to effectively guide a sampling-based motion planner that takes into account a model of the time-varying ocean currents in each of its edge expansions.

4.4.3 Considerations

For this application, we analyze two of our original questions.

Q1: As explained earlier, one way to deal with multiple levels of abstraction is to abstract away the continuous representation. In AUV mission planning, this particular choice would lead to discretising the relevant portion of the ocean in order to be able to impose high-level specifications for task and motion planning. It is problematic, however, to account for the nonlinear dynamics of the AUV and of the ocean currents in this discretisation. A complementary approach is to use a roadmap type of discretisation (i.e., a graph) which does not represent knowledge regarding the dynamics of the vehicle/environment, but contains knowledge about feasible states that satisfy the high-level task specification. An example of such a representation is a roadmap coupled with a DFA ( McMahon and Plaku, 2016 ). This is used as a guide to sample the continuous domain for obtaining a motion tree that is aware of constraints normally imposed on a continuous motion representation, e.g., kinodynamic constraints or current drift.

Q3: The obvious candidate for leveraging learning methods in this problem is learning a drift model. In the absence of reliable data to build predictive models of ocean currents, a simulator can take advantage of synthetic data derived from what is known of the physics of oceans.
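As a minimal illustration of this idea, the sketch below fits a drift model to synthetic observations generated from an assumed tidal-current law (sinusoidal in time, decaying with depth). The functional form, the parameters, and the least-squares fit are all assumptions made for the example; a realistic model would also depend on horizontal position.

```python
import numpy as np

rng = np.random.default_rng(0)
N, period = 2000, 12 * 3600.0          # number of samples, assumed 12 h tidal period

depth = rng.uniform(0.0, 50.0, N)
t = rng.uniform(0.0, period, N)
# Synthetic "ocean physics": tidal current decaying with depth, plus sensor noise.
u_obs = 0.4 * np.sin(2 * np.pi * t / period) * np.exp(-depth / 30.0) + rng.normal(0, 0.02, N)
v_obs = 0.2 * np.cos(2 * np.pi * t / period) * np.exp(-depth / 30.0) + rng.normal(0, 0.02, N)

# Feature map matching the assumed physics, then a linear least-squares fit.
phi = np.column_stack([
    np.ones(N),
    np.sin(2 * np.pi * t / period) * np.exp(-depth / 30.0),
    np.cos(2 * np.pi * t / period) * np.exp(-depth / 30.0),
])
w_u, *_ = np.linalg.lstsq(phi, u_obs, rcond=None)
w_v, *_ = np.linalg.lstsq(phi, v_obs, rcond=None)

def drift(depth_q, t_q):
    """Predicted (east, north) current at a query depth and time (toy model)."""
    f = np.array([1.0,
                  np.sin(2 * np.pi * t_q / period) * np.exp(-depth_q / 30.0),
                  np.cos(2 * np.pi * t_q / period) * np.exp(-depth_q / 30.0)])
    return float(f @ w_u), float(f @ w_v)
```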

4.5 Good Practices

One of the first questions that should be addressed when designing a TAMP solution for an application is to elicit the level of uncertainty inherent in the domain. Uncertainty manifests itself in many different ways: it may relate to knowledge of goals, requiring them to be posted online, or it may relate to partial observability of the environment, the sudden appearance of obstacles, or uncertainty in the duration of motions. Understanding the nature of the uncertainty at hand corresponds to answering questions Q4 and Q5. Analysing the types of uncertainty will narrow the range of task and motion planning algorithms that cater to that specific domain, and help in determining strategies to tackle the consequent challenges. For example, certain uncertainties can be completely hidden from the TAMP method and instead be dealt with during plan execution using existing control methods. Others, like the uncertain travel times in the multi-hauler planning application, should be considered explicitly in the design of a TAMP method, as disregarding them would violate important safety constraints. Sometimes, we can afford to ignore the presence of uncertainty altogether, as in the case of industrial manipulators operating in a controlled environment.

The next step in designing an appropriate TAMP method is to determine the levels of abstraction and an effective method for their incorporation. This concerns the problem of finding one or more knowledge representation formalisms that are appropriate for expressing the requirements of the domain in question while affording efficient reasoning (addressing questions Q1 and Q2). Effectively dividing knowledge into different levels of abstraction is very challenging. In the drill planning application, for instance, a graph representation of the sequencing problem enables efficient high-level TSP computation while dedicating the geometric information to the motion planner. The TSP solver passes knowledge of emerging obstacles to the motion planner, and the motion planner in response verifies the sequencing choices made by the TSP solver. Although this seems a reasonable distribution of knowledge between the two planners, the frequency of knowledge sharing between the two can be extremely high. As this application shows, the question of how to interface several levels of abstraction is most crucial when there is high interdependency between the levels. By contrast, in the multi-hauler planning application, the interdependency is weak, and we can learn the low-level information and explicitly incorporate the learned model into the high-level task planner. In the latter case, the question of how to interface efficiently is less crucial.
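A minimal sketch of such an interleaving is given below. It is not the algorithm used in the drill planning work; the brute-force TSP, the `motion_feasible` callback standing in for a motion-planner query, and the cost-inflation strategy are all assumptions chosen to keep the example short.

```python
import itertools

def solve_tsp(nodes, cost):
    """Brute-force open tour over a handful of drill targets (illustrative only)."""
    best_tour, best_cost = None, float("inf")
    start, rest = nodes[0], nodes[1:]
    for perm in itertools.permutations(rest):
        tour = (start,) + perm
        c = sum(cost[a][b] for a, b in zip(tour, tour[1:]))
        if c < best_cost:
            best_tour, best_cost = tour, c
    return best_tour

def plan_with_feasibility(nodes, cost, motion_feasible, max_iters=20):
    """Interleave high-level sequencing with low-level feasibility checks.

    motion_feasible(a, b) stands in for a motion-planner query; when a leg is
    reported infeasible (e.g. an emerging obstacle blocks it), its cost is
    inflated and the sequencing problem is solved again.
    """
    for _ in range(max_iters):
        tour = solve_tsp(nodes, cost)
        if tour is None:
            return None
        blocked = [(a, b) for a, b in zip(tour, tour[1:]) if not motion_feasible(a, b)]
        if not blocked:
            return tour                      # every leg verified by the motion planner
        for a, b in blocked:
            cost[a][b] = float("inf")        # forbid the blocked leg and re-sequence
    return None
```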

Another important issue that must be addressed in the early stages of designing an approach for combined TAMP is how to discretize the problem space. The right choice of discretization has a major impact on the final solution. It directly affects the size of the state space at the task level, as well as the number of calls to motion planning required in loosely-coupled approaches such as semantic attachments.

5 Summary and Outlook

The research questions we discussed above do not have simple answers. Depending on the domain at hand and the constraints of the application scenario, different answers may be better suited for achieving an effective combined TAMP method. To aid the researcher or engineer in building a TAMP system, we have outlined an order in which the questions can be approached, with the intention of reducing the amount of backtracking required in the decision-making process.

As witnessed by the number of questions and the complexity of the overall topic, future research has the potential to simplify certain questions and maybe even eliminate certain trade-offs by providing more general solutions than we currently have at our disposal. Nevertheless, we conjecture that a one-size-fits-all method for solving combined TAMP will never exist; therefore, the questions we have discussed in this article, as well as the proposed guidelines, will remain relevant in the future.

Author Contributions

MM coordinated the research and the writing of the manuscript. All authors listed have made a substantial, direct, and intellectual contribution to the work and approved it for publication.

Funding

Federico Pecora is supported by the Swedish Knowledge Foundation (KKS) under the Semantic Robots research profile, and by Vinnova under projects AutoBoomer and AutoHauler. Peter Schüller is supported by the EU Horizon 2020 project AI4EU under grant agreement No. 825619.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

Arora, A., Fiorino, H., Pellier, D., Métivier, M., and Pesty, S. (2018). A Review of Learning Planning Action Models. Knowl. Eng. Rev. 33, e20. doi:10.1017/s0269888918000188

Balac, N., Gaines, D. M., and Fisher, D. (2000). “Learning Action Models for Navigation in Noisy Environments”, in ICML Workshop on Machine Learning of Spatial Knowledge, Stanford .

Beetz, M., Arbuckle, T., Belker, T., Cremers, A. B., Schulz, D., Bennewitz, M., et al. (2001). Integrated, Plan-Based Control of Autonomous Robots in Human Environments. IEEE Intell. Syst. 16, 56–65. doi:10.1109/5254.956082

Behrens, J. K., Lange, R., and Mansouri, M. (2019a). “A Constraint Programming Approach to Simultaneous Task Allocation and Motion Scheduling for Industrial Dual-Arm Manipulation Tasks”, in Proc. of the IEEE Int. Conf. on Robotics and Automation (ICRA), Montreal, Canada, May 20, 2019, 8705–8711.

Behrens, J. K., Stepanova, K., Lange, R., and Skoviera, R. (2019b). Specifying Dual-Arm Robot Planning Problems through Natural Language and Demonstration. IEEE Robot. Autom. Lett. 4, 2622–2629. doi:10.1109/lra.2019.2898714

Cacace, J., Caccavale, R., Finzi, A., and Lippiello, V. (2018). Interactive Plan Execution during Human-Robot Cooperative Manipulation. IFAC-PapersOnLine 51, 500–505. doi:10.1016/j.ifacol.2018.11.584

Caccavale, R., Saveriano, M., Finzi, A., and Lee, D. (2019). Kinesthetic Teaching and Attentional Supervision of Structured Tasks in Human–Robot Interaction. Auton. Robots 43, 1291–1307. doi:10.1007/s10514-018-9706-9

Dantam, N. T., Chaudhuri, S., and Kavraki, L. E. (2018). The Task-Motion Kit: An Open Source, General-Purpose Task and Motion-Planning Framework. IEEE Robot. Autom. Mag. 25, 61–70. doi:10.1109/mra.2018.2815081

Dechter, R. (2003). Constraint Processing (Morgan Kaufmann Series in Artificial Intelligence) .

Erdem, E., Patoglu, V., and Schüller, P. (2016). A Systematic Analysis of Levels of Integration between High-Level Task Planning and Low-Level Feasibility Checks. AI Commun. 29, 319–349. doi:10.3233/AIC-150697

Garrett, C. R., Lozano-Pérez, T., and Kaelbling, L. P. (2018). Sampling-based Methods for Factored Task and Motion Planning. Int. J. Robot. Res. 37, 1796–1825. doi:10.1177/0278364918802962

Garrett, C. R., Chitnis, R., Holladay, R., Kim, B., Silver, T., Kaelbling, L. P., et al. (2020). Integrated Task and Motion Planning. Annu. Rev. Contr. Robot. Auton. Syst. 4. doi:10.1146/annurev-control-091420-084139

Gaschler, A., Petrick, R. P., Giuliani, M., Rickert, M., and Knoll, A. (2013). “Kvp: A Knowledge of Volumes Approach to Robot Task Planning”, in Proc. of the IEEE/RSJ Int. Conf. on Intelligent Robots and Systems , Tokyo, Japan , Nov 3, 2013 (New York, NY: IEEE ), 202–208. doi:10.1109/iros.2013.6696354

Ghallab, M., Nau, D., and Traverso, P. (2016). Automated Planning and Acting . Cambridge, United Kingdom: Cambridge University Press .

Hadfield-Menell, D., Groshev, E., Chitnis, R., and Abbeel, P. (2015). “Modular Task and Motion Planning in Belief Space”, in 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) , Hamburg, Germany , Sep 28, 2015 (New York, NY: IEEE ), 4991–4998. doi:10.1109/IROS.2015.7354079

Havur, G., Haspalamutgil, K., Palaz, C., Erdem, E., and Patoglu, V. (2013). “A Case Study on the Tower of Hanoi Challenge: Representation, Reasoning and Execution”, in IEEE Conference on Robotics and Automation (ICRA) , Karlsruhe, Germany , May 6, 2013 (New York, NY: IEEE ), 4552–4559.

Jiménez, S., De La Rosa, T., Fernández, S., Fernández, F., and Borrajo, D. (2012). A Review of Machine Learning for Automated Planning. Knowl. Eng. Rev. 27, 433–467. doi:10.1017/s026988891200001x

Kaelbling, L. P., and Lozano-Pérez, T. (2013). Integrated Task and Motion Planning in Belief Space. Int. J. Robot. Res. 32, 1194–1227. doi:10.1177/0278364913484072

Kaelbling, L. P., and Lozano-Pérez, T. (2011). “Hierarchical Planning in the now”, in Proc. of the IEEE Int. Conf. on Robotics and Automation (ICRA) , Shanghai, China , May 13, 2011 (New York, NY: IEEE ). doi:10.1109/icra.2011.5980391

Kim, B., Kaelbling, L. P., and Lozano-Pérez, T. (2018). “Guiding Search in Continuous State-Action Spaces by Learning an Action Sampler from off-Target Search Experience”, in Thirty-Second AAAI Conference on Artificial Intelligence . Available at: https://ojs.aaai.org/index.php/AAAI/article/view/12106 (Accessed April 26, 2018).

Kim, B., Kaelbling, L. P., and Lozano-Pérez, T. (2019a). “Adversarial Actor-Critic Method for Task and Motion Planning Problems Using Planning Experience”, in Proc. of the AAAI Conf. on Artificial Intelligence Vol. 33, 8017–8024. doi:10.1609/aaai.v33i01.33018017

Kim, B., Wang, Z., Kaelbling, L. P., and Lozano-Pérez, T. (2019b). Learning to Guide Task and Motion Planning Using Score-Space Representation. Int. J. Robot. Res. 38, 793–812. doi:10.1177/0278364919848837

Kress-Gazit, H., Lahijanian, M., and Raman, V. (2018). Synthesis for Robots: Guarantees and Feedback for Robot Behavior. Annu. Rev. Contr. Robot. Auton. Syst. 1, 211–236. doi:10.1146/annurev-control-060117-104838

Kurosu, J., Yorozu, A., and Takahashi, M. (2017). Simultaneous Dual-Arm Motion Planning for Minimizing Operation Time. Appl. Sci. 7, 1210. doi:10.3390/app7121210

Lagriffoul, F., Dimitrov, D., Saffiotti, A., and Karlsson, L. (2012). “Constraint Propagation on Interval Bounds for Dealing With Geometric Backtracking”, in Proc. of IEEE/RSJ Int’l Conf. on Intelligent Robots and Systems , Vilamoura-Algarve, Portugal , Oct 7, 2012 (New York, NY: IEEE ). doi:10.1109/iros.2012.6385972

Lahijanian, M., Andersson, S. B., and Belta, C. (2015). Formal Verification and Synthesis for Discrete-Time Stochastic Systems. IEEE Trans. Automat. Contr. 60, 2031–2045. doi:10.1109/tac.2015.2398883

LaValle, S. M. (2006). Planning Algorithms . New York, NY: Cambridge University Press . doi:10.1017/cbo9780511546877

Liu, J., and Ozay, N. (2016). Finite Abstractions With Robustness Margins for Temporal Logic-Based Control Synthesis. Nonlinear Anal. Hybrid Syst. 22, 1–15. doi:10.1016/j.nahs.2016.02.002

Maler, O., and Nickovic, D. (2004). “Monitoring Temporal Properties of Continuous Signals”, in Formal Techniques, Modelling and Analysis of Timed and Fault-Tolerant Systems (Berlin, Heidelberg: Springer ), 152–166. doi:10.1007/978-3-540-30206-3_12

Mansouri, M., Andreasson, H., and Pecora, F. (2015). “Towards Hybrid Reasoning for Automated Industrial Fleet Management”, in Hybrid Reasoning Workshop of the Int. Joint Conf. on Artificial Intelligence, Buenos Aires, Argentina.

Mansouri, M., Andreasson, H., and Pecora, F. (2016). Hybrid Reasoning for Multi-Robot Drill Planning in Open-Pit Mines. Acta Polytech. 56, 47–56. doi:10.14311/app.2016.56.0047

Mansouri, M., Lagriffoul, F., and Pecora, F. (2017). “Multi Vehicle Routing With Nonholonomic Constraints and Dense Dynamic Obstacles”, in Proc. of the IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS) , Vancouver, Canada , Sep 28, 2017 (New York, NY: IEEE ), 3522–3529. doi:10.1109/iros.2017.8206195

Mansouri, M., Lacerda, B., Hawes, N., and Pecora, F. (2019). “Multi-Robot Planning Under Uncertain Travel Times and Safety Constraints”, in Proc. of the Int. Joint Conf. on Artificial Intelligence (IJCAI) , 478–484. doi:10.24963/ijcai.2019/68

Mansouri, M. (2016). A Constraint-Based Approach for Hybrid Reasoning in Robotics. PhD thesis. Örebro (Sweden): Örebro University .

McMahon, J., and Plaku, E. (2016). Mission and Motion Planning for Autonomous Underwater Vehicles Operating in Spatially and Temporally Complex Environments. IEEE J. Ocean. Eng. 41, 893–912. doi:10.1109/JOE.2015.2503498

Mosenlechner, L., and Beetz, M. (2011). “Parameterizing Actions to Have the Appropriate Effects”, in Proc. of IEEE/RSJ Int’l Conf. on Intelligent Robots and Systems , San Francisco, CA , Sep 25, 2011 . doi:10.1109/iros.2011.6094883

Nedunuri, S., Prabhu, S., Moll, M., Chaudhuri, S., and Kavraki, L. E. (2014). “SMT-Based Synthesis of Integrated Task and Motion Plans for Mobile Manipulation”, in IEEE Intl. Conf. on Robotics and Automation (ICRA) , Hong Kong, China , May 31, 2014 (New York, NY: IEEE ). doi:10.1109/icra.2014.6906924

Nieuwenhuis, R., Oliveras, A., and Tinelli, C. (2006). Solving SAT and SAT Modulo Theories: From an Abstract Davis–Putnam–Logemann–Loveland Procedure to DPLL(T). J. ACM 53, 937–977. doi:10.1145/1217856.1217859

O’Donnell, P. A., and Lozano-Pérez, T. (1989). “Deadlock-Free and Collision-Free Coordination of Two Robot Manipulators”, in IEEE International Conference on Robotics and Automation , Scottsdale, AZ , May 14, 1989 (Washington, DC: IEEE Computer Society ), 484–489.

Pecora, F., Andreasson, H., Mansouri, M., and Petkov, V. (2018). “A Loosely-Coupled Approach for Multi-Robot Coordination, Motion Planning and Control”, in Proc. of the Int. Conf. on Automated Planning and Scheduling (ICAPS) . 485–493.

Plaku, E. (2012). “Planning in Discrete and Continuous Spaces: From LTL Tasks to Robot Motions,” in Advances in Autonomous Robotics. Editors G. Herrmann, M. Studley, M. Pearson, A. Conn, C. Melhuish, M. Witkowski, et al. (Berlin, Heidelberg: Springer), 331–342. doi:10.1007/978-3-642-32527-4_30

Raman, V., Donzé, A., Sadigh, D., Murray, R. M., and Seshia, S. A. (2015). “Reactive Synthesis from Signal Temporal Logic Specifications”, in Proc. of the Int. Conf. on Hybrid Systems: Computation and Control (ACM) , Seattle, Washington , April, 2015 (New York, NY: Association for Computing Machinery ), 239–248.

Ranasinghe, N., and Shen, W.-M. (2008). “Surprise-Based Learning for Developmental Robotics”, in ECSIS Symposium on Learning and Adaptive Behaviors for Robotic Systems , Edinburgh, United Kingdom , Aug 6, 2008 (New York, NY: IEEE ), 65–70.

Shah, N., Kala Vasudevan, D., Kumar, K., Kamojjhala, P., and Srivastava, S. (2020). “Anytime Integrated Task and Motion Policies for Stochastic Environments”, in IEEE Int. Conf. Robotics Automation (ICRA) , Paris, France , May 31, 2020 (New York, NY: IEEE ), 9285–9291. doi:10.1109/ICRA40945.2020.9197574

Srivastava, S., Fang, E., Riano, L., Chitnis, R., Russell, S., and Abbeel, P. (2014). “Combined Task and Motion Planning through an Extensible Planner-Independent Interface Layer”, in Proc. of the IEEE Int. Conf. on Robotics and Automation (ICRA) , Hong Kong, China , May 31, 2014 (New York, NY: IEEE ). doi:10.1109/icra.2014.6906922

Şucan, I. A., and Kavraki, L. E. (2012). “Accounting for Uncertainty in Simultaneous Task and Motion Planning Using Task Motion Multigraphs”, in IEEE International Conference on Robotics and Automation , Saint Paul, MN , May 14, 2012 (New York, NY: IEEE ), 4822–4828.

Weser, M., Off, D., and Zhang, J. (2010). “HTN Robot Planning in Partially Observable Dynamic Environments”, in Proc. of IEEE Int. Conf. on Robotics and Automation, Anchorage, AK, May 3, 2010 (New York, NY: IEEE), 1505–1510.

Yalciner, I. F., Nouman, A., Patoglu, V., and Erdem, E. (2017). Hybrid Conditional Planning Using Answer Set Programming. Theor. Pract. Logic Program. 17, 1027–1047. doi:10.1017/s1471068417000321

Keywords: task and motion planning, integrative AI, knowledge representation, automated reasoning, industrial applications of robotics

Citation: Mansouri M, Pecora F and Schüller P (2021) Combining Task and Motion Planning: Challenges and Guidelines. Front. Robot. AI 8:637888. doi: 10.3389/frobt.2021.637888

Received: 04 December 2020; Accepted: 22 April 2021; Published: 19 May 2021.

Copyright © 2021 Mansouri, Pecora and Schüller. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Masoumeh Mansouri, [email protected]

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.

Solving Challenging Control Problems via Learning-based Motion Planning and Imitation

Nitish Sontakke and Sehoon Ha. 2023 20th International Conference on Ubiquitous Robots (UR). DOI: 10.1109/UR57808.2023.10202250

2.6 Problem-Solving Basics for One-Dimensional Kinematics

Learning Objectives

By the end of this section, you will be able to:

  • Apply problem-solving steps and strategies to solve problems of one-dimensional kinematics.
  • Apply strategies to determine whether or not the result of a problem is reasonable, and if not, determine the cause.

Problem-solving skills are obviously essential to success in a quantitative course in physics. More importantly, the ability to apply broad physical principles, usually represented by equations, to specific situations is a very powerful form of knowledge. It is much more powerful than memorizing a list of facts. Analytical skills and problem-solving abilities can be applied to new situations, whereas a list of facts cannot be made long enough to contain every possible circumstance. Such analytical skills are useful both for solving problems in this text and for applying physics in everyday and professional life.

Problem-Solving Steps

While there is no simple step-by-step method that works for every problem, the following general procedures facilitate problem solving and make it more meaningful. A certain amount of creativity and insight is required as well.

Examine the situation to determine which physical principles are involved . It often helps to draw a simple sketch at the outset. You will also need to decide which direction is positive and note that on your sketch. Once you have identified the physical principles, it is much easier to find and apply the equations representing those principles. Although finding the correct equation is essential, keep in mind that equations represent physical principles, laws of nature, and relationships among physical quantities. Without a conceptual understanding of a problem, a numerical solution is meaningless.

Make a list of what is given or can be inferred from the problem as stated (identify the knowns) . Many problems are stated very succinctly and require some inspection to determine what is known. A sketch can also be very useful at this point. Formally identifying the knowns is of particular importance in applying physics to real-world situations. Remember, “stopped” means velocity is zero, and we often can take initial time and position as zero.

Identify exactly what needs to be determined in the problem (identify the unknowns) . In complex problems, especially, it is not always obvious what needs to be found or in what sequence. Making a list can help.

Find an equation or set of equations that can help you solve the problem . Your list of knowns and unknowns can help here. It is easiest if you can find equations that contain only one unknown—that is, all of the other variables are known, so you can easily solve for the unknown. If the equation contains more than one unknown, then an additional equation is needed to solve the problem. In some problems, several unknowns must be determined to get at the one needed most. In such problems it is especially important to keep physical principles in mind to avoid going astray in a sea of equations. You may have to use two (or more) different equations to get the final answer.

Substitute the knowns along with their units into the appropriate equation, and obtain numerical solutions complete with units . This step produces the numerical answer; it also provides a check on units that can help you find errors. If the units of the answer are incorrect, then an error has been made. However, be warned that correct units do not guarantee that the numerical part of the answer is also correct.

Check the answer to see if it is reasonable: Does it make sense? This final step is extremely important—the goal of physics is to accurately describe nature. To see if the answer is reasonable, check both its magnitude and its sign, in addition to its units. Your judgment will improve as you solve more and more physics problems, and it will become possible for you to make finer and finer judgments regarding whether nature is adequately described by the answer to a problem. This step brings the problem back to its conceptual meaning. If you can judge whether the answer is reasonable, you have a deeper understanding of physics than just being able to mechanically solve a problem.

When solving problems, we often perform these steps in different order, and we also tend to do several steps simultaneously. There is no rigid procedure that will work every time. Creativity and insight grow with experience, and the basics of problem solving become almost automatic. One way to get practice is to work out the text’s examples for yourself as you read. Another is to work as many end-of-section problems as possible, starting with the easiest to build confidence and progressing to the more difficult. Once you become involved in physics, you will see it all around you, and you can begin to apply it to situations you encounter outside the classroom, just as is done in many of the applications in this text.

Unreasonable Results

Physics must describe nature accurately. Some problems have results that are unreasonable because one premise is unreasonable or because certain premises are inconsistent with one another. The physical principle applied correctly then produces an unreasonable result. For example, if a person starting a foot race accelerates at 0.40 m/s² for 100 s, his final speed will be 40 m/s (about 150 km/h)—clearly unreasonable because the time of 100 s is an unreasonable premise. The physics is correct in a sense, but there is more to describing nature than just manipulating equations correctly. Checking the result of a problem to see if it is reasonable does more than help uncover errors in problem solving—it also builds intuition in judging whether nature is being accurately described.

Use the following strategies to determine whether an answer is reasonable and, if it is not, to determine what is the cause.

Solve the problem using strategies as outlined and in the format followed in the worked examples in the text . In the example given in the preceding paragraph, you would identify the givens as the acceleration and time and use the equation below to find the unknown final velocity. That is,

v = v₀ + at = 0 + (0.40 m/s²)(100 s) = 40 m/s.

Check to see if the answer is reasonable . Is it too large or too small, or does it have the wrong sign, improper units, …? In this case, you may need to convert meters per second into a more familiar unit, such as miles per hour.

Converting to more familiar units, 40 m/s is roughly 90 mi/h (about 144 km/h). This velocity is about four times greater than a person can run—so it is too large.

If the answer is unreasonable, look for what specifically could cause the identified difficulty . In the example of the runner, there are only two assumptions that are suspect. The acceleration could be too great or the time too long. First look at the acceleration and think about what the number means. If someone accelerates at 0.40 m/s², their velocity is increasing by 0.4 m/s each second. Does this seem reasonable? If so, the time must be too long. It is not possible for someone to accelerate at a constant rate of 0.40 m/s² for 100 s (almost two minutes).
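As a rough cross-check of that reasoning (assuming a realistic top running speed of about 10 m/s, consistent with the statement above that 40 m/s is roughly four times what a person can run), the time over which an acceleration of 0.40 m/s² could plausibly be sustained is

t = v/a ≈ (10 m/s) / (0.40 m/s²) = 25 s,

which is far shorter than the 100 s premise. This confirms that the time, not the acceleration, is the unreasonable assumption.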



Motion planning with complex goals

Moshe Vardi

Abstract: This article describes an approach for solving motion planning problems for mobile robots involving temporal goals. Traditional motion planning for mobile robotic systems involves the construction of a motion plan that takes the system from an initial state to a set of goal states while avoiding collisions with obstacles at all times. The motion plan is also required to respect the dynamics of the system, which are typically described by a set of differential equations.

Related Papers

Moshe Vardi

Abstract: In this paper, we consider the problem of motion planning for mobile robots with nonlinear hybrid dynamics and high-level temporal goals. We use a multi-layered synergistic framework that has been proposed recently for solving planning problems involving hybrid systems and high-level temporal goals. In that framework, a high-level planner employs a user-defined discrete abstraction of the hybrid system as well as exploration information to suggest high-level plans.

ROBOTICS RESEARCH-INTERNATIONAL …

Paolo Fiorini

Shahram Payandeh

An overview of mobile robotic agents' motion planning (RMP) and the trajectory planning problem (TPP) in dynamic environments is presented. We have focused on the area of mobile robots in environments in which obstacles may change their location in space-time; the survey therefore does not cover environments comprising only non-moving, stationary obstacles. Comparative graphs and charts demonstrating the directions, major contributions, and distribution of works over the past 23 years are provided in the conclusion section.

George Pappas

Abstract: This paper presents a geometry-based, multi-layered synergistic approach to solve motion planning problems for mobile robots involving temporal goals. The temporal goals are described over subsets of the workspace (called propositions) using temporal logic. A multi-layered synergistic framework has been proposed recently for solving planning problems involving significant discrete structure.

Indranil Saha

Motion planning is the core problem to solve for developing any application involving an autonomous mobile robot. The fundamental motion planning problem involves generating a trajectory for a robot for point-to-point navigation while avoiding obstacles. Heuristic-based search algorithms like A* have been shown to be extremely efficient in solving such planning problems. Recently, there has been an increased interest in specifying complex motion plans using temporal logic. In the state-of-the-art algorithm, the temporal logic motion planning problem is reduced to a graph search problem and Dijkstra's shortest path algorithm is used to compute the optimal trajectory satisfying the specification. The A* algorithm, when used with a proper heuristic for the distance from the destination, can generate an optimal path in a graph efficiently. The primary challenge for using the A* algorithm in temporal logic path planning is that there is no notion of a single destination state for the robot...
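The sketch below illustrates the point about heuristics when the goal is a set of states rather than a single destination, as happens when planning against a temporal-logic specification. It is a generic multi-goal A* on a grid, not the algorithm of the work summarized above; the grid representation and the Manhattan-distance heuristic are assumptions used for illustration.

```python
import heapq

def astar_multi_goal(grid, start, goals):
    """A* on a 4-connected grid whose goal is a *set* of cells.

    grid[r][c] == 1 marks an obstacle; start is an (r, c) tuple; goals is a set
    of (r, c) tuples. The heuristic is the Manhattan distance to the *nearest*
    goal, which keeps it admissible even though no single destination exists.
    Returns the cell path, or None if no goal is reachable.
    """
    rows, cols = len(grid), len(grid[0])

    def h(cell):
        r, c = cell
        return min(abs(r - gr) + abs(c - gc) for gr, gc in goals)

    frontier = [(h(start), 0, start, [start])]
    best_g = {}
    while frontier:
        _, g, cell, path = heapq.heappop(frontier)
        if cell in goals:
            return path
        if best_g.get(cell, float("inf")) <= g:
            continue                      # already expanded with a cheaper cost
        best_g[cell] = g
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                heapq.heappush(frontier,
                               (g + 1 + h((nr, nc)), g + 1, (nr, nc), path + [(nr, nc)]))
    return None
```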

Integrated Computer-Aided Engineering

Alfonso José García Cerezo

Ellips Masehian

This paper presents a new sensor-based online method for generating collision-free near-optimal paths for mobile robots pursuing a moving target amidst dynamic and static obstacles. At each iteration, first the set of all collision-free directions are calculated using velocity vectors of the robot relative to each obstacle and target, forming the Directive Circle (DC), which is a novel concept. Then, a direction close to the shortest path to the target is selected from feasible directions in DC. The DC prevents the robot from being trapped in deadlocks or local minima. It is assumed that the target’s velocity is known, while the speeds of dynamic obstacles, as well as the locations of static obstacles, are to be calculated online. Extensive simulations and experimental results demonstrated the efficiency of the proposed method and its success in coping with complex environments and obstacles.

AI Communications

Erion Plaku




Title: Solving Motion Planning Tasks with a Scalable Generative Model

Abstract: As autonomous driving systems are deployed to millions of vehicles, there is a pressing need to improve the system's scalability and safety and to reduce engineering cost. A realistic, scalable, and practical simulator of the driving world is highly desired. In this paper, we present an efficient solution based on generative models which learns the dynamics of driving scenes. With this model, we can not only simulate the diverse futures of a given driving scenario but also generate a variety of driving scenarios conditioned on various prompts. Our innovative design allows the model to operate in both full-autoregressive and partial-autoregressive modes, significantly improving inference and training speed without sacrificing generative capability. This efficiency makes it ideal for use as an online reactive environment for reinforcement learning, an evaluator for planning policies, and a high-fidelity simulator for testing. We evaluated our model against two real-world datasets: the Waymo motion dataset and the nuPlan dataset. On the simulation realism and scene generation benchmarks, our model achieves state-of-the-art performance, and in the planning benchmarks, our planner outperforms prior art. We conclude that the proposed generative model may serve as a foundation for a variety of motion planning tasks, including data generation, simulation, planning, and online training. Source code is public at this https URL
Comments: ECCV2024
Subjects: Robotics (cs.RO); Computer Vision and Pattern Recognition (cs.CV)


A review of recent trend in motion planning of industrial robots

  • Regular Paper
  • Published: 22 February 2023
  • Volume 7, pages 253–274 (2023)


  • Mehran Ghafarian Tamizi,
  • Marjan Yaghoubi &
  • Homayoun Najjaran (ORCID: orcid.org/0000-0002-3550-225X)


Motion planning is an integral part of every robotic system. It is critical to develop an effective motion plan in order to achieve successful performance. The ability to generate a smooth, optimal, and precise trajectory is crucial for a robotic arm to accomplish a complex task. Classical approaches, such as artificial potential fields, sampling-based methods, and bio-inspired heuristic methods, have been widely used to solve the motion planning problem. However, most of these methods are ineffective in highly dynamic and high-dimensional configuration spaces because their high computational cost and low convergence rates impede real-time implementation. Recently, learning-based methods have gained considerable attention for tackling the motion planning problem due to their generalization capability and their ability to deal with complex problems. This research presents a detailed overview of the most recent developments in solving the motion planning problem for robotic manipulator systems. Specifically, it focuses on how learning-based methods are developed to address the drawbacks of classical approaches. We examine current work on manipulator motion planning and outline the gaps, limitations, and prospects for further research and analysis. Subsequently, this study investigates three main learning-based motion planning methods: deep learning-based motion planners, reinforcement learning, and learning by demonstration. This paper can help experts benefit from a concise overview of the advantages and disadvantages of different motion planning techniques and use them in their research. We anticipate that learning-based path planning methods will remain a subject of research for the foreseeable future, because these solutions are typically dependent on problem-specific knowledge and datasets.
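As a reminder of what the classical baselines discussed in this survey look like, the sketch below implements one gradient step of a basic artificial potential field planner for a planar point robot. It is a toy illustration of the general technique (and of its well-known local-minimum problem), not code from the reviewed work; the gains, the influence radius, and the normalized step are assumptions.

```python
import numpy as np

def apf_step(q, goal, obstacles, k_att=1.0, k_rep=1.0, rho0=1.0, step=0.02):
    """One step of a basic artificial potential field planner for a 2-D point robot.

    q, goal:   2-D configurations
    obstacles: list of (center, radius) circles
    The attractive force pulls toward the goal; a repulsive force acts only
    within distance rho0 of each obstacle boundary. The robot moves a fixed
    small distance along the net force direction.
    """
    q, goal = np.asarray(q, float), np.asarray(goal, float)
    force = -k_att * (q - goal)                                   # attractive term
    for center, radius in obstacles:
        to_q = q - np.asarray(center, float)
        d = np.linalg.norm(to_q) - radius                         # distance to the boundary
        if 0.0 < d < rho0:
            force += k_rep * (1.0 / d - 1.0 / rho0) / d**2 * (to_q / np.linalg.norm(to_q))
        elif d <= 0.0:
            return q                                              # inside an obstacle: stop
    direction = force / (np.linalg.norm(force) + 1e-9)
    return q + step * direction

# Usage sketch: descend until close to the goal (may stall in a local minimum).
q = np.array([0.0, 0.0])
goal = np.array([5.0, 5.0])
for _ in range(2000):
    q = apf_step(q, goal, obstacles=[((2.5, 2.2), 0.5)])
    if np.linalg.norm(q - goal) < 0.05:
        break
```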



Code or data availability

The authors declare that data supporting the findings of this study are available within the article.


Pires, E., Tenreiro Machado, J.: Trajectory optimization for redundant robots using genetic algorithms with heuristic operators. In: Genetic and Evolutionary Computation Conference, pp. 1–9 (2000)

Prianto, E., Kim, M., Park, J.-H., Bae, J.-H., Kim, J.-S.: Path planning for multi-arm manipulators using deep reinforcement learning: soft actor-critic with hindsight experience replay. Sensors 20 (20), 5911 (2020)

Prianto, E., Park, J.-H., Bae, J.-H., Kim, J.-S.: Deep reinforcement learning-based path planning for multi-arm manipulators with periodically moving obstacles. Appl. Sci. 11 (6), 2587 (2021)

Qiao, T., Yang, D., Hao, W., Yan, J., Wang, R.: Trajectory planning of manipulator based on improved genetic algorithm. In: Journal of Physics: Conference Series, vol. 1576. IOP Publishing, p. 012035 (2020)

Qureshi, A.H., Ayaz, Y.: Potential functions based sampling heuristic for optimal path planning. Auton. Robot. 40 (6), 1079–1093 (2016)

Qureshi, A.H., Yip, M.C.: Deeply informed neural sampling for robot motion planning. In: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, pp. 6582–6588 (2018)

Qureshi, A.H., Mumtaz, S., Iqbal, K.F., Ali, B., Ayaz, Y., Ahmed, F., Muhammad, M.S., Hasan, O., Kim, W.Y., Ra, M.: Adaptive potential guided directional-rrt. In: 2013 IEEE International Conference on Robotics and Biomimetics (ROBIO). IEEE, pp. 1887–1892 (2013)

Qureshi, A.H., Simeonov, A., Bency, M.J., Yip, M.C.: Motion planning networks. In: 2019 International Conference on Robotics and Automation (ICRA). IEEE, pp. 2118–2124 (2019)

Qureshi, A.H., Dong, J., Baig, A., Yip, M.C.: Constrained motion planning networks x. IEEE Trans. Robot. (2021)

Qureshi, A.H., Miao, Y., Simeonov, A., Yip, M.C.: Motion planning networks: bridging the gap between learning-based and classical motion planners. IEEE Trans. Rob. 37 (1), 48–66 (2020)

Qureshi, A.H., Dong, J., Choe, A., Yip, M.C.: Neural manipulation planning on constraint manifolds. IEEE Robot. Automat. Lett. 5 (4), 6089–6096 (2020)

Rodríguez, C., Montaño, A., Suárez, R.: Planning manipulation movements of a dual-arm system considering obstacle removing. Robot. Auton. Syst. 62 (12), 1816–1826 (2014)

Rosell, J., Iniguez, P.: Path planning using harmonic functions and probabilistic cell decomposition. In: Proceedings of the 2005 IEEE International Conference on Robotics and Automation. IEEE, pp. 1803–1808 (2005)

Roy, R., Mahadevappa, M., Kumar, C.: Trajectory path planning of eeg controlled robotic arm using ga. Procedia Comput. Sci. 84 , 147–151 (2016)

Rybus, T., Seweryn, K.: Application of rapidly-exploring random trees (rrt) algorithm for trajectory planning of free-floating space manipulator. In: 2015 10th International Workshop on Robot Motion and Control (RoMoCo). IEEE, pp. 91–96 (2015)

Rybus, T.: Point-to-point motion planning of a free-floating space manipulator using the rapidly-exploring random trees (rrt) method. Robotica 38 (6), 957–982 (2020)

Sadiq, A.T., Raheem, F.A., Abbas, N.A.F.: Ant colony algorithm improvement for robot arm path planning optimization based on d* strategy. Int. J. Mech. Mechatron. Eng. (2021)

Sangiovanni, B., Incremona, G.P., Piastra, M., Ferrara, A.: Self-configuring robot path planning with obstacle avoidance via deep reinforcement learning. IEEE Control Syst. Lett. 5 (2), 397–402 (2020)

Santos, R.R., Rade, D.A., da Fonseca, I.M.: A machine learning strategy for optimal path planning of space robotic manipulator in on-orbit servicing. Acta Astronaut. 191 , 41–54 (2022)

Semwal, V.B., Gupta, Y.: Performance analysis of data-driven techniques for solving inverse kinematics problems. In: Proceedings of SAI Intelligent Systems Conference. Springer, pp. 85–99 (2021)

Semwal, V.B., Reddy, M., Narad, A.: Comparative study of inverse kinematics using data driven and fabrik approach. In: Advances in Robotics-5th International Conference of The Robotics Society, pp. 1–6 (2021)

Shojaeinasab, A., Jalayer, M., Najjaran, H.: Insightigen: a versatile tool to generate insight for an academic systematic literature review (2022). arXiv preprint arXiv:2208.01752

Shojaeinasab, A., Charter, T., Jalayer, M., Khadivi, M., Ogunfowora, O., Raiyani, N., Yaghoubi, M., Najjaran, H.: Intelligent manufacturing execution systems: a systematic review. J. Manuf. Syst. 62 , 503–522 (2022)

Shyam, R.A., Hao, Z., Montanaro, U., Dixit, S., Rathinam, A., Gao, Y., Neumann, G., Fallah, S.: Autonomous robots for space: Trajectory learning and adaptation using imitation. Front. Robot. AI 8 (2021)

Singer, S., Nelder, J.: Nelder-mead algorithm. Scholarpedia 4 (7), 2928 (2009)

Song, Q., Li, S., Bai, Q., Yang, J., Zhang, A., Zhang, X., Zhe, L.: Trajectory planning of robot manipulator based on rbf neural network. Entropy 23 (9), 1207 (2021)

Stentz, A.: Optimal and efficient path planning for partially known environments. In: Intelligent Unmanned Ground Vehicles. Springer, pp. 203–220 (1997)

Števo, S., Sekaj, I., Dekan, M.: Optimization of robotic arm trajectory using genetic algorithm. IFAC Proc. Vol. 47 (3), 1748–1753 (2014)

Sutton, R.S., Barto, A.G.: Reinforcement Learning: An Introduction. MIT press (2018)

Tai, L., Paolo, G., Liu, M.: Virtual-to-real deep reinforcement learning: Continuous control of mobile robots for mapless navigation. In: 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, pp. 31–36 (2017)

Tan, C.S., Mohd-Mokhtar, R., Arshad, M.R.: A comprehensive review of coverage path planning in robotics using classical and heuristic algorithms. IEEE Access (2021)

Tarokh, M., Zhang, X.: Real-time motion tracking of robot manipulators using adaptive genetic algorithms. J. Intell. Robot. Syst. 74 (3), 697–708 (2014)

Tian, L., Collins, C.: An effective robot trajectory planning method using a genetic algorithm. Mechatronics 14 (5), 455–470 (2004)

Volpe, R., Khosla, P.: Manipulator control with superquadric artificial potential functions: Theory and experiments. IEEE Trans. Syst. Man Cybern. 20 (6), 1423–1436 (1990)

Wang, X., Luo, X., Han, B., Chen, Y., Liang, G., Zheng, K.: Collision-free path planning method for robots based on an improved rapidly-exploring random tree algorithm. Appl. Sci. 10 (4), 1381 (2020)

Wang, S., Cao, Y., Zheng, X., Zhang, T.: An end-to-end trajectory planning strategy for free-floating space robots. In: 2021 40th Chinese Control Conference (CCC). IEEE, pp. 4236–4241 (2021)

Xie, J., Shao, Z., Li, Y., Guan, Y., Tan, J.: Deep reinforcement learning with optimized reward functions for robotic trajectory planning. IEEE Access 7 , 105669–105679 (2019)

Xu, X., Hu, Y., Zhai, J., Li, L., Guo, P.: A novel non-collision trajectory planning algorithm based on velocity potential field for robotic manipulator. Int. J. Adv. Rob. Syst. 15 (4), 1729881418787075 (2018)

Xu, T., Zhou, H., Tan, S., Li, Z., Ju, X., Peng, Y.: Mechanical arm obstacle avoidance path planning based on improved artificial potential field method. Ind. Robot., 2021 (2021)

Yang, J., Peng, G.: Ddpg with meta-learning-based experience replay separation for robot trajectory planning. In: 2021 7th International Conference on Control, Automation and Robotics (ICCAR). IEEE, pp. 46–51 (2021)

Ying, K.-C., Pourhejazy, P., Cheng, C.-Y., Cai, Z.-Y.: Deep learning-based optimization for motion planning of dual-arm assembly robots. Comput. Industr. Eng. 160 , 107603 (2021)

Yu, L., Wang, K., Zhang, Q., Zhang, J.: Trajectory planning of a redundant planar manipulator based on joint classification and particle swarm optimization algorithm. Multibody Sys. Dyn. 50 (1), 25–43 (2020)

Yuan, C., Zhang, W., Liu, G., Pan, X., Liu, X.: A heuristic rapidly-exploring random trees method for manipulator motion planning. IEEE Access 8 , 900–910 (2019)

Zhang, N., Zhang, Y., Ma, C., Wang, B.: Path planning of six-dof serial robots based on improved artificial potential field method. In: 2017 IEEE International Conference on Robotics and Biomimetics (ROBIO). IEEE, pp. 617–621 (2017)

Zhang, J.: Kinodynamic motion planning for robotics: a review. In: 2021 5th International Conference on Robotics and Automation Sciences (ICRAS). IEEE, pp. 75–83 (2021)

Zhang, T., Zhang, M., Zou, Y.: Time-optimal and smooth trajectory planning for robot manipulators. Int. J. Control Autom. Syst. 19 (1), 521–531 (2021)

Zhao, M., Lv, X.: Improved manipulator obstacle avoidance path planning based on potential field method. J. Robot. 2020 (2020)

Zhou, D., Jia, R., Yao, H., Xie, M.: Robotic arm motion planning based on residual reinforcement learning. In: 2021 13th International Conference on Computer and Automation Engineering (ICCAE). IEEE, pp. 89–94 (2021)

Zimmermann, S., Hakimifard, G., Zamora, M., Poranne, R., Coros, S.: A multi-level optimization framework for simultaneous grasping and motion planning. IEEE Robot. Automat. Lett. 5 (2), 2966–2972 (2020)

Download references

Acknowledgements

We would like to acknowledge the financial support of Apera AI and Mathematics of Information Technology and Complex Systems (MITACS) under IT16412 Mitacs Accelerate. We would like to thank our colleague Ardeshir Shojaeinasab for sharing his literature analysis codes.

Funding: This work was supported by Apera AI and Mathematics of Information Technology and Complex Systems (MITACS) under IT16412 Mitacs Accelerate.

Author information

Authors and Affiliations

Department of Electrical and Computer Engineering, University of Victoria, Victoria, BC, V8P 5C2, Canada

Mehran Ghafarian Tamizi

Department of Mechanical Engineering, University of Victoria, Victoria, BC, V8P 5C2, Canada

Marjan Yaghoubi & Homayoun Najjaran


Contributions

MGT, conceptualization, formal analysis, investigation, visualization, writing-original draft, writing-review, and editing. MY, writing-original draft, writing-review, and editing. HN, writing-review and editing, supervision, and funding acquisition.

Corresponding author

Correspondence to Homayoun Najjaran.

Ethics declarations

Conflict of interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Ethics approval

Not applicable.

Consent to participate

All the authors of this article agreed to participate.

Consent for publication

All authors of this article agree to publish.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article

Tamizi, M.G., Yaghoubi, M. & Najjaran, H. A review of recent trend in motion planning of industrial robots. Int J Intell Robot Appl 7, 253–274 (2023). https://doi.org/10.1007/s41315-023-00274-2


Received: 31 May 2022

Accepted: 03 February 2023

Published: 22 February 2023

Issue Date: June 2023

DOI: https://doi.org/10.1007/s41315-023-00274-2


Keywords

  • Motion planning
  • Artificial potential field
  • Sampling-based algorithms
  • Bio-inspired heuristic methods
  • Deep learning
  • Reinforcement learning
  • Learning by demonstration

Aon smartPredict Challenges Practice Guide [2024]


Overcoming the smartPredict by AON (formerly cut-e) presents a significant challenge. The tests, including switchChallenge, gridChallenge, digitChallenge, and motionChallenge, are gamified and require specific tools for effective preparation.

Big companies use these gamified challenges to screen candidates for influential positions, including Procter and Gamble, Morgan Stanley, Deloitte, KPMG, Johnson & Johnson, Capgemini, and more. If you are determined to start a new career, it is worth the effort to pass this stage of the recruitment process.

Over the last year, we've dedicated ourselves to developing practice exams that closely resemble the actual smartPredict challenges. Our aim is to assist you as effectively as possible, enabling you to successfully pass the assessment and ultimately secure employment.

  • MotionChallenge Practice  New! 
  • Gamified Switch Challenge Practice 
  • Gamified Digit Challenge Practice
  • Gamified Grid Challenge Practice + Study Guide
  • 3 Diagrammatic Reasoning Tests
  • 8 Basic Math Practice Tests + 2 Study Guides



What is Aon smartPredict Assessment?

Aon's smartPredict assessment comprises four gamified challenges: the Switch, Grid, Digit, and Motion challenges. These "games" are short, each lasting between 5 and 9 minutes.

SwitchChallenge

The switchChallenge, or the cut-e Scales sx, evaluates your deductive-logical thinking skills using abstract figures. These figures undergo a sequence change via a funnel or operator. Typically, you have 6 minutes to answer as many questions as possible, although some test versions allow only 3 minutes.

Each question presents you with two (and sometimes three in more challenging levels) rows of symbols. Your task is to identify the correct operator that accounts for the alteration in the sequence of the symbols.

To grasp this challenge more effectively, it's beneficial to review a sample question and its explanation, as provided in our switch challenge practice test.

Switch Challenge Practice Question

Which operator is needed?

[Image: P&G assessment sample question]

The correct answer is (C).

This puzzle features two rows of symbols with an absent operator that rearranges the symbols from the top row into a new sequence in the bottom row. The operator's digits correspond to the original positions of the symbols in the top row before rearrangement. To solve the puzzle, begin by identifying the first symbol in the bottom row. Locate this symbol in the top row; its position in the top row represents the first digit of the missing operator.

In short, the operator is a four-digit code in which each digit gives the original top-row position of the symbol directly below it. Your task is to identify which of the three candidate operators reordered the symbols according to this rule.

Understanding the Switch Challenge solely from text can be challenging. However, engaging in hands-on practice significantly simplifies the task. It's important to note that the Switch Challenge increases in difficulty with each level, presenting more complex scenarios. The example given is from the first level of questions. Our practice materials are designed to help you master all levels of the Switch Challenge.

GridChallenge

The Grid Challenge is designed to evaluate executive attention skills. This test includes a variety of tasks that blend memory and spatial orientation challenges.

Initially, you're shown a grid with dots, one of which is highlighted. Your job is to memorize the location of this dot. Next, you'll face a quick question about basic symmetry – you'll need to determine if a given image is symmetrical. Following this, the grid reappears with a different highlighted dot. The test progresses in this manner, alternating between showing 3 to 5 dots and interspersing questions about symmetry and rotation. The objective is to accurately answer the questions while keeping track of the dots' locations and the sequence in which they appear.
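To make the symmetry sub-task concrete, here is a minimal Python sketch of the kind of yes/no judgement the test interleaves with the memory task. The grid encoding (strings of 'X' and '.') and the function name are assumptions made purely for illustration; they are not taken from the actual assessment.

```python
def is_vertically_symmetric(pattern):
    """Return True if the pattern reads the same when mirrored left-to-right.

    pattern: a list of equal-length strings, where 'X' marks a filled cell
    and '.' an empty cell (a hypothetical encoding used only for this sketch).
    """
    return all(row == row[::-1] for row in pattern)


# One symmetric and one asymmetric pattern.
print(is_vertically_symmetric(["..X..",
                               ".X.X.",
                               "X...X"]))   # True
print(is_vertically_symmetric(["X....",
                               ".X...",
                               "..X.."]))   # False
```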

The upcoming video will provide a demonstration of our interactive grid challenge practice test:


DigitChallenge

The Digit Challenge is an assessment of fundamental mathematical abilities, focusing primarily on the three basic operations: addition, subtraction, and multiplication. Rather than presenting direct questions, this challenge offers a unique approach: participants are given the solution first and then tasked with filling in the missing numbers to complete the equation.
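As a rough illustration of what the task amounts to, the sketch below brute-forces the digits hidden behind '?' placeholders until a hypothetical equation matches the given result. The template format and the example equation are assumptions for illustration only, not actual test items.

```python
from itertools import product

def fill_missing_digits(template, target):
    """Try every digit for the '?' placeholders until the expression
    evaluates to the target value.

    template: an expression containing only digits, '?', '+', '-' and '*',
    e.g. '?3 + 2?' (a made-up puzzle, not an official test item).
    """
    holes = template.count('?')
    for digits in product('0123456789', repeat=holes):
        candidate = template
        for d in digits:
            candidate = candidate.replace('?', d, 1)
        try:
            if eval(candidate) == target:   # expression contains only digits and operators
                return candidate
        except SyntaxError:                 # e.g. a number written with a leading zero
            continue
    return None


print(fill_missing_digits('?3 + 2?', 100))  # '73 + 27'
```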

Experience it firsthand to understand the process better:

MotionChallenge

 New Practice 

The Motion Challenge evaluates your capability for planning. This interactive test bears a strong resemblance to the classic "Rush Hour" sliding blocks game. A dedicated Motion Challenge practice is now included in the PrepPack described below.

Prepare for the Switch, Grid, Digit, and Motion Challenges

The Aon smartPredict PrepPack is designed to thoroughly prepare you for the online assessment, providing a comprehensive preview of the various challenges. It includes practice tests and study guides across all subjects, including a new motion challenge practice. 

Taking your time to prepare is crucial, as these challenges are unique and unlike anything you've faced before.

Included in the PrepPack are:

  • New Motion Challenge game practice.
  • An exclusive, interactive Grid Challenge practice.
  • An innovative interactive Digit Challenge practice offering endless scenario variations.
  • A Switch Challenge practice test featuring questions of varying difficulty.
  • PDF study guides for both the Switch Challenge and Grid Challenge
  • 11 additional practice tests to enhance your mathematical and reasoning abilities

Acquainting yourself with the test format beforehand significantly boosts your chances of success, and practicing with similar gamified questions provides a substantial edge over your competitors.

We hope you found this page helpful. If you have any questions whatsoever, contact Arbel at [email protected]

Since 1992, JobTestPrep has stood for true-to-original online test and assessment centre preparation. Our decades of experience make us a leading international provider of test training. Over one million customers have already used our products to prepare professionally for their recruitment tests.


Ace the Aon (cut-e) smartPredict Assessment - Switch, Grid, Digit and Motion Challenges [2024]


Ace the Gamified AON Assessment with Accurate Test-Like Practice experience

The AON gamified test consists of some of the most challenging pre-employment tests available, including four major sections (Click a link to open a specific section of the test):

  • AON switchChallenge (Deductive Logical Thinking Test)
  • AON digitChallenge (Basic Numerical Comprehension Test)
  • AON gridChallenge (Working Memory and Spatial Reasoning Test)
  • AON motionChallenge (Complex Planning Capability Test) - Now available in our pack

These gamified assessments are impossible to master without having hands-on practice experience.

The Aon Gamified Preparation Pack offers you the closest experience possible to the real AON gamified test, so you can ace it with flying colors.

The JobTestPrep AON smartPredict PrepPack includes:

  • Gamified time-constrained practice experience that mimics the actual test
  • An algorithm that lets you practice an unlimited number of questions
  • Test questions maximally similar to those you'll encounter on the exam

Start your practice with the full Aon Assessment Preparation, or get a glimpse of our Free Aon Assessment Practice Test taken from the full pack.

Score high with the Aon Gamified Test PrepPack, which includes:

  • MotionChallenge Game Practice  New! 
  • Gamified Switch Challenge Practice 
  • Gamified Digit Challenge Practice
  • Gamified Grid Challenge Practice + Guide
  • 3 Diagrammatic Reasoning Tests
  • 8 Math Practice Tests + 2 Study Guides



Shir, Aon / cut-e Assessments Expert at JobTestPrep. Have a question? Contact me at:

What is an AON gamified Assessment?

Aon's gamified smartPredict assessment consists of four gamified tests: the Switch, Grid, Digit, and Motion challenges. These "games" are short assessment tests, each lasting between five and nine minutes.

Here are the main differences between the exams:

  • switchChallenge: 6 min, adaptive, measures Deductive Logical Reasoning
  • digitChallenge: 5 min, adaptive, measures Numerical Reasoning
  • gridChallenge: 9 min, not adaptive, measures Memory and Spatial Reasoning
  • motionChallenge: 6 min, adaptive, measures Complex Planning

Aon is one of the biggest assessment companies. However, there are many other assessment companies, including SHL, Korn Ferry, Watson Glaser, cut-e, Thomas, Cubiks, Pymetrics, Saville, Matrigma, McQuaig, and many more.

Worried About Not Getting Enough Practice In?

The PrepPack contains algorithm-based tests that allow for an  almost infinite number of generated questions!   In addition, the practice tests are adaptive: harder questions after correct answers, easier questions after wrong answers - just like the real tests.

Let's review the different gamified Aon tests more thoroughly:

switchChallenge

The switchChallenge (AKA cut-e Scales sx) measures your deductive-logical thinking through abstract figures that change their order through a funnel or operator. You have six minutes (sometimes three) to solve as many questions as you can.

In each level, you are presented with two rows of symbols (sometimes three rows in more difficult levels).

Your goal is to choose the correct operator that created the change in the order of the symbols.

switchChallenge sample question:

Which operator is needed?


The correct answer is (C). In this question, there are two rows of symbols and a missing operator.

The operator changes the order of the symbols that appear in the upper row to a new order in the lower row.

Each digit of the operator gives the position that the corresponding lower-row symbol occupied in the upper row before the operator rearranged the order.

To solve this question, look at the first symbol of the lower row and find its position in the upper row.

This position is the first digit of the operator.


In this example, a triangle is the first symbol on the lower row. Its position in the upper row is second, so the first digit is 2.

You can now mark answer 2341 because it's the only answer that starts with the digit 2.

​In case you can’t rule out the other alternatives, you will continue by checking the second symbol in the lower row - a heart.

Then check its position in the upper row. Its position is third, so the second digit of the operator is 3.

The third symbol is a star, its position in the upper row is fourth, so the next digit is 4.

The fourth symbol is a circle, its position in the upper row is first, so the next digit is 1.

The correct operator is 2341.
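For readers who prefer code, the same reasoning can be written as a few lines of Python. This is only an illustrative sketch; the symbol names and the candidate operator list below are made up to mirror the worked example above, not taken from the actual test.

```python
def apply_operator(top_row, operator):
    """Build the lower row: digit k at position i of the operator means the
    i-th lower-row symbol is the k-th (1-indexed) symbol of the upper row."""
    return [top_row[int(digit) - 1] for digit in operator]


def find_operator(top_row, bottom_row, candidates):
    """Return the candidate operator that maps the upper row onto the lower row."""
    for operator in candidates:
        if apply_operator(top_row, operator) == bottom_row:
            return operator
    return None


# The worked example above: circle, triangle, heart, star becomes
# triangle, heart, star, circle, so the operator must be 2341.
top = ["circle", "triangle", "heart", "star"]
bottom = ["triangle", "heart", "star", "circle"]
print(find_operator(top, bottom, ["1234", "2341", "4123"]))  # 2341
```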

Concerned You Won’t Be Able to Meet Test Time Constraints?

The JobTestPrep Gamified Aon Test PrepPack includes a gamified time-constrained simulation environment designed to teach you how to answer test questions quickly.

Instantly access unlimited test-like switchChallenge practice questions.

gridChallenge

The gridChallenge aims to measure executive attention. The challenges throughout the test combine several tasks - memory questions and spatial orientation questions.

  • You are first shown a grid of dots, with one dot highlighted, and your task is to remember the dot's location.
  • Then you are shown a basic symmetry question – “Is It Symmetrical?” which you need to answer fast.
  • After that, you are again presented with the original grid, now highlighting another dot.
  • This challenge goes on, showing you between three to five dots, with symmetry and rotation questions between them.
  • Your goal is to answer all the questions correctly while remembering the dots’ location and order of appearance.
In the following video, you will get a demonstration from our gamified grid challenge practice test:


digitChallenge

The digitChallenge (Basic Numerical Comprehension test), or Digit Challenge, measures basic math skills: addition, subtraction, and multiplication.

However, instead of being asked straight-up questions, you are shown an answer, and you need to complete the missing figures.

Here is a short video that explains exactly how it works:

Having Trouble With Math?

The JobTestPrep digitChallenge Prep Pack offers a wide range of digitChallenge drills and exercises that will help take your calculation skills to the next level.

MotionChallenge

The motionChallenge (Complex Planning Capability Test) measures your ability to plan ahead, solve problems, and overcome barriers. Each task presents a ball on a grid with at least one exit; the aim is to move the ball to the exit in as few moves as possible. You have 6 minutes to complete as many puzzles as you can.

The gamified test is very similar to the old sliding blocks game called "Rush Hour" - there are objects blocking the way to the exit, and some of those objects can be moved horizontally or vertically so long as their paths remain clear.

As you progress, the difficulty level may increase and immovable objects may be introduced.
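As a rough illustration of the kind of planning involved, the sketch below runs a breadth-first search over a simplified version of such a puzzle. The grid encoding, and the assumption that every move shifts the ball or one movable object by exactly one cell, are simplifications made for illustration; the real game lets pieces slide further and scores only the ball's moves, so treat this purely as a conceptual sketch.

```python
from collections import deque

def min_moves(grid):
    """Breadth-first search over (ball position, movable-object positions).

    grid: list of strings with 'B' ball, 'E' exit, '#' immovable object,
    'O' movable object and '.' free cell (hypothetical encoding).
    Returns the number of single-cell moves needed to reach the exit,
    or None if the puzzle cannot be solved under this simplified model.
    """
    rows, cols = len(grid), len(grid[0])
    walls, blocks, ball, exit_cell = set(), set(), None, None
    for r, row in enumerate(grid):
        for c, ch in enumerate(row):
            if ch == '#':
                walls.add((r, c))
            elif ch == 'O':
                blocks.add((r, c))
            elif ch == 'B':
                ball = (r, c)
            elif ch == 'E':
                exit_cell = (r, c)

    def free(cell, blks, also_occupied=()):
        r, c = cell
        return (0 <= r < rows and 0 <= c < cols
                and cell not in walls and cell not in blks
                and cell not in also_occupied)

    start = (ball, frozenset(blocks))
    queue, seen = deque([(start, 0)]), {start}
    steps = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    while queue:
        (pos, blks), dist = queue.popleft()
        if pos == exit_cell:
            return dist
        # Either move the ball one cell in any direction...
        for dr, dc in steps:
            nxt = (pos[0] + dr, pos[1] + dc)
            if free(nxt, blks):
                state = (nxt, blks)
                if state not in seen:
                    seen.add(state)
                    queue.append((state, dist + 1))
        # ...or slide one movable object one cell into a free spot.
        for blk in blks:
            for dr, dc in steps:
                nxt = (blk[0] + dr, blk[1] + dc)
                if free(nxt, blks, also_occupied=(pos,)):
                    state = (pos, frozenset(blks - {blk} | {nxt}))
                    if state not in seen:
                        seen.add(state)
                        queue.append((state, dist + 1))
    return None


print(min_moves(["B..E"]))  # 3: the ball simply rolls three cells to the right
```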

[Image: motionChallenge example]

Our psychometric experts have developed a new Motion Challenge practice test, now available in the complete pack.

How to Prepare for the Switch, Grid, Digit, and Motion Challenges?

As you can see, the AON smartPredict assessment is unlike any other pre-employment assessment test you may have encountered. Below are 3 smartPredict tips to help you pass the tests:

1. Ace the first questions, and the rest will fall into place. Note that the Digit, Switch, and Motion Challenges are adaptive. In adaptive tests, the level of difficulty changes based on the candidate's performance, getting harder or easier depending on the answer.

That’s why it is super important in each of these tests to ace the first questions because once you get a question wrong, you are demoted to less difficult questions; a big problem when time is limited.
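To picture the adaptive mechanism, here is a toy Python sketch. Aon does not publish its scoring model, so the one-level step size and the 1-10 difficulty scale below are assumptions made purely for illustration.

```python
def next_difficulty(current_level, answered_correctly, lowest=1, highest=10):
    """Toy adaptive item selection: step one level up after a correct answer
    and one level down after a mistake (assumed step size and bounds)."""
    step = 1 if answered_correctly else -1
    return max(lowest, min(highest, current_level + step))


level = 5
for correct in [True, True, False, True]:
    level = next_difficulty(level, correct)
print(level)  # 7: two steps up, one down, then up again
```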

2. Answer test questions quickly and accurately; the Motion, Switch, and Digit challenges are only five to six minutes each, during which you will have to answer as many questions as possible. Thus, improving your ability to solve test questions quickly can help tremendously with getting the test score you need.

3. Get exposure to an interactive test environment – The interactive experience you'll encounter on the test will not resemble anything you've seen before in high school, college, or on any other psychometric test. The methods for solving each question differ from one another; only by gaining experience through practice will you learn the best approach for each question.

And the only sure way to practice accurately is to use a preparation kit that simulates the interactive environment that you’ll encounter in the real exam.

Luckily, the JobTestPrep Aon Gamified Test PrepPack includes:

  • An interactive practice experience with an infinite number of scenarios and questions.
  • Test questions with varying difficulty levels.
  • A time constraint mirroring the one you’ll experience in the real test.

Pass Your Aon smartPredict Assessment

Familiarizing yourself with the test in advance immensely increases your chances of passing it. Practising interactive gamified questions can give you a competitive advantage over your peers!

We hope you found this page helpful, and if you have any questions feel free to contact us at [email protected].



COMMENTS

  1. PDF Solving Motion Tasks with Challenging Dynamics by Combining Kinodynamic

    an obstacle course. To solve such a motion planning problem, a variety of approaches have been proposed, which can generally be categorized into two groups: First, there are optimization-based approaches as, e.g., in [5] the Graphs of Convex Sets algorithm was proposed to solve kinodynamic planning problems using convex optimization and applied

  2. Solving Challenging Control Problems via Learning-based Motion Planning

    We present a deep reinforcement learning (deep RL) algorithm that consists of learning-based motion planning and imitation to tackle challenging control problems. Deep RL has been an effective tool for solving many high-dimensional continuous control problems, but it cannot effectively solve challenging problems with certain properties, such as sparse reward functions or sensitive dynamics. In ...

  3. Solving the motion planning problem using learning experience through

    The motion planning problem is defined as the problem of finding an optimal collision-free path reaching a predefined goal while satisfying a set of constraints. Such a problem is essential for autonomous vehicles. In 1979, Reif proved that the optimal path planning problem is PSPACE-hard [1]. Based on this proof, Lazard et al ...

  4. Motion Planning

    Motion Planning# Motion planning is the problem of finding a robot motion from start state to a goal state that avoids obstacles in the environment (also statisfies other constraints). Fig. 42 Robot arm motion planning # Configuration Space# Configuration of a robot: a representation of a robot pose, typically using the joint vector \(q\in R^n\).

  5. Motion Memory: Leveraging Past Experiences to Accelerate Future Motion

    Abstract—When facing a new motion-planning problem, most motion planners solve it from scratch, e.g., via sampling and exploration or starting optimization from a straight-line path. However, most motion planners have to experience a va-riety of planning problems throughout their lifetimes, which are yet to be leveraged for future planning.

  6. PDF Incremental Task and Motion Planning: A Constraint-Based Approach

    and motion planning [44,53] are fundamentally different. Consequently, most TMP methods [9,36,69,72] perform task planning and motion planning as separate, possibly interleaved, phases. We isolate and discuss the specific requirements for task planning, motion planning, and their interface in order to perform efficient and robust TMP.

  7. Combining Task and Motion Planning: Challenges and Guidelines

    All four aspects - task planning, scheduling, allocation and motion planning - are closely interrelated and must be combined to achieve optimal plans with regard to some objective e.g., makespan. Henceforth, we refer to this instance of combined planning as an assembly planning problem. FIGURE 2.

  8. (PDF) Solving the motion planning problem using learning experience

    The first approach uses CBR to retain K similar cases to solve the motion planning problem by merging those solutions into a set. Afterwards, it picks from this set based on a heuristic function ...

  9. Solving Motion Tasks with Challenging Dynamics by Combining Kinodynamic

    This work considers the problem of robots with challenging dynamics having to solve motion tasks that consist in transitioning from an initial state to a goal state in an environment that is obstructed by obstacles. We propose a novel combination of methods from motion planning and iterative learning control to solve these motion tasks. The proposed method only requires an approximate, linear ...

  10. Solving Challenging Control Problems via Learning-based Motion Planning

    This work proposes an approach that decomposes the given problem into two deep RL stages: motion planning and motion imitation, and demonstrates that this approach can solve challenging control problems, rocket navigation, and quadrupedal locomotion, which cannot be solved by the monolithic deep RL formulation or the version with Probabilistic Roadmap. We present a deep reinforcement learning ...

  11. Combining Task and Motion Planning: Challenges and Guidelines

    other problems, e.g., verifying through motion planning that a chosen sequence of targets to drill will be kinematically feasible and will avoid the piles of material produced by drilling.

  12. PDF Combining Task and Motion Planning: Challenges and Guidelines

    Front. Robot. AI 8:637888. doi: 10.3389/frobt.2021.637888. This paper addresses a known problem in planning for robots, namely, that of combining Task And Motion Planning (TAMP). As robots have ...

  13. Motion planning around obstacles with convex optimization

    The main challenge in collision-free motion planning is the nonconvexity of the search space. Similarly to , ... The value of δ opt was computed, just for analysis purposes, by solving the mixed-integer problem to global optimality using branch and bound.

  14. Complex Planning Capability Test

    Aon Assessments Solutions operates as part of Aon's global human capital offering. We help clients drive business performance through better people performa...

  15. 2.6 Problem-Solving Basics for One-Dimensional Kinematics

    Introduction to Dynamics: Newton's Laws of Motion; 4.1 Development of Force Concept; 4.2 Newton's First Law of Motion: Inertia; 4.3 Newton's Second Law of Motion: Concept of a System; 4.4 Newton's Third Law of Motion: Symmetry in Forces; 4.5 Normal, Tension, and Other Examples of Forces; 4.6 Problem-Solving Strategies; 4.7 Further Applications of Newton's Laws of Motion

  16. (PDF) Motion planning with complex goals

    View PDF. Sampling-based motion planning with temporal goals. 2010 •. Moshe Vardi. Abstract This paper presents a geometry-based, multi-layered synergistic approach to solve motion planning problems for mobile robots involving temporal goals. The temporal goals are described over subsets of the workspace (called propositions) using temporal ...

  17. Solving Motion Planning Tasks with a Scalable Generative Model

    Figure 1: We are motivated to provide a generative model as the central unit that supports all the learning-based motion planning tasks in the autonomous driving domain. We categorize the tasks into four distinct sub-domains: data generation, model evaluation, model training, and model inference.

  18. Solving Motion Planning Tasks with a Scalable Generative Model

    Solving Motion Planning Tasks with a Scalable Generative Model. As autonomous driving systems being deployed to millions of vehicles, there is a pressing need of improving the system's scalability, safety and reducing the engineering cost. A realistic, scalable, and practical simulator of the driving world is highly desired.

  19. A review of recent trend in motion planning of industrial robots

    Motion planning is an integral part of each robotic system. It is critical to develop an effective motion in order to achieve a successful performance. The ability to generate a smooth, optimal, and precise trajectory is crucial for a robotic arm to accomplish a complex task. Classical approaches such as artificial potential fields, sampling-based, and bio-inspired heuristic methods, have been ...

  20. PDF Collaborative Problem Solving

    solving processes: exploring and understanding, representing and formulating, planning and executing, and monitoring and reflecting. Chapter 1: Executive Summary 3 ATC21S has also developed a framework for assessing collaborative problem solving. This ... problem solving differs little from standard test development. The major departure is in

  21. Aon smartPredict Challenges Practice Guide [2024]

    The Aon smartPredict PrepPack is designed to thoroughly prepare you for the online assessment, providing a comprehensive preview of the various challenges. It includes practice tests and study guides across all subjects, including a new motion challenge practice. Taking your time to prepare is crucial, as these challenges are unique and unlike ...

  22. Pass The Aon Gamified Test

    motion Challenge: 6 min: Yes: Complex Planning ... The motionChallenge (Complex Planning Capability Test) measures your ability to plan, solve problems, plan ahead and overcome barriers. ... college, or on any other psychometric test. The methods for solving each question differ from one another; only by gaining experience through practice ...

  23. Motion Challenge

    Join TCS CodeVita Now - The Global Coding Contest is Live! Unlock your coding potential with TCS CodeVita. Compete with the best minds across the globe!