No. | Slide | Text |
1 |
 |
Notes 4: Informed and Heuristic Search. ICS 171, Winter 2001 |
2 |
 |
Summary: A more practical setup. Optimal search strategies: path costs, examples. Now we want to search for the minimum cost path to goal G: optimality, exhaustive search, uniform-cost search. |
3 |
 |
Searching for the Minimum Cost Path: Previously we only cared about finding a path to any goal G. Each transition from state to state (operator) has a cost. Now we want the minimum cost path to a goal G. Cost of a path = sum of the costs of the individual transitions along the path; "best" = minimum cost path. Note: there are 2 different types of cost: 1. the cost of traversing the path to the goal ("path-cost"/online cost), i.e., the cost from start S to a node plus the cost from that node to goal G; 2. the space and time complexity of the search algorithm itself ("search-cost"/offline cost). Path-cost = sum of individual node-to-node costs; individual costs are always assumed positive. |
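To make the path-cost definition concrete, here is a minimal Python sketch; the adjacency-dict graph and its edge costs are made up for illustration and are not the figures from the slides.

```python
# Illustrative weighted graph as an adjacency dict (hypothetical costs,
# not the slides' figures).
graph = {
    'S': {'A': 2, 'D': 5},
    'A': {'B': 4, 'D': 3},
    'D': {'E': 2},
    'E': {'F': 4},
    'F': {'G': 3},
}

def path_cost(path, graph):
    """Path-cost = sum of the individual node-to-node costs along the path."""
    return sum(graph[a][b] for a, b in zip(path, path[1:]))

print(path_cost(['S', 'A', 'D', 'E', 'F', 'G'], graph))  # 2 + 3 + 2 + 4 + 3 = 14
```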
4 |
 |
Examples of Path-Cost: Navigation: path-cost = distance to a node in miles, OR the time to travel on foot or by car, OR the number of cities visited - ALL DIFFERENT; minimum => minimum time, least fuel. VLSI Design: path-cost = length of wires between chips; minimum => least clock/signal delay. 8-Puzzle: path-cost = number of pieces moved; minimum => least time to solve the puzzle. |
5 |
 |
Example of Path Costs: Optimal (minimum cost) path is S-A-D-E-F-G |
6 |
 |
Completeness and Optimality: A search procedure is complete if it is guaranteed to find a goal state when one exists; e.g., DFS is not complete in general (if repeated states exist), but BFS is complete. A search procedure is optimal if it is guaranteed to find the minimum cost path when multiple paths to a goal state exist; e.g., DFS and BFS are not optimal. Note that optimal => complete, but not vice versa. |
7 |
 |
Exhaustive Search: Basic idea: find all possible paths to goal states, then choose the best one (best = minimum path-cost). For example, use BFS, modified so that it continues until all goal states are found - but BFS scales as O(b^d) in the worst case! Conclusion: it is certainly complete and optimal, but exhaustive search is impractical except for very small problems. |
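As a concrete illustration of the idea, the sketch below (assuming the same adjacency-dict graph shape as the earlier path-cost example) enumerates every loop-free path and keeps the cheapest; the exponential number of such paths is exactly why this is impractical.

```python
# Exhaustive-search sketch (assumed adjacency-dict graph with positive costs;
# names are illustrative, not from the slides).
def all_paths(graph, node, goal, path=()):
    """Enumerate every loop-free path from node to goal."""
    path = path + (node,)
    if node == goal:
        yield list(path)
    else:
        for nbr in graph.get(node, {}):
            if nbr not in path:                 # avoid revisiting states on this path
                yield from all_paths(graph, nbr, goal, path)

def exhaustive_best_path(graph, start, goal):
    """Complete and optimal, but exponential: it examines every path."""
    return min(all_paths(graph, start, goal),
               key=lambda p: sum(graph[a][b] for a, b in zip(p, p[1:])),
               default=None)
```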
8 |
 |
The Shortest Path Principle: We are searching for the minimum cost path to G. Suppose the minimum (optimal) path to a goal state G has cost c. Shortest Path Principle: we can ignore any open nodes in the search tree whose path-cost is greater than or equal to c. Why? Because we already have a solution with path-cost c, and expanding these nodes can only increase their costs further. But we must still explore all other open nodes with path-cost < c. Why? They could produce a path to a goal with path-cost < c. We can do this simply by ordering the nodes in Q according to the path-cost to each node. |
9 |
 |
Uniform Cost Search: Uniform cost search orders the nodes on Q according to the path cost from S, and always expands the node with minimum path cost from S.
Initialize: Let Q = {S}
While Q is not empty
  pull Q1, the first element in Q
  if Q1 is a goal, report(success) and quit
  else child_nodes = expand(Q1)
    <eliminate child_nodes which represent loops>
    put remaining child_nodes in Q
    sort Q according to path-cost to each node
end
Continue |
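A possible Python rendering of this pseudocode, using a priority queue in place of explicit sorting; the adjacency-dict graph format and function names are assumptions, not course code.

```python
import heapq

def uniform_cost_search(graph, start, goal):
    """Always expand the open node with minimum path cost g from the start."""
    q = [(0, start, [start])]             # Q ordered by path-cost from S
    while q:
        g, node, path = heapq.heappop(q)  # pull Q1, the cheapest element
        if node == goal:
            return g, path                # report(success)
        for nbr, cost in graph.get(node, {}).items():
            if nbr not in path:           # eliminate child nodes that are loops
                heapq.heappush(q, (g + cost, nbr, path + [nbr]))
    return None                           # Q empty: no path exists

# Example use (hypothetical costs):
# uniform_cost_search({'S': {'A': 2, 'D': 5}, 'A': {'D': 3}, 'D': {'G': 8}, 'G': {}}, 'S', 'G')
```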
10 |
 |
Example of Uniform Cost in action: [search-tree diagram: S is expanded into nodes A and D, labelled with their path costs 2 and 5] |
11 |
 |
Example of Uniform Cost in action: [search-tree diagram: the fully expanded uniform-cost search tree, each node labelled with its path cost from S] |
12 |
 |
Properties of Uniform Cost Search: Complete? Yes. Optimal? Assume that all costs are greater than 0. When we go to expand a node, we test for a goal state; if it is a goal state, it is selected (say with path cost c). If selected, is this path the lowest cost path to a goal? All other currently open paths have higher cost (by definition); since all costs > 0, all of them must lead to goal paths with cost > c => the selected path is the optimal one. This is basically Dijkstra's algorithm (ICS 161) applied to search trees. Complexity? Worst case O(b^d) in both space and time; in practice, you usually run out of space before you run out of time. |
13 |
 |
Summary: Path Cost: in practice we often want the best path to a goal state, e.g., the minimum cost path. Optimality: a search algorithm which is guaranteed to find the best path to a goal. Uniform Cost Search: this is optimal, but in practice it can have exponential complexity (especially in memory). Reading: Nilsson Ch. II-9; R.&N. Ch. 4 (4.1 - 4.3). |
14 |
 |
Notes 5: Heuristic Search. ICS 171, Winter 2001 |
15 |
 |
Summary: Heuristic search strategies: heuristics and heuristic estimates, best-first search, hill-climbing, A*: optimal search using heuristics. We will see that A* can find optimal paths and expand relatively few nodes by using heuristics. |
16 |
 |
Example of Path Costs: Optimal (minimum cost) path is S-A-D-E-F-G |
17 |
 |
Heuristics and Search: In general, a heuristic is a "rule of thumb" based on domain-dependent knowledge that helps you solve a problem. In search, one uses a heuristic function of a state, where h(node) = estimated cost of the cheapest path from that node's state to a goal state G; h(G) = 0; h(other nodes) ≥ 0. (Note: we will assume all individual node-to-node costs are > 0.) How does this help in search? We can use knowledge (in the form of h(node)) to reduce search time; generally, we will explore more promising nodes before less promising ones. |
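One common concrete choice of h is straight-line distance on a map; the coordinates below are invented purely to show the shape of such a heuristic function.

```python
import math

# Sketch of a heuristic function (assumption: states carry (x, y) map
# coordinates; straight-line distance never overestimates road distance).
coords = {'S': (0, 0), 'A': (2, 1), 'D': (1, 3), 'G': (6, 4)}  # hypothetical

def h(node, goal='G'):
    """Estimated cost of the cheapest path from node to the goal; h(G) = 0."""
    (x1, y1), (x2, y2) = coords[node], coords[goal]
    return math.hypot(x2 - x1, y2 - y1)
```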
18 |
 |
Example of Heuristic Functions: [graph diagram: each node S, A, B, C, D, E, F is labelled with its heuristic estimate h of the cost to reach the goal G; h(G) = 0] |
19 |
 |
Best First Search: Best-first uses the estimated path cost from a node to the goal (the heuristic, hcost) and ignores the actual path cost; after each expansion it sorts the nodes according to the estimated path-cost from node to goal. In effect, it always expands the most promising node on the fringe (according to the heuristic); e.g., when finding a route to Seattle, always extend the path from the most northerly city among the existing routes. Uniform Cost uses the path cost from the root to a node; after each expansion it sorts the nodes according to the path-cost from start S to the node. |
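A greedy best-first sketch in the same style as the uniform cost sketch above: the only change is that the queue is ordered by h(node) alone (the graph format and the heuristic h are assumed, as before).

```python
import heapq

def best_first_search(graph, start, goal, h):
    """Always expand the fringe node that looks closest to the goal by h."""
    q = [(h(start), start, [start])]       # Q ordered by h alone, g is ignored
    while q:
        _, node, path = heapq.heappop(q)   # most promising node by the heuristic
        if node == goal:
            return path                    # a path, but not necessarily the optimal one
        for nbr in graph.get(node, {}):
            if nbr not in path:            # eliminate loops
                heapq.heappush(q, (h(nbr), nbr, path + [nbr]))
    return None
```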
20 |
 |
Example of Best-First Search in action: [search-tree diagram: each node is labelled with its heuristic value h; the goal is reached with h = 0] Note: this is not the optimal path for this problem. |
21 |
 |
Properties of Uniform Cost and Best-First Search: Completeness: both best-first and uniform cost search are complete. Optimality: best-first is not optimal in general. Why? It orders nodes according to the estimated cost to goal G, and there is no reason this will produce the truly lowest-cost path first; e.g., "most northerly" => the route to Seattle might go through Denver! Uniform cost is optimal. Time and Space Complexity: both could be O(b^d) in the worst case, but in practice best-first usually uses far fewer nodes. |
22 |
 |
Hill-Climbing or Greedy Search: General idea of hill-climbing: always expand the node which appears closest to the goal (according to the heuristic function h) ("greedy"), and only keep 1 node in memory (on Q) at any time (no backtracking). Also known as "steepest ascent" or "steepest descent": it can be used to maximize ("climb") or minimize ("descend"). Is it complete? Optimal? What is the time and space complexity? Relation to the best-first search algorithm? It is a special case where the memory is limited to a single node at any time (i.e., no backtracking). Hill-climbing with multiple restarts from randomly chosen initial conditions is often extremely simple to implement and can be quite effective (usually worth trying). |
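A rough sketch of hill-climbing with random restarts; neighbours, score and random_state are assumed problem-specific callbacks, not functions defined in the slides.

```python
def hill_climb(start, neighbours, score):
    """Greedy ascent: keep only the current state, no backtracking."""
    current = start
    while True:
        best = max(neighbours(current), key=score, default=None)
        if best is None or score(best) <= score(current):
            return current                  # stuck at a local maximum
        current = best

def hill_climb_restarts(random_state, neighbours, score, restarts=10):
    """Restart from random initial states and keep the best local maximum found."""
    results = (hill_climb(random_state(), neighbours, score)
               for _ in range(restarts))
    return max(results, key=score)
```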
23 |
 |
Combining heuristic and path costs: So far, uniform cost search uses the cost of the path from the start to the current node, while best-first uses a heuristic estimate of the cost from the node to the goal. Uniform cost is optimal; best-first is much faster in practice. Combine the two! We can use both costs together: let n be some node in the search tree; then f(n) = g(n) + h(n) is the estimated cost from S to G via n, where g(n) = d(S to n), the true path cost from S to n (exact), and h(n) = h(n to G), the heuristic estimate of the path cost from n to G. What do we need to require of h to have optimality? |
24 |
 |
The A* Algorithm: A heuristic h is admissible if, for any node n, it does NOT overestimate the true path cost from n to the nearest goal. The A* search algorithm orders the nodes on Q according to f(n) = g(n) + h(n), where h(n) is an admissible heuristic; i.e., it sorts the nodes on Q using an admissible heuristic h*. It is like uniform cost, but uses fcost(node) = path-cost(S to node) + h(node) rather than just path-cost(S to node). Note that uniform cost search can be viewed as A* search where h(n) = 0 for all n (the heuristic that is 0 for every node is clearly admissible - why?). |
25 |
 |
Pseudo-code for the A* Search Algorithm:
Initialize: Let Q = {S}
While Q is not empty
  pull Q1, the first element in Q
  if Q1 is a goal, report(success) and quit
  else child_nodes = expand(Q1)
    <eliminate child_nodes which represent loops>
    put remaining child_nodes in Q
    sort Q according to fcost = pathcost(S to node) + h*(node)
end
Continue |
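A possible Python version of this pseudocode; it differs from the uniform cost sketch above only in the priority used for the queue (the graph format and an admissible heuristic h are assumed).

```python
import heapq

def a_star_search(graph, start, goal, h):
    """Expand nodes in order of f(n) = g(n) + h(n), with h admissible."""
    q = [(h(start), 0, start, [start])]      # entries are (f, g, node, path)
    while q:
        f, g, node, path = heapq.heappop(q)
        if node == goal:
            return g, path                   # optimal if h never overestimates
        for nbr, cost in graph.get(node, {}).items():
            if nbr not in path:              # eliminate loops
                g2 = g + cost
                heapq.heappush(q, (g2 + h(nbr), g2, nbr, path + [nbr]))
    return None

# With h(n) = 0 for every n this reduces to uniform cost search.
```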
26 |
 |
[graph diagram: the example state-space graph with nodes S, A, B, C, D, E, F, G and the cost of each edge] |
27 |
 |
Example of A* in action: [search-tree diagram: S is expanded into two children with f-costs 5 + 8.9 = 13.9 and 2 + 10.4 = 12.4] |
28 |
 |
Example of A* Algorithm in action: [search-tree diagram: each expanded node is labelled with f = g + h, e.g. 5 + 8.9 = 13.9 and 2 + 10.4 = 12.4; one branch is a dead end, and the goal G is reached with f = 13 + 0 = 13] |
29 |
 |
Comments on heuristic estimation: The estimate of the distance is called a heuristic; typically it comes from domain knowledge, e.g., the straight-line distance between 2 points. If the heuristic never overestimates, i.e., h*(N) is less than or equal to realcost(N to G), it is "admissible". A* search with an admissible heuristic is optimal, i.e., if one uses an admissible heuristic to order the search, one is guaranteed to find the optimal solution. The closer the heuristic is to the real (unknown) path cost, the more effective it will be; i.e., if h1(n) and h2(n) are two admissible heuristics and h1(n) ≤ h2(n) for every node n, then A* search with h2(n) will in general expand fewer nodes than A* search with h1(n). |
30 |
 |
Examples of Heuristic Functions for A*: For the 8-puzzle problem: (1) the number of tiles in the wrong position - is this admissible? (2) the sum of the distances of the tiles from their goal positions, where distance is counted as the sum of vertical and horizontal tile displacements ("Manhattan distance") - is this admissible? How can we invent admissible heuristics in general? Look at a "relaxed" problem where constraints are removed; e.g., we can move in straight lines between cities; e.g., we can move tiles independently of each other. |
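Both 8-puzzle heuristics mentioned above are easy to compute; a sketch, assuming states are represented as 9-tuples in row-major order with 0 for the blank:

```python
GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)   # assumed goal configuration

def h_misplaced(state, goal=GOAL):
    """h1: number of tiles (not counting the blank) in the wrong position."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def h_manhattan(state, goal=GOAL):
    """h2: sum of vertical + horizontal displacements of each tile."""
    pos = {tile: (i // 3, i % 3) for i, tile in enumerate(goal)}
    return sum(abs(i // 3 - pos[t][0]) + abs(i % 3 - pos[t][1])
               for i, t in enumerate(state) if t != 0)
```

Both are admissible: every misplaced tile needs at least one move, and no single move can reduce a tile's Manhattan distance by more than 1.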
31 |
 |
Heuristic Underestimates of Path Cost: Underestimates: say we use a heuristic function h* which is never greater than the true path-cost from a node to a goal G, and we order Q according to fcost(node) = d(S to node) + h*(node to G). This is optimal. Why? Consider the fringe Q and an arbitrary node N "behind" the first G on the priority queue Q. Then fcost(G) < fcost(N). On the LHS, fcost(G) = realcost(G), since by definition h(G) = 0. On the RHS, fcost(N) is less than or equal to realcost(N) (since the heuristic underestimates, by assumption). So realcost(G) < realcost(N). We have shown that every node behind the first goal G popped from Q has a higher cost than G, which means that the first G popped has the lowest cost of all possible goals; hence A* is optimal. |
32 |
 |
Effectiveness of A* Search Algorithm: Average number of nodes expanded (averaged over 100 randomly generated 8-puzzle problems):
d    IDS        A*(h1)   A*(h2)
2    10         6        6
4    112        13       12
8    6384       39       25
12   364404     227      73
14   3473941    539      113
20   ------     7276     676
h1 = number of tiles in the wrong position; h2 = sum of Manhattan distances |
33 |
 |
Summary: In practice we often want the goal with the minimum cost path. Exhaustive search is impractical except on small problems. Uniform cost search always expands the lowest path-cost node on the fringe. Heuristic estimates of the path cost from a node to the goal can be efficient in reducing the search space (used in best-first search). The A* algorithm combines both of these ideas; with admissible heuristics (which underestimate), it is guaranteed to be optimal. Reading: Nilsson Ch. II-9; R.&N. Ch. 4 (4.1 - 4.3). |