Sep 23 2012
The greedy search algorithm is a traditional approach to robotic navigation that follows a local gradient. With this method, a robot selects its next move purely on the basis of position, choosing whichever option places it nearest to its target.
The algorithm is named greedy because it picks the locally best option without taking the long-term cost into account. In a swarm of robots performing a greedy search, each robot is capable of finding the target on its own without communicating with the others, so the loss of communication links or of individual robots is not a concern.
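The local rule described above can be sketched in a few lines. This is a minimal illustration, assuming a robot on a 2-D plane whose candidate moves are given as a list of neighboring positions; the function names are placeholders, not part of any particular robotics library.

```python
import math

def greedy_step(position, neighbors, target):
    """Pick the neighboring position closest to the target (purely local choice)."""
    return min(neighbors, key=lambda p: math.dist(p, target))

# One step on a 4-connected grid: from (0, 0), heading toward target (3, 4).
step = greedy_step((0, 0), [(1, 0), (0, 1), (-1, 0), (0, -1)], (3, 4))
# -> (0, 1), the neighbor nearest to (3, 4)
```

Because each step considers only the current neighbors, the robot needs no map and no communication, which is exactly the trade-off described above: simplicity and robustness in exchange for ignoring long-term cost.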
Greedy Search Algorithm – Types
The greedy search algorithm is of three types, which are discussed below.
Best-First Search
In this method, search nodes are expanded one at a time. Unlike simpler search algorithms, a best-first search orders the nodes by an estimate of the cost of a solution that passes through each node. Best-first searches include A*, weighted A*, Window A*, and multi-state commitment search. The following section discusses some of these best-first search methods:
- Weighted A* is a simple suboptimal search in which the conventional A* node evaluation function, f(n) = g(n) + h(n), is modified to give more importance to the heuristic term, becoming f′(n) = g(n) + w·h(n).
- Greedy best-first search is a logical extreme of weighted A*: instead of scaling h relative to g, it ignores g entirely and orders nodes by h alone.
- A* — when selecting nodes to expand, focal-list variants of A* consider two information sources, maintaining alongside the open list a subset called the focal list from which the next node to expand is chosen.
- Window A* is an incomplete search that runs A* over a sliding window of size s rather than over the entire open list. Each node selected for expansion is compared against the depth of the deepest node found so far.
- Multi-state commitment search resembles A* and Window A* in that it maintains a list of nodes to be expanded, called the commit list.
- What A*, Window A*, and multi-state commitment search have in common is that the size of the search space can be reduced by forming a subset of the most promising nodes and expanding only those.
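The evaluation function f′(n) = g(n) + w·h(n) discussed above can be demonstrated with a small best-first search. This is a sketch, not a production planner: the grid, the neighbor function, and the Manhattan heuristic are all assumptions made for illustration. With w = 1 the search behaves like A*; as w grows it approaches greedy best-first search, which ignores g entirely.

```python
import heapq

def best_first_search(start, goal, neighbors, h, w=1.0):
    """Best-first search ordered by f'(n) = g(n) + w*h(n).
    w = 1 gives A*; a large w approximates greedy best-first search."""
    open_list = [(w * h(start), 0, start, [start])]  # (f', g, node, path)
    closed = set()
    while open_list:
        f, g, node, path = heapq.heappop(open_list)
        if node == goal:
            return path
        if node in closed:
            continue
        closed.add(node)
        for nbr, cost in neighbors(node):
            if nbr not in closed:
                heapq.heappush(open_list,
                               (g + cost + w * h(nbr), g + cost, nbr, path + [nbr]))
    return None

# Illustrative setup: 4-connected moves on a 3x3 grid, unit step cost,
# Manhattan-distance heuristic toward the goal (2, 2).
def nbrs(p):
    x, y = p
    return [((nx, ny), 1) for nx, ny in [(x+1, y), (x-1, y), (x, y+1), (x, y-1)]
            if 0 <= nx < 3 and 0 <= ny < 3]

h = lambda p: abs(p[0] - 2) + abs(p[1] - 2)
path = best_first_search((0, 0), (2, 2), nbrs, h)  # 5-node optimal path
```

Raising w here trades optimality for speed, which is the tunable quality-versus-speed knob that, as noted below, hill-climbing algorithms lack.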
Hill-Climbing
A hill-climbing algorithm expands only one node from the root at a time, and then expands only the best child of the nodes expanded earlier. The algorithm has two demerits: (a) it has no tunable parameter for trading solution quality against speed, and (b) it often fails to find an acceptable solution. The types of hill-climbing algorithms are given below:
- Enforced Hill-Climbing
- Real-Time Search
- Depth-First Branch and Bound
Hill-climbing algorithms are brittle in the dynamic robot path-planning domain, which contains many nodes without children.
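The basic hill-climbing loop, and the brittleness just mentioned, can be sketched as follows. The `children` and `score` callables are hypothetical stand-ins for a real successor generator and evaluation function.

```python
def hill_climb(start, children, score, max_steps=100):
    """Repeatedly move to the best child; stop at a dead end or local optimum.
    `children` and `score` are placeholder callables for illustration."""
    node = start
    for _ in range(max_steps):
        kids = children(node)
        if not kids:
            break  # dead end: nodes without children make this search brittle
        best = max(kids, key=score)
        if score(best) <= score(node):
            break  # no child improves on the current node: local optimum
        node = best
    return node

# Toy example: maximize f(x) = -(x - 5)^2 by integer steps left or right.
best = hill_climb(0, lambda x: [x - 1, x + 1], lambda x: -(x - 5) ** 2)
# -> 5
```

Note there is no parameter to adjust here except the step budget: the search either climbs or stops, which is why it cannot trade quality for speed the way weighted A* can.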
Beam Search
Beam searches are divided into two groups, namely, best-first beam search and breadth-first beam search. Best-first beam search works like a best-first search, except that the lowest-quality nodes are removed from the open list when the list exceeds a fixed size limit. Breadth-first beam search expands nodes layer by layer, keeping only a fixed number of the best nodes at each depth layer.
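The layer-by-layer pruning of breadth-first beam search can be sketched as below. The grid, neighbor function, and heuristic are assumptions reused purely for illustration; the beam width of 2 is arbitrary.

```python
def breadth_first_beam_search(start, goal, neighbors, h, beam_width=2):
    """Expand every node in the current layer, then keep only the
    `beam_width` most promising successors (by heuristic h) for the next layer."""
    layer = [(start, [start])]
    visited = {start}
    while layer:
        successors = []
        for node, path in layer:
            if node == goal:
                return path
            for nbr in neighbors(node):
                if nbr not in visited:
                    visited.add(nbr)
                    successors.append((nbr, path + [nbr]))
        successors.sort(key=lambda item: h(item[0]))
        layer = successors[:beam_width]  # pruning makes the search incomplete
    return None

# Illustrative setup: 4-connected 3x3 grid, Manhattan heuristic toward (2, 2).
def nbrs(p):
    x, y = p
    return [(nx, ny) for nx, ny in [(x+1, y), (x-1, y), (x, y+1), (x, y-1)]
            if 0 <= nx < 3 and 0 <= ny < 3]

h = lambda p: abs(p[0] - 2) + abs(p[1] - 2)
path = breadth_first_beam_search((0, 0), (2, 2), nbrs, h)
```

The pruning step is what distinguishes this from plain breadth-first search: memory per layer is bounded by the beam width, at the cost of possibly discarding the only path to the goal.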
Basic Principles - Example
The principles of the greedy search algorithm can be best illustrated with the knapsack problem, which is explained as follows: a woman packing items into her knapsack wants to carry all the most precious things, but there is a limit on what fits into the knapsack. Each item i is assumed to have a weight wi and a value vi, and only a total weight of W can fit into the knapsack. The things this lady wants to carry can be divided into fractions without changing the value of their contents; hence the problem is called the fractional knapsack problem.
This problem can be solved with a greedy algorithm by computing the value per unit weight of each item, vi/wi, and taking as much as possible of the item with the greatest value per unit weight. If there is still space, she takes as much as possible of the item with the next greatest value per unit weight, and so forth.
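The greedy procedure just described can be written out directly. The item list below is a made-up example chosen so the last item must be split.

```python
def fractional_knapsack(items, capacity):
    """Greedy fractional knapsack: take items in decreasing value-per-weight
    order (v_i / w_i), splitting the final item to fill the remaining space.
    `items` is a list of (weight, value) pairs."""
    total_value = 0.0
    remaining = capacity
    for weight, value in sorted(items, key=lambda it: it[1] / it[0], reverse=True):
        take = min(weight, remaining)       # whole item, or whatever still fits
        total_value += value * (take / weight)
        remaining -= take
        if remaining == 0:
            break
    return total_value

# Items as (weight, value): ratios are 6, 5, and 4 per unit weight.
best = fractional_knapsack([(10, 60), (20, 100), (30, 120)], 50)
# 60 + 100 + (20/30) * 120 = 240.0
```

Because items are divisible, this greedy choice is provably optimal for the fractional problem; the same strategy fails for the 0–1 variant discussed next, where an item must be taken whole or not at all.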
The 0–1 knapsack problem is the same as the fractional knapsack problem except that the items cannot be divided into fractions. Each item, such as a laptop or a television set, has to be taken as a whole or left behind.