
Object detection and localization using approximate nearest neighbor search: random tree implementation

dc.creator: Bekele, Esubalew Tamirat
dc.date.accessioned: 2020-08-22T00:15:22Z
dc.date.available: 2011-04-21
dc.date.issued: 2009-04-21
dc.identifier.uri: https://etd.library.vanderbilt.edu/etd-03302009-162240
dc.identifier.uri: http://hdl.handle.net/1803/11751
dc.description.abstract: A machine’s intelligence is related to its autonomy. Making robots autonomous is perhaps the most difficult current challenge in the state of the art of robotics research. Robots are said to be autonomous if they can learn and represent percepts through their own means and internal representations and, as a result, can undertake tasks in a real, dynamic physical world without significant human intervention. The major problems of autonomy involve acquiring, representing, and learning percepts and using them for autonomous control. Today’s robots are far from complete autonomy. There is extensive human intervention in the design and construction of their perceptual and control systems. Rather than allowing the robot to learn percepts and control itself, human programmers encode their own ideas about knowledge and control algorithms into the robots. As it is cumbersome and practically impossible to encode all knowledge about a dynamic and unstructured environment, autonomously learning percepts may be the only way for a robot to achieve complete autonomy. Visual percepts would be among those to be learned by the robot autonomously. The large amounts of data associated with vision necessitate an efficient representation for visual percepts. The visual features to use, the data structures to represent them, and the algorithms used to learn them are critical to the success of autonomous perception. This work builds on existing computer vision algorithms that learn a specific percept with minimal intervention. It combines two features, a color probability density function and a texture measure, gleaned from overlapping portions of an image. It uses an approximate nearest neighbor learning algorithm, implemented with a random 3-way search tree, to learn the visual input features and to associate them with labeled percepts. Once learned, the search tree is used to segment the input images into classes. The time and space complexities of the algorithm are studied and compared with a pure (naïve) nearest neighbor search implementation. The accuracy of the system is then tested. The results of the experiments suggest that the random tree implementation is an efficient and accurate algorithm. The classification running time of the algorithm is found to be approximately logarithmic with linear space requirements, whereas the running time of the pure algorithm is approximately quadratic for very high-dimensional feature vectors like the ones used in this project.
dc.format.mimetype: application/pdf
dc.subject: approximate nearest neighbor search
dc.subject: nearest neighbor search
dc.subject: object recognition
dc.subject: random search tree
dc.subject: object localization
dc.subject: ann
dc.subject: object detection
dc.title: Object detection and localization using approximate nearest neighbor search: random tree implementation
dc.type: thesis
dc.type.material: text
thesis.degree.name: MS
thesis.degree.level: thesis
thesis.degree.discipline: Electrical Engineering
thesis.degree.grantor: Vanderbilt University
local.embargo.terms: 2011-04-21
local.embargo.lift: 2011-04-21
dc.contributor.committeeChair: Richard Alan Peters
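
The abstract above describes classification by approximate nearest neighbor search over a random 3-way search tree. The sketch below is a rough Python illustration of that general idea only, not the thesis's actual implementation: the split rule (two order-statistic pivots along a randomly chosen dimension), the leaf size, and all identifiers are assumptions made for the example.

import random

LEAF_SIZE = 8  # assumed: stop splitting when few points remain


def euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5


def build_tree(points, labels):
    """Recursively split labeled feature vectors into three children along a
    randomly chosen dimension, using two pivot thresholds."""
    if len(points) <= LEAF_SIZE:
        return {"leaf": True, "points": points, "labels": labels}
    dim = random.randrange(len(points[0]))      # random split dimension
    values = sorted(p[dim] for p in points)
    lo = values[len(values) // 3]               # lower-third boundary
    hi = values[2 * len(values) // 3]           # upper-third boundary
    if lo == hi:                                # degenerate split: make a leaf
        return {"leaf": True, "points": points, "labels": labels}
    buckets, tags = ([], [], []), ([], [], [])
    for p, y in zip(points, labels):
        idx = 0 if p[dim] < lo else (1 if p[dim] < hi else 2)
        buckets[idx].append(p)
        tags[idx].append(y)
    if any(len(b) == 0 for b in buckets):       # avoid empty children
        return {"leaf": True, "points": points, "labels": labels}
    return {
        "leaf": False, "dim": dim, "lo": lo, "hi": hi,
        "children": [build_tree(b, t) for b, t in zip(buckets, tags)],
    }


def query(tree, q):
    """Descend a single root-to-leaf path (hence 'approximate'), then scan
    the reached leaf exactly for the nearest stored vector."""
    node = tree
    while not node["leaf"]:
        v = q[node["dim"]]
        idx = 0 if v < node["lo"] else (1 if v < node["hi"] else 2)
        node = node["children"][idx]
    best = min(zip(node["points"], node["labels"]),
               key=lambda pl: euclidean(pl[0], q))
    return best[1]  # label of the approximate nearest neighbor


if __name__ == "__main__":
    # Usage: classify a query feature vector against labeled training vectors
    # (random synthetic data standing in for color/texture features).
    random.seed(0)
    train = [[random.random() for _ in range(16)] for _ in range(500)]
    labels = ["object" if sum(p) > 8 else "background" for p in train]
    tree = build_tree(train, labels)
    print(query(tree, [0.9] * 16))

Because only one path is followed per query, lookup cost grows roughly with tree depth rather than with the number of stored vectors; randomized-tree ANN methods commonly refine the approximation by building several such trees and merging their candidate neighbors.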

