
Object Recognition Using a Neural Network and Invariant Zernike Features


Presentation Transcript


  1. Object Recognition Using a Neural Network and Invariant Zernike Features Kumar Srijan (200602015) Syed Ahsan(200601096)

  2. Problem Statement • To create a Neural Networks based multiclass object classifier which can do rotation, scale and translation invariant object recognition.

  3. Translation Invariance

  4. Scale Invariance

  5. Rotation Invariance

  6. Translation, Rotation and Scale Invariance

  7. Solution • Normalize the image so that scaled and translated images look the same. • Extract features from the images which are invariant to rotation. • Create a classifier based on these features.

  8. All of these images are the same under scale and translation invariance, and each of them reduces to:

  9. This is accomplished by normalization with respect to the first two orders of geometric moments. • The geometric moment of order (p + q) for an image f(x, y) is: Mpq = Σx Σy x^p · y^q · f(x, y) • The zeroth order moment, M00, represents the total mass of the image. • The two first order moments, (M10, M01), give the position of the center of mass.
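A minimal NumPy sketch of this computation (the function name and the toy image below are illustrative, not taken from the slides):

```python
import numpy as np

def geometric_moment(img, p, q):
    """Mpq = sum over x, y of x^p * y^q * f(x, y) for a 2-D image array f."""
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    return float(np.sum((xs ** p) * (ys ** q) * img))

# Example: total mass and centre of mass of a toy binary image
img = np.zeros((8, 8)); img[2:5, 3:6] = 1.0
m00 = geometric_moment(img, 0, 0)           # total mass
cx = geometric_moment(img, 1, 0) / m00      # M10 / M00
cy = geometric_moment(img, 0, 1) / m00      # M01 / M00
print(m00, cx, cy)
```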

  10. Translation Invariance • Translation invariance is achieved by transforming the image into a new one whose first order moments, M10 and M01, are both equal to zero. • So, we transform the original image f(x, y) into f(x + x’, y + y’), where x’ = M10/M00 and y’ = M01/M00.
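A hedged sketch of this step: the slides shift the image so that its first order moments vanish; in practice this means moving the centre of mass to a fixed reference point, here taken (as an assumption) to be the image centre, using SciPy's ndimage.shift:

```python
import numpy as np
from scipy import ndimage

def translate_to_centroid(img):
    """Move the centre of mass (x' = M10/M00, y' = M01/M00) to the image centre,
    so that the first order moments about that centre become zero."""
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    m00 = img.sum()
    cx = (xs * img).sum() / m00
    cy = (ys * img).sum() / m00
    shift = (img.shape[0] / 2.0 - cy, img.shape[1] / 2.0 - cx)  # (rows, cols)
    return ndimage.shift(img, shift, order=1)
```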

  11. Scale Invariance • Enlarge or reduce the object such that its zeroth order moment, M00, is set equal to a predetermined value β. • This is done after making the image translation invariant. • It is done by changing the image to a new function f(x/a, y/a), where a = sqrt(β/M00).
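A sketch of the scaling step under the same assumptions; the value of β is not stated in the slides, so the target mass below is only a placeholder:

```python
import numpy as np
from scipy import ndimage

BETA = 4000.0   # placeholder for the predetermined value β

def scale_to_beta(img, beta=BETA):
    """Rescale the (already centred) image by a = sqrt(beta / M00), so that the
    zeroth order moment of the result is approximately beta."""
    a = np.sqrt(beta / img.sum())
    return ndimage.zoom(img, a, order=1)   # zooming by a scales the area, hence M00, by about a^2
```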

  12. Rotation Invariance • The Zernike moment of order n with repetition m for an image f(x, y) is defined as: Znm = (n+1)/π · Σx Σy f(x, y) · V*nm(ρ, θ), taken over the unit disk x² + y² ≤ 1, where Vnm(ρ, θ) = Rnm(ρ) · e^(jmθ). • Here, n represents the order and m the repetition; |m| ≤ n and n − |m| must be even. • For n = 5, the valid values of m are: −5, −3, −1, 1, 3 and 5.
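One common discrete approximation of this definition is sketched below; the mapping of the pixel grid onto the unit disk is an implementation choice, not something specified in the slides:

```python
import numpy as np
from math import factorial

def radial_poly(rho, n, m):
    """Zernike radial polynomial R_nm(rho); requires |m| <= n and n - |m| even."""
    m = abs(m)
    R = np.zeros_like(rho)
    for s in range((n - m) // 2 + 1):
        c = ((-1) ** s * factorial(n - s)
             / (factorial(s) * factorial((n + m) // 2 - s) * factorial((n - m) // 2 - s)))
        R = R + c * rho ** (n - 2 * s)
    return R

def zernike_moment(img, n, m):
    """Znm = (n + 1)/pi * sum_{x^2 + y^2 <= 1} f(x, y) * conj(Vnm), with
    Vnm(rho, theta) = R_nm(rho) * exp(j * m * theta)."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    x = (2.0 * xs - w + 1) / (w - 1)   # map the pixel grid onto [-1, 1] x [-1, 1]
    y = (2.0 * ys - h + 1) / (h - 1)
    rho, theta = np.sqrt(x ** 2 + y ** 2), np.arctan2(y, x)
    inside = rho <= 1.0
    V = radial_poly(rho, n, m) * np.exp(1j * m * theta)
    return (n + 1) / np.pi * np.sum(img[inside] * np.conj(V[inside]))
```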

  13. Rotation Invariance • Now suppose the image is rotated by an angle φ. The Zernike moment of the rotated image becomes Z’nm = Znm · e^(−jmφ), so |Z’nm| = |Znm|. • Thus, |Znm| can be taken as a rotation-invariant feature of the underlying image function.
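A quick numerical check of this property, reusing the zernike_moment sketch above (the toy image, angle, and order are arbitrary choices):

```python
import numpy as np
from scipy import ndimage

img = np.zeros((64, 64)); img[20:44, 28:36] = 1.0          # toy binary pattern
rot = ndimage.rotate(img, 30.0, reshape=False, order=1)    # rotate by phi = 30 degrees

print(abs(zernike_moment(img, 4, 2)), abs(zernike_moment(rot, 4, 2)))
# The two magnitudes agree up to interpolation and discretisation error.
```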

  14. Feature Extraction • Binarize the image according to some threshold. • Normalize it to make it translation and scale invariant, giving g(x, y). • Calculate the Zernike moments of g(x, y) from 2nd order up to nth order (the 0th order moment equals β/π and the 1st order moments equal 0 for every image after scale and translation normalization, so they carry no information).
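Putting the three steps together, a hedged sketch of the feature extraction pipeline, reusing the helpers from the earlier sketches (the threshold value is an assumption):

```python
import numpy as np

def extract_features(gray, threshold=128, max_order=12):
    """Binarize, normalize for translation and scale, then collect |Znm| for
    orders 2..max_order as the feature vector."""
    binary = (gray > threshold).astype(float)
    g = scale_to_beta(translate_to_centroid(binary))   # helpers from the earlier sketches
    feats = []
    for n in range(2, max_order + 1):
        for m in range(n % 2, n + 1, 2):               # m >= 0 only, since |Z(n,-m)| = |Z(n,m)|
            feats.append(abs(zernike_moment(g, n, m)))
    return np.array(feats)                             # 47 features for max_order = 12
```

With max_order = 12 this gives the 47-element feature vector used in Experiment I below.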

  15. Classification • It is done using a multi-layer neural network. • In this case, we used only one hidden layer. • Back-propagation of error is used for learning.

  16. Classifier Details • The activation function used at the output layer and the hidden layer is the symmetric sigmoid function. • The standard sigmoid function is f(x) = 1 / (1 + e^(−x)).

  17. Classifier Details • The symmetric sigmoid is simply the sigmoid stretched so that its range spans 2 and then shifted down by 1, so that it runs between −1 and 1. • If f(x) is the standard sigmoid, then the symmetric sigmoid is g(x) = 2·f(x) − 1, which is symmetric about the origin.
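In code, the two activations and the derivative used later in back propagation look like this (standard definitions, written out for reference):

```python
import numpy as np

def sigmoid(x):
    """Standard logistic sigmoid f(x) = 1 / (1 + exp(-x)), output in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

def symmetric_sigmoid(x):
    """g(x) = 2*f(x) - 1, output in (-1, 1), symmetric about the origin."""
    return 2.0 * sigmoid(x) - 1.0

def symmetric_sigmoid_deriv(g_of_x):
    """Derivative written in terms of the output: g'(x) = (1 - g(x)^2) / 2."""
    return 0.5 * (1.0 - g_of_x ** 2)
```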

  18. Classifier Details • 26 nodes in the output layer, one per letter. • The number of hidden layer nodes can be varied. • The number of input layer nodes equals the length of the feature vector.
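A sketch of this architecture (the 47/50/26 sizes come from the experiments later in the slides; the weight initialisation range is an assumption):

```python
import numpy as np

def symmetric_sigmoid(x):
    return 2.0 / (1.0 + np.exp(-x)) - 1.0

n_inputs, n_hidden, n_outputs = 47, 50, 26    # feature length, hidden nodes, one output per letter
rng = np.random.default_rng(0)
W1 = rng.uniform(-0.1, 0.1, (n_hidden, n_inputs))     # input  -> hidden weights
W2 = rng.uniform(-0.1, 0.1, (n_outputs, n_hidden))    # hidden -> output weights

def forward(x):
    """Forward pass; the predicted letter is the index of the largest output."""
    h = symmetric_sigmoid(W1 @ x)
    return symmetric_sigmoid(W2 @ h)
```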

  19. Training of the Classifier • We have used the Back Propagation Algorithm. • Initialize all Wij’s to small random values. • Present an input from class m and specify the desired output: −1 for all output nodes except the mth node, which is 1. • Calculate the actual outputs of all the nodes using the present values of the Wij’s. • This is done by passing the total input at each node through the symmetric sigmoid function.

  20. Training of the Classifier • Find an error term, δj, for every node. • If dj and yj stand for the desired and actual outputs of a node, then for an output node δj = (dj − yj) · g’(netj), • and for a hidden layer node δj = g’(netj) · Σk δk · Wkj, • where g’ is the derivative of the activation function, netj is the total input to node j, and k runs over all nodes in the layer above node j.

  21. Training of the Classifier • Now, we adjust the weights by Wij(n+1) = Wij(n) + α · δj · yi + ζ · (Wij(n) − Wij(n−1)), • where (n+1), (n), and (n−1) index the next, present, and previous values respectively, yi is the output of node i feeding node j, and α is a learning rate similar to the step size in gradient search algorithms. • ζ is a constant between 0 and 1 which determines the effect of past weight changes on the current direction of movement in weight space. • All the training inputs are presented cyclically until the weights stabilize.
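The whole update, written as one training step; the learning rate and momentum values below are illustrative placeholders, not taken from the slides:

```python
import numpy as np

def symmetric_sigmoid(x):
    return 2.0 / (1.0 + np.exp(-x)) - 1.0

def train_step(x, d, W1, W2, dW1_prev, dW2_prev, alpha=0.1, zeta=0.5):
    """One presentation of input x with target d (+1 for the true class, -1 elsewhere)."""
    h = symmetric_sigmoid(W1 @ x)                        # forward pass
    y = symmetric_sigmoid(W2 @ h)
    # error terms, using g'(net) = (1 - output^2) / 2 for the symmetric sigmoid
    delta_out = (d - y) * 0.5 * (1.0 - y ** 2)
    delta_hid = (W2.T @ delta_out) * 0.5 * (1.0 - h ** 2)
    # weight change = alpha * delta_j * input_i + zeta * (previous weight change)
    dW2 = alpha * np.outer(delta_out, h) + zeta * dW2_prev
    dW1 = alpha * np.outer(delta_hid, x) + zeta * dW1_prev
    return W1 + dW1, W2 + dW2, dW1, dW2
```

Presenting the training inputs cyclically and repeating this step until the weight changes become negligible corresponds to the stopping rule on the slide.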

  22. Experimentation • We trained the classifier using 4 images of each of the letters of the alphabet. • A sample of training data: • Zernike moments from 2nd to nth order were calculated for all images and treated as feature vectors.

  23. Testing • Similar kinds of images, with translation, scale and rotation imbalances, were used for testing. • Some images used in testing: • A total of 104 images, 4 for each letter of the alphabet with various scale, translation and rotation imbalances, were used as testing data.

  24. Results • EXPERIMENT – I • Keeping the number of features fixed at 47 (2nd to 12th order Zernike moments), results for varying the number of hidden layer nodes.

  25. Results

  26. Results • EXPERIMENT – II • Keeping the number of hidden nodes fixed at 50 and varying the length of the feature vector.

  27. Results

  28. Inferences • Good classification can be achieved even with a single hidden layer. • Using very few hidden layer nodes may lead to a very bad classifier. • Beyond a limit, performance saturates as the number of hidden layer nodes is increased. • For very good classification, one must have a sufficient number of features. • Using too many features does not guarantee a better classifier.
