
Chapter 3 cont’d.


Presentation Transcript


  1. Chapter 3 cont’d. Adjacency, Histograms, & Thresholding

  2. RAGs (Region Adjacency Graphs)

  3. RAGs (Region Adjacency Graphs) Steps: • label image • scan and enter adjacencies in graph (includes containment)
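The two steps above (label image, then scan and enter adjacencies) can be sketched as a single pass over a label image. The function name, the 4-adjacency choice, and the edge-set representation are my assumptions, and the containment case is omitted for brevity:

```cpp
#include <algorithm>
#include <set>
#include <utility>
#include <vector>

// Sketch: build a region adjacency graph from a label image by scanning
// each pixel and recording its label against the labels of its right and
// lower neighbors (4-adjacency). Edges are stored as ordered label pairs.
std::set<std::pair<int,int>> buildRAG(const std::vector<std::vector<int>>& labels) {
    std::set<std::pair<int,int>> edges;
    int rows = labels.size(), cols = rows ? labels[0].size() : 0;
    for (int r = 0; r < rows; ++r)
        for (int c = 0; c < cols; ++c) {
            int a = labels[r][c];
            if (c + 1 < cols && labels[r][c+1] != a)   // right neighbor
                edges.insert({std::min(a, labels[r][c+1]),
                              std::max(a, labels[r][c+1])});
            if (r + 1 < rows && labels[r+1][c] != a)   // lower neighbor
                edges.insert({std::min(a, labels[r+1][c]),
                              std::max(a, labels[r+1][c])});
        }
    return edges;
}
```

Checking right and lower neighbors only is enough because every adjacent pair is seen from one side or the other during the scan.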

  4. But how do we obtain binary images?

  5. Histograms & Thresholding

  6. Gray to binary • Thresholding • G → B const int t=200; if (G[r][c]>t) B[r][c]=1; else B[r][c]=0; How do we choose t? • Interactively • Automatically

  7. Gray to binary • Interactively. How? • Automatically. • Many, many, many, …, many methods. • Experimentally (using a priori information). • Supervised/training methods. • Unsupervised • Otsu’s method (among many, many, many, many, … other methods).

  8. Histogram • “Probability” of a given gray value in an image. • h(g) = count of pixels w/ gray value equal to g. • p(g) = h(g) / (w*h) • w*h = # of pixels in entire image • Demo histogram.

  9. Histogram Note: Sometimes we need to group gray values together in our histogram into “bins” or “buckets.” E.g., we have 10 bins in our histogram and 100 possible different gray values. So we put 0..9 into bin 0, 10..19 into bin 1, …
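The binning rule in the example (100 gray values, 10 bins, 0..9 into bin 0, and so on) amounts to integer division by the bin width; the function name is my own:

```cpp
// Sketch: with nvalues possible gray values and nbins buckets,
// gray value g lands in bucket g / (nvalues / nbins).
int binIndex(int g, int nvalues, int nbins) {
    int width = nvalues / nbins;   // e.g. 100 values, 10 bins -> width 10
    return g / width;
}
```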

  10. Histogram

  11. Something is missing here!

  12. Otsu’s method • Automatic thresholding method • automatically picks t given an image histogram • Assume 2 groups are present in the image: • Those that are <=t • Those that are >t

  13. Otsu’s method Best choices for t.

  14. Otsu’s method For every possible t: • Pick a t. • Calculate within group variances • probability of being in group 1 • probability of being in group 2 • determine mean of group 1 • determine mean of group 2 • calculate variance for group 1 • calculate variance for group 2 • calculate weighted sum of group variances and remember which t gave rise to minimum.

  15. Otsu’s method: probability of being in each group

  16. Otsu’s method: mean of individual groups

  17. Otsu’s method: variance of individual groups

  18. Otsu’s method: weighted sum of group variances • Calculate for all t’s and minimize. • Demo Otsu.

  19. Generalized thresholding • Single range of gray values const int t1=200; const int t2=500; if (G[r][c]>t1 && G[r][c]<t2) B[r][c]=1; else B[r][c]=0;

  20. Even more general thresholding • Union of ranges of gray values. const int t1=200, t2=500; const int t3=1200, t4=1500; if (G[r][c]>t1 && G[r][c]<t2) B[r][c]=1; else if (G[r][c]>t3 && G[r][c]<t4) B[r][c]=1; else B[r][c]=0;
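The if/else-if chain above generalizes to any number of ranges by looping over a list of (t_lo, t_hi) pairs; the open intervals match the strict comparisons in the snippets, and the function name and container are my assumptions:

```cpp
#include <utility>
#include <vector>

// Sketch: 1 if gray value g falls in the union of open ranges (lo, hi),
// else 0 -- the general form of the chained range tests above.
int inRanges(int g, const std::vector<std::pair<int,int>>& ranges) {
    for (const auto& r : ranges)
        if (g > r.first && g < r.second) return 1;
    return 0;
}
```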

  21. Something is missing here!

  22. K-Means Clustering • Clustering = the process of partitioning a set of pattern vectors into subsets called clusters. • K = number of clusters (known in advance). • Not an exhaustive search so it may not find the globally optimal solution. • (see section 10.1.1)

  23. Iterative K-Means Clustering Algorithm Form K-means clusters from a set of nD feature vectors. • Set ic=1 (iteration count). • Choose randomly a set of K means m1(1), m2(1), … mK(1). • For each vector xi compute D(xi,mj(ic)) for each j=1,…,K. • Assign xi to the cluster Cj with the nearest mean. • ic = ic+1; update the means to get a new set m1(ic), m2(ic), … mK(ic). • Repeat 3..5 until Cj(ic+1) = Cj(ic) for all j.

  24. K-Means for Optimal Thresholding • What are the features?

  25. K-Means for Optimal Thresholding • What are the features? • Individual pixel gray values

  26. K-Means for Optimal Thresholding • What value for K should be used?

  27. K-Means for Optimal Thresholding • What value for K should be used? • K=2 to be like Otsu’s method.

  28. Iterative K-Means Clustering Algorithm Form 2 clusters from a set of pixel gray values. • Set ic=1 (iteration count). • Choose 2 random gray values as our initial K means, m1(1) and m2(1). • For each pixel gray value xi compute fabs(xi - mj(ic)) for each j=1,2. • Assign xi to the cluster Cj with the nearest mean. • ic = ic+1; update the means to get the new means m1(ic) and m2(ic). • Repeat 3..5 until Cj(ic+1) = Cj(ic) for all j.
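The K=2 gray-value specialization above can be sketched as follows. The initial means are passed in as parameters rather than picked at random, so runs are repeatable; the function name is my assumption, and stopping when the means stop changing is equivalent to stopping when the cluster assignments stabilize:

```cpp
#include <cmath>
#include <utility>
#include <vector>

// Sketch: two-mean clustering of gray values with fabs(xi - mj) as the
// distance. Alternates assignment and mean-update steps until stable.
std::pair<double,double> kmeans2(const std::vector<double>& x,
                                 double m1, double m2) {
    while (true) {
        double s1 = 0, s2 = 0;
        int n1 = 0, n2 = 0;
        for (double xi : x) {                  // assignment step
            if (std::fabs(xi - m1) <= std::fabs(xi - m2)) { s1 += xi; ++n1; }
            else                                           { s2 += xi; ++n2; }
        }
        double nm1 = n1 ? s1 / n1 : m1;        // update step (keep old mean
        double nm2 = n2 ? s2 / n2 : m2;        // if a cluster went empty)
        if (nm1 == m1 && nm2 == m2) break;     // converged
        m1 = nm1; m2 = nm2;
    }
    return {m1, m2};
}
```

The final pair (m1, m2) plays the role of the converged means in the worked example on the next slide; the midpoint between them can serve as the threshold t.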

  29. Iterative K-Means Clustering Algorithm Example. m1(1)=260.83, m2(1)=539.00 m1(2)=39.37, m2(2)=1045.65 m1(3)=52.29, m2(3)=1098.63 m1(4)=54.71, m2(4)=1106.28 m1(5)=55.04, m2(5)=1107.24 m1(6)=55.10, m2(6)=1107.44 m1(7)=55.10, m2(7)=1107.44 . . . Demo.
