
Christine Lew Dheyani Malde Everardo Uribe Yifan Zhang Supervisors: Ernie Esser Yifei Lou






Presentation Transcript


  1. BARCODE RECOGNITION TEAM Christine Lew, Dheyani Malde, Everardo Uribe, Yifan Zhang. Supervisors: Ernie Esser, Yifei Lou

  2. UPC Barcode What type of barcode? What is a barcode? Structure? Our barcode representation? • Vector of 0s and 1s

  3. Mathematical Representation Barcode Distortion Mathematical Representation: What is convolution? • Every value in the blurred signal is the same weighted combination of nearby values in the original signal; the kernel determines those weights. Kernel • For our case, the blur kernel k, or point spread function, is assumed to be a Gaussian. Noise • The noise we deal with is white Gaussian noise.
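The distortion model described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the team's code; the kernel radius and the toy barcode vector are assumptions for the example.

```python
import numpy as np

def gaussian_kernel(sigma, radius=10):
    """Discrete Gaussian point spread function, normalized to sum to 1."""
    t = np.arange(-radius, radius + 1)
    k = np.exp(-t**2 / (2 * sigma**2))
    return k / k.sum()

def blur_and_add_noise(u, sigma_blur, sigma_noise, seed=None):
    """Model b = k * u + n: convolve the clean signal with a Gaussian
    kernel, then add white Gaussian noise."""
    rng = np.random.default_rng(seed)
    k = gaussian_kernel(sigma_blur)
    b = np.convolve(u, k, mode="same")
    return b + sigma_noise * rng.standard_normal(len(u))

# A toy "barcode": a vector of 0s and 1s
u = np.array([0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1] * 5, dtype=float)
b = blur_and_add_noise(u, sigma_blur=0.5, sigma_noise=0.05, seed=0)
```

Increasing `sigma_blur` reproduces the progressively worse distortion shown on the 0.2 / 0.5 / 0.9 standard-deviation slides that follow.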

  4. 0.2 Standard Deviation

  5. 0.5 Standard Deviation

  6. 0.9 Standard Deviation

  7. Deconvolution What is deconvolution? • It is basically solving for the clean barcode signal u. Difference between non-blind deconvolution and blind deconvolution: • Non-blind deconvolution: we know how the signal was blurred, i.e., we assume k is known. • Blind deconvolution: we may know some or no information about how the signal was blurred. Very difficult.

  8. Simple Methods of Deconvolution Thresholding • Converting the signal to a binary signal: seeing whether the amplitude at a specific point is closer to 0 or 1 and rounding to the value it's closer to. Wiener filter • Classical method of reconstructing a signal after it has been distorted, using known knowledge of the kernel and noise.
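Thresholding as described above is a one-liner, and the same rounding idea gives a simple error metric used later in the comparisons. A minimal sketch (the `level=0.5` midpoint is the natural choice for a 0/1 signal, but is an assumption here):

```python
import numpy as np

def threshold(signal, level=0.5):
    """Round each sample to 0 or 1, whichever is closer."""
    return (np.asarray(signal) >= level).astype(int)

def error_rate(estimate, truth):
    """Fraction of bits where the reconstruction differs from ground truth."""
    return float(np.mean(np.asarray(estimate) != np.asarray(truth)))
```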

  9. Wiener Filter We have the observation y = k*x + n. The Wiener filter solves for an estimate of the clean signal x. The filter is easily described in the frequency domain: it defines G = K̄ / (|K|² + r), where r is the noise-to-signal power ratio, and the estimated original signal is given by X̂ = G·Y. Note that if there is no noise, r = 0, and G reduces to the inverse filter 1/K.
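The frequency-domain filter above translates directly into FFT calls. A minimal sketch, assuming the kernel is zero-padded to the signal length and circularly centered at index 0 (real implementations also handle the shift introduced by an off-center kernel):

```python
import numpy as np

def wiener_deconvolve(b, k, r):
    """Wiener deconvolution of a 1-D signal.

    b : blurred, noisy observation (b = k * u + n)
    k : known blur kernel, zero-padded to len(b), centered at index 0
    r : noise-to-signal power ratio (r = 0 gives the inverse filter 1/K)
    """
    n = len(b)
    K = np.fft.fft(k, n)
    G = np.conj(K) / (np.abs(K)**2 + r)   # Wiener filter in the frequency domain
    return np.real(np.fft.ifft(G * np.fft.fft(b)))
```

With `r = 0` and an identity (delta) kernel, the filter returns the input unchanged, which is a quick sanity check.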

  10. 0.7 Standard Deviation, 0.05 Sigma Noise

  11. 0.7 Standard Deviation, 0.2 Sigma Noise

  12. 0.7 Standard Deviation, 0.5 Sigma Noise

  13. Non-blind Deblurring using Yu Mao’s Method By: Christine Lew, Dheyani Malde

  14. Overview • 2 general approaches: • -Yifei (blind: don’t know the blur kernel) • -Yu Mao (non-blind: know the blur kernel) • General goal: • -Taking a blurry barcode with noise and making it as clear as possible through gradient projection. • -Find the method with the best results and least error.

  15. Data Model • The method’s goal is to solve a convex model: • k: blur kernel • u: clear barcode • b: blurry barcode with noise, b = k*u + noise • Find the minimum through gradient projection • Exactly like gradient descent, only we project onto [0,1] every iteration • Once we find the minimizing u, we can predict the clear signal
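The gradient-projection step described above can be sketched as follows. This is an illustration of the general technique (minimize ‖k*u − b‖² with u clipped to [0,1] each iteration), not Yu Mao's exact implementation; the step size and iteration count are assumptions.

```python
import numpy as np

def gradient_projection(b, k, iters=200, step=0.5):
    """Minimize ||k * u - b||^2 over u in [0,1]^n by projected gradient
    descent: gradient step, then projection (clipping) onto [0,1]."""
    u = np.clip(b.copy(), 0.0, 1.0)                         # initial guess
    for _ in range(iters):
        residual = np.convolve(u, k, mode="same") - b       # k*u - b
        grad = np.convolve(residual, k[::-1], mode="same")  # adjoint of blur
        u = np.clip(u - step * grad, 0.0, 1.0)              # project onto [0,1]
    return u
```

The projection is the only difference from plain gradient descent, exactly as the slide says; clipping keeps every iterate a valid (fractional) barcode signal.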

  16. Classical Method • Compare with Wiener Filter in terms of error rate • Error rate: difference between reconstructed signal and ground truth

  17. Comparisons for Yu Mao’s Method Yu Mao’s Gradient Projection Wiener Filter

  18. Comparisons for Yu Mao’s Method (Cont.) Wiener Filter Yu Mao’s Gradient Projection

  19. Jumps • How does the number of jumps affect the result? • What happens if we apply the number of jumps to the different methods of de-blurring? • Compared Yu Mao’s method & the Wiener filter • Wrote code to calculate the number of jumps • 3 levels of jumps: • Easy: 4 jumps • Medium: 22 jumps • Hard: 45 jumps (regular barcode)

  20. What are Jumps • Wrote code to calculate the number of jumps: • Jump: when the binary signal goes from 0 to 1 or 1 to 0 • 3 levels of jumps: • Easy: 4 jumps • Medium: 22 jumps • Hard: 45 jumps • (regular barcode)
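Counting jumps as defined above is a one-liner: count the positions where adjacent bits differ. A minimal sketch (the example vector is illustrative, not one of the team's test barcodes):

```python
import numpy as np

def count_jumps(bits):
    """Count 0->1 and 1->0 transitions in a binary signal."""
    bits = np.asarray(bits)
    return int(np.sum(bits[1:] != bits[:-1]))

count_jumps([0, 1, 1, 0, 1])  # transitions at positions 0->1, 1->0, 0->1
```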

  21. Analyzing Jumps • How does the number of jumps affect the result (clear barcode)? • Compare Yu Mao’s method & the Wiener filter

  22. Comparison for Small Jumps (4 jumps) Yu Mao’s Gradient Projection Wiener Filter

  23. Comparison for Hard Jumps (45 jumps) Yu Mao’s Gradient Projection Wiener Filter

  24. Wiener Filter with Varying Jumps - More jumps, greater error - Gets drastically worse with more jumps

  25. Yu Mao's Gradient Projection with Varying Jumps - More jumps, greater error - Gets only slightly worse with more jumps

  26. Conclusion Yu Mao's method is better overall: it produces less error across the jump cases, with a consistent error rate of 20%-30%. The Wiener filter did not have a consistent error rate: it was consistent only for small/medium jumps; at 45 jumps its error rate was 40%-50%.

  27. Blind Deconvolution Yifan Zhang Everardo Uribe

  28. Derivation of Model We have b = k*u + n. For our approach, we assume that k, the kernel, is a symmetric point-spread function. Since it is symmetric, flipping it produces an equivalent kernel. We flip the entire equation and begin reconfiguration. Y and N are matrix representations.

  29. Derivation of Model Signal Segmentation & Final Equation: • The middle bars are always the same, represented as the vector [0 1 0 1 0] in our case. We have to solve for x in the resulting linear system.

  30. Gradient Projection • Projection of gradient descent (a first-order optimization method) • Advantage: • Allows us to set a range • Disadvantages: • Takes a very long time • Results are not extremely accurate • Underestimates the signal

  31. Least Squares • estimates unknown parameters • minimizes the sum of squared errors • accounts for observational errors

  32. Least Squares (cont.) Advantages: returns results faster than other methods easy to implement reasonably accurate results great results for low and high noise Disadvantage: doesn’t work well when there are errors in Y
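Ordinary least squares for a linear system Y x ≈ b is directly available in NumPy. A minimal sketch on synthetic data (the matrix sizes and noise level are assumptions for the example, not the team's barcode data):

```python
import numpy as np

rng = np.random.default_rng(0)
Y = rng.standard_normal((50, 5))        # data matrix (assumed exact)
x_true = rng.standard_normal(5)         # unknown parameters
b = Y @ x_true + 0.01 * rng.standard_normal(50)  # noisy observations

# Ordinary least squares: minimize ||Y x - b||^2 over x
x_ls, *_ = np.linalg.lstsq(Y, b, rcond=None)
```

Note that only `b` is treated as noisy here; as the slide says, the estimate degrades when `Y` itself contains errors, which is what motivates total least squares next.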

  33. Total Least Squares • Least squares data modeling that also considers errors in Y • Computed via the SVD (singular value decomposition, a matrix factorization) of the augmented matrix C

  34. Total Least Squares (Cont.) Advantages: works on data where other methods do not better than least squares when there are more errors in Y Disadvantages: doesn’t work for most data outside those extremes overfits the data not accurate takes a long time
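The classical total least squares solution comes from the SVD of the augmented matrix C = [Y | b]: the right singular vector for the smallest singular value gives the solution. A minimal sketch of that standard construction (not the team's code):

```python
import numpy as np

def total_least_squares(Y, b):
    """Classical TLS via the SVD of the augmented matrix C = [Y | b].

    Unlike ordinary least squares, TLS also allows for errors in Y:
    it finds the smallest perturbation of [Y | b] making Y x = b consistent.
    """
    n = Y.shape[1]
    C = np.column_stack([Y, b])
    _, _, Vt = np.linalg.svd(C)
    v = Vt[-1]                  # right singular vector of the smallest singular value
    return -v[:n] / v[n]        # x = -v_Y / v_b
```

When the data are exactly consistent, the smallest singular value is zero and TLS recovers the true parameters exactly.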
