
Analysis of the 2006 IPA Proofing Roundup Data


Presentation Transcript


  1. Analysis of the 2006 IPA Proofing Roundup Data William B. Birkett Charles Spontelli CGATS TF1 November, 2006 Mesa, AZ

  2. Mission Statement • TF1 - Objective Color Matching: Development of a method, based on colorimetric measurements, which will estimate the probability that hardcopy images reproduced by single or multiple systems, using identical input, will appear similar to the typical human observer.

  3. Assumptions • Colorimetry works (patches with the same color values appear identical). • Our application of colorimetry is correct. • Visual illusions are insignificant.

  4. Assumptions • Our test targets provide a good sampling of the colors used in images. • Our color spaces are homogeneous - no discontinuities.

  5. Assumptions • Test target data correlates with the color of images. • Test target data correlates with the judgment of human observers (using methods yet to be determined).

  6. Expectations • Two prints will match perfectly if the measured colors of all corresponding patches in the test targets are identical. • The quality level of a match can be gauged by some statistical measure of test target errors.
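
  One candidate for such a statistical measure (stated here for reference; not part of the original slides) is the mean CIE76 color difference over the N target patches:

    \Delta E^*_{ab,i} = \sqrt{(\Delta L^*_i)^2 + (\Delta a^*_i)^2 + (\Delta b^*_i)^2},
    \qquad
    \overline{\Delta E} = \frac{1}{N} \sum_{i=1}^{N} \Delta E^*_{ab,i}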

  7. Question • Is it possible for two prints to match when the measured colors of corresponding patches are different?

  8. Answer That depends on how you define match: • Colorimetric matching requires that all colors are literally identical. • Appearance matching depends on the illusion of differently colored prints appearing the same.

  9. Examples • Reproducing a color transparency on a printed sheet (smaller gamut). • Printing an uncoated paper to match a coated paper (smaller gamut). • Printing a bluish paper to match a neutral paper (white point).

  10. Reinventing the Wheel? • Much work has already been done on appearance matching. • For instance, CIECAM-02 • Can we adapt this work to our needs?
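
  As an illustration of what adapting that work could look like (not from the original slides; the colour-science Python package and all numeric values below are assumptions), CIECAM-02 appearance correlates can be computed for a measured patch under stated viewing conditions:

      # Sketch only: CIECAM02 appearance correlates for one patch, using the
      # open-source "colour-science" package (pip install colour-science).
      # The XYZ values and viewing parameters are hypothetical, not measured data.
      import numpy as np
      import colour

      XYZ   = np.array([19.01, 20.00, 21.78])    # patch tristimulus values (assumed)
      XYZ_w = np.array([96.42, 100.00, 82.49])   # D50 white point, as in a viewing booth
      L_A   = 318.31                             # adapting luminance in cd/m^2 (assumed)
      Y_b   = 20.0                               # relative background luminance (assumed)

      spec = colour.XYZ_to_CIECAM02(XYZ, XYZ_w, L_A, Y_b)
      print(spec.J, spec.C, spec.h)              # lightness, chroma, hue angle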

  11. 2006 IPA Proofing Roundup • Reference press sheets printed with the help of GRACoL experts • Test targets cut from selected press sheets and given to the participants • Proofs made to “match the numbers” of these test targets

  12. 2006 IPA Proofing Roundup • Human judges evaluate the quality of the match to the press sheets, based on the appearance of images and other test elements

  13. 2006 IPA Proofing Roundup • Spectral measurements made of all test targets - press sheets and proofs • Can we correlate these measurements to the scores given by the judges?

  14. Average deltaE? • How about our old favorite, average deltaE? • This has already been tested, but let’s review the data.

  15. Average deltaE? • Again, no useful correlation from this measurement. • Note that the average deltaE is only about 0.7, which is a barely detectable difference in adjacent color patches.
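
  A minimal sketch of how such per-patch differences and their summary statistics might be computed (the Lab arrays below are hypothetical placeholders, not the Roundup data; note that the mean alone can hide a small number of much larger patch errors):

      # Sketch: per-patch CIE76 color difference between two sets of CIELAB
      # measurements, plus statistics beyond the mean. Values are placeholders.
      import numpy as np

      def delta_e_76(lab1, lab2):
          """Euclidean distance in CIELAB (CIE76 deltaE)."""
          return np.sqrt(np.sum((np.asarray(lab1) - np.asarray(lab2)) ** 2, axis=-1))

      press_lab = np.array([[52.0, -3.1, 10.4], [70.2, 15.0, -8.3], [30.5, 0.2, 1.1]])
      proof_lab = np.array([[52.4, -2.8, 10.0], [70.0, 15.6, -8.9], [30.1, 0.5, 1.4]])

      de = delta_e_76(press_lab, proof_lab)
      print("mean deltaE :", de.mean())            # the "average deltaE" of these slides
      print("95th pct    :", np.percentile(de, 95))
      print("max deltaE  :", de.max())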

  16. Does this Make Sense? • Significant differences were reported by the judges, yet the measured data is virtually identical. • This is the same result that has baffled us in previous TF1 studies.

  17. Our Experiment: • Use the measured data to make simulated test prints, and compare those prints with the same judging criteria.

  18. Our Experiment: • We decided to compare the best and the worst scoring proofs. • Vendor 19 (Avg dE = 0.60) (best) • Vendor 35 (Avg dE = 0.54) (worst) • We made ICC profiles from the four datasets using PM 5.0.7

  19. Our Experiment: • Then, we made prints of the IPA test file using an Epson 4800 printer, one for each of the four data sets. The prints were made over a period of about 30 minutes (one after another). We did a nozzle test before and after to ensure consistency.

  20. Our Experiment:

  21. Our Experiment: • The prints were judged by a group of 29 graphic arts students at BGSU. We gave them the very same judging sheet that was used by the IPA. They compared the prints in a D50 standard viewing booth, after an explanation of the judging criteria.

  22. The Results: BGSU’s average scores are virtually identical, with the IPA’s worst match just slightly better than the IPA’s best.

  23. Conclusion These data sets do not contain information indicating that one pair matches better than the other.

  24. Possible Explanations • Our simulation proofs did not represent the data sets accurately enough. • Color sampling of the data sets is too coarse to pick up subtle differences in the proofs.

  25. Possible Explanations • Viewing light was not D50, causing metamerism. • Color gradients in the press sheets created differences between images and data sets.

  26. Possible Explanations • Non-color attributes such as gloss and bronzing account for differences in the judging. • UV/optical brightener effects caused color differences (some measurements used a UV-cut filter while others didn't).

  27. Future Work • More tests to establish the actual cause(s) of color matching differences among the IPA test proofs. • Eliminate as many variables as possible when doing color research.

  28. Recommendations • Nearly perfect colorimetric matching is now routine among proofing systems. • There are other causes of matching failure that need to be considered. • Match quality is not a one-to-one function of average deltaE.

  29. Match Quality vs. Average deltaE [chart: match quality plotted against average deltaE]

  30. Recommendation • Match quality measurement should be built upon a quantitative understanding of appearance matching.

  31. Actions • Investigate the nature of appearance matching as it applies to print/proof comparisons. • Test potential match quality measures for correlation with visual assessments.
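
  One way to run such a correlation test, sketched under assumptions (the metric values and visual scores below are hypothetical placeholders; rank correlation is used because judge scores are ordinal):

      # Sketch: rank correlation between a candidate match-quality metric and
      # averaged visual scores for the same proofs. All numbers are placeholders.
      import numpy as np
      from scipy import stats

      metric_values = np.array([0.4, 0.9, 1.3, 2.1, 3.0])   # candidate metric per proof
      visual_scores = np.array([4.3, 4.0, 3.6, 3.1, 2.2])   # mean judge score per proof

      rho, p_value = stats.spearmanr(metric_values, visual_scores)
      print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")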

  32. Actions • Testing should be done with methods that avoid “unexplainable results.” • Tests should include comparisons of prints that match poorly.

  33. Actions • When functional measures are found, test them outside of TF1. • If outside testing is successful, publish our results.
