Analysis of Color Rendering in Digital Cameras

Details

Document ID: 
70349
Author(s): 
Jason Lisi, Abhay Sharma
Year: 
2007
Pages: 
18

Pricing

Digital, Non-Member: 
$20.00
Photo, Member: 
$15.00
Photo, Non-Member: 
$30.00

Abstract

Digital photography has now replaced the scanner in most prepress workflows. There remains, however, considerable confusion regarding the color processing of images during and after digital capture. Further, there is a need to understand the color transformations that are appropriate for images intended for reproduction on press. Digital camera images originate as raw sensor data that must first undergo demosaicing and then have a secondary set of color rendering operations applied to produce "pleasing" images. When a camera is set to capture JPEG images, the camera performs both of these operations internally; however, if the image is kept in camera raw format, the user is able to manually interpret and adjust the image data rather than have the camera make predetermined generic adjustments and conversions. The camera raw file format is not standardized, and currently no universal method exists to open and render the raw image data. There are different ways to process a raw image. A raw file may be opened using vendor-specific software, such as Canon's File Viewer program, or solutions such as Photoshop's Camera Raw, Apple Aperture, or Adobe Lightroom. In this research we conduct a colorimetric analysis of how camera raw files are rendered by various processing software. For this study, a GretagMacbeth Digital ColorChecker SG chart was shot in both raw and JPEG format using a Canon digital SLR camera in a controlled lighting environment. The JPEG file was used to analyze the in-camera transformation. The raw file was converted to sRGB using four different software solutions: Adobe Camera Raw, Adobe Lightroom, Bibble Pro, and Canon File Viewer. The results were analyzed by comparing the ΔL*, Δa*, and Δb* of the converted images to the colorimetric readings of the physical target.
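The per-patch comparison described above can be sketched as follows. This is a minimal illustration, not the authors' code: it assumes each patch is available as an (L*, a*, b*) triple, and it also reports the CIE76 ΔE*ab (the Euclidean distance in L*a*b* space), which is the conventional summary of the three component differences.

```python
import math

def delta_lab(rendered, target):
    """Per-channel L*a*b* differences between a rendered patch and the
    measured physical target, plus the overall CIE76 delta E*ab.

    rendered, target: (L*, a*, b*) tuples (illustrative values, not the
    paper's measurements).
    """
    dL = rendered[0] - target[0]  # negative dL = rendered darker than target
    da = rendered[1] - target[1]
    db = rendered[2] - target[2]
    dE = math.sqrt(dL ** 2 + da ** 2 + db ** 2)  # CIE76 color difference
    return dL, da, db, dE

# Hypothetical example: a patch rendered slightly darker than measured.
dL, da, db, dE = delta_lab(rendered=(50.0, 0.0, 0.0), target=(52.0, 1.0, -1.0))
```

A consistently negative ΔL* across patches would correspond to the general darkening pattern reported below.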
The transformation results were analyzed in terms of L*a*b* quadrants based on chroma and lightness, and then studied to determine whether common mapping schemas could be identified for each program that would reveal deliberate departures from colorimetric accuracy intended to create an image more pleasing to the human eye. Notably, a general pattern of darkening existed in almost all the transformations: the L* values of the transformed images were generally lower than those of the target. This finding contradicted the original hypothesis that the rendering would be brighter (a higher L* value). Neutral colors showed the least deviation from the physical target when rendered. The project also analyzed the L*a*b* values of the target mapped in 3D L*a*b* space. The 3D analysis revealed an uneven distribution of the target's color patches within the color space, with some quadrants heavily represented and some containing almost no patches at all. Areas with minimal representation on the target provide little data, making colorimetric trends harder to identify. The findings presented in this paper are relevant to pre-media specialists, color scientists, and photographers, as the paper analyzes the differences in rendering algorithms, allowing each type of user to choose an appropriate image-processing solution.
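The quadrant partitioning by chroma and lightness can be sketched as below. The paper does not give its threshold values, so the L* and C*ab split points here are assumptions chosen purely for illustration; chroma is computed in the standard way as C*ab = sqrt(a*² + b*²).

```python
import math

def lab_quadrant(L, a, b, L_split=50.0, C_split=30.0):
    """Assign a patch to a lightness/chroma quadrant of L*a*b* space.

    L_split and C_split are hypothetical thresholds, not values from
    the paper.
    """
    C = math.hypot(a, b)  # chroma C*ab = sqrt(a*^2 + b*^2)
    lightness = "light" if L >= L_split else "dark"
    chroma = "high-chroma" if C >= C_split else "low-chroma"
    return f"{lightness}/{chroma}"

# Hypothetical patches: a saturated light color and a dark near-neutral.
q1 = lab_quadrant(70.0, 40.0, 30.0)
q2 = lab_quadrant(30.0, 5.0, 5.0)
```

Grouping patches this way makes it visible when one quadrant of the target chart is sparsely populated, which is the sampling problem the 3D analysis identifies.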
