
Fast Multi-exposure Image Fusion

Multi-exposure Image Fusion for Static and Dynamic Scenes. 指導教授 (Advisor): 柳金章. 學生 (Student): 彭少宣.




Presentation Transcript


  1. Multi-exposure Image Fusion for Static and Dynamic Scenes
  指導教授 (Advisor): 柳金章
  學生 (Student): 彭少宣

  Part 1: Fast Multi-exposure Image Fusion
  Part 2: Single Image-Based Ghost-Free High Dynamic Range Imaging

  Abstract

  Images taken by low dynamic range (LDR) cameras usually suffer from a lack of detail in under-exposed and over-exposed areas. High dynamic range (HDR) imaging can solve this problem through multi-exposure image fusion. The weighted-sum multi-exposure image fusion method (method 1) consists of two main steps. First, three image features, namely local contrast, brightness, and color dissimilarity, are computed to estimate the weight maps of the input LDR images, and these weight maps are refined by recursive filtering. Then, the fused image is constructed as a weighted sum of the input LDR images. The single-image-based fusion method (method 2) also consists of two main steps: (1) self-generate three histogram-stretched LDR images from a single input image, then suppress the noise that is amplified in the process of histogram stretching; (2) construct the final HDR image from the three processed self-generated LDR images.

  Experimental results of method 1
  Fig. 2. (a)~(e) are 5 input LDR images; (f) is the HDR image (method 1).
  Table 1. Processing times and VIF values of three image sequences.

  Image processing chain

  Method 2 (Fig. 2, the ghost-free HDR approach using a single input image):
  input image → color transform (RGB → HSV) → take the V channel → local histogram separation into over-exposed (OEL), normal-exposed (NEL), and under-exposed (UEL) levels → spatially adaptive denoising of OEL, NEL, and UEL → color transform (HSV → RGB) → image fusion → HDR image

  Method 1 (Fig. 1, the multi-exposure image fusion approach):
  input LDR image sequence → weight estimation → local weight maps → recursive filtering → refined weight maps → weighted sum → fused image
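As a concrete illustration of the method 1 chain, the sketch below estimates per-pixel weights from two of the three features (local contrast and brightness), normalizes them, and forms the weighted sum. It is a minimal sketch rather than the paper's implementation: the function name, the Laplacian-based contrast measure, the Gaussian brightness weight centered at mid-gray, and the parameter values are all assumptions, and the color-dissimilarity feature and recursive-filter refinement are omitted.

```python
import numpy as np

def fuse_exposures(images, sigma=0.2, eps=1e-12):
    """Fuse a list of grayscale LDR images (float arrays in [0, 1])
    by per-pixel weighted sum.

    Sketch only: local contrast and brightness weights, combined by
    multiplication; color dissimilarity (dynamic scenes) is omitted.
    """
    weights = []
    for img in images:
        # Local contrast: magnitude of a discrete Laplacian response.
        lap = np.abs(4.0 * img
                     - np.roll(img, 1, axis=0) - np.roll(img, -1, axis=0)
                     - np.roll(img, 1, axis=1) - np.roll(img, -1, axis=1))
        # Brightness: favor well-exposed pixels near mid-gray (0.5),
        # penalizing under- and over-exposed pixels.
        bright = np.exp(-((img - 0.5) ** 2) / (2.0 * sigma ** 2))
        # Combine the features by multiplication, as in the weight
        # estimation described in the transcript.
        weights.append(lap * bright + eps)
    w = np.stack(weights)
    w /= w.sum(axis=0, keepdims=True)          # normalize per pixel
    return (w * np.stack(images)).sum(axis=0)  # weighted sum of inputs
```

In the full method, the raw weight maps would first be refined by recursive filtering before the weighted sum is taken.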
Local contrast and brightness

When fusing images of static scenes without moving objects, two image features, local contrast and brightness, can be considered for weight estimation. The local contrast feature preserves image details, while the brightness of each pixel is used to decide whether that pixel is under-exposed or over-exposed.

Color dissimilarity

When fusing images of dynamic scenes that contain moving objects, the influence of the moving objects must also be considered for weight estimation. The color dissimilarity feature indicates where the moving objects are.

Weight estimation

To preserve image details and remove the influence of under-exposed pixels, over-exposed pixels, and pixels belonging to moving objects, the three image features (local contrast, brightness, and color dissimilarity) are combined by multiplication for weight estimation.

Local histogram stretching

To generate three LDR images from a single input image, local histogram stretching is first used to estimate an appropriate stretching region, which separates the data into two subsets. Differently exposed LDR images are then acquired by stretching each subset.

Edge-preserving spatially adaptive denoising

During local histogram stretching, noise is amplified together with the brightness levels and consequently degrades the quality of the HDR image. To remove noise while preserving edges, a spatially adaptive denoising algorithm takes detailed high-frequency regions from the noisy LDR image and flat regions from the result of averaging filtering, with an appropriate weighting between the two.

Experimental results of method 2
Fig. 2. (a)~(c) are three images generated by local histogram stretching; (d) is the input image; (e) is the HDR image.
Table 1. Processing times of three image sequences.

Weight refinement and weighted LDR image fusion

As shown in Fig.
1, the weights estimated above are noisy and hard (most weights are either 0 or 1), so they are refined before weighted-sum image fusion. This can be realized by recursive filtering, a real-time edge-preserving smoothing filter. For an efficient implementation, the proposed fusion process uses only simple arithmetic operations on a limited number of images. Bilcu's method first fuses two of the three LDR images, then fuses the result with the third image. This repetitive approach reduces memory usage at the cost of processing time.
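The weight-refinement step can be sketched as a pair of causal and anti-causal recursive passes whose feedback coefficient shrinks across edges of a guide image, in the spirit of a domain-transform recursive filter. This is a hedged sketch, not the paper's implementation: `recursive_smooth`, `sigma_s`, `sigma_r`, and the coefficient formula are illustrative assumptions, and a full filter would alternate horizontal and vertical passes over several iterations.

```python
import numpy as np

def recursive_smooth(weight, guide, sigma_s=8.0, sigma_r=0.1):
    """Edge-preserving recursive smoothing of a noisy weight map,
    guided by an image `guide` (both 2-D float arrays).

    One horizontal causal pass and one anti-causal pass; parameter
    names and the coefficient formula are illustrative assumptions.
    """
    w = weight.astype(np.float64).copy()
    # Feedback coefficient shrinks where the guide image has strong
    # gradients, so smoothing stops at edges instead of crossing them.
    grad = np.abs(np.diff(guide.astype(np.float64), axis=1))
    a = np.exp(-np.sqrt(2.0) / sigma_s * (1.0 + (sigma_s / sigma_r) * grad))
    # Left-to-right causal pass: y[i] = (1 - a) * x[i] + a * y[i-1].
    for i in range(1, w.shape[1]):
        w[:, i] += a[:, i - 1] * (w[:, i - 1] - w[:, i])
    # Right-to-left anti-causal pass for symmetry.
    for i in range(w.shape[1] - 2, -1, -1):
        w[:, i] += a[:, i] * (w[:, i + 1] - w[:, i])
    return w
```

Because each update is a convex combination, the refined weights stay within the range of the input map while hard 0/1 transitions are softened everywhere except at guide-image edges.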
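Method 2's self-generation of three exposure levels from one image, described earlier, can be sketched as follows. This is a simplified global version: the split threshold (the mean of the V channel) and the plain linear stretches are assumptions standing in for the paper's local histogram separation, and the subsequent spatially adaptive denoising is omitted.

```python
import numpy as np

def self_generate_exposures(v):
    """Generate three LDR levels (UEL, NEL, OEL) from one image's
    V channel (float array in [0, 1]) by histogram stretching.

    Sketch only: the split point and the linear stretches are
    simplifying assumptions, not the paper's local method.
    """
    v = np.asarray(v, dtype=np.float64)
    t = v.mean()  # split threshold (assumption: mean brightness)
    # Over-exposed level: stretch the dark subset [0, t] to [0, 1],
    # brightening shadow detail; pixels above t saturate to 1.
    oel = np.clip(v / max(t, 1e-12), 0.0, 1.0)
    # Under-exposed level: stretch the bright subset [t, 1] to [0, 1],
    # recovering highlight detail; pixels below t clip to 0.
    uel = np.clip((v - t) / max(1.0 - t, 1e-12), 0.0, 1.0)
    nel = v  # normal-exposure level: the input itself
    return uel, nel, oel
```

In the full pipeline these three levels would be denoised, converted back from HSV to RGB, and fused into the final HDR image.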
