Ghost-Free Deep High-Dynamic-Range Imaging Using Focus Pixels for Complex Motion Scenes

Sung Min Woo, Je Ho Ryu, Jong Ok Kim

    Research output: Contribution to journal › Article › peer-review

    13 Citations (Scopus)

    Abstract

    Multi-exposure image fusion inevitably causes ghost artifacts owing to inaccurate image registration. In this study, we propose a deep learning technique for the seamless fusion of multi-exposed low dynamic range (LDR) images using a focus pixel sensor. For auto-focusing in mobile cameras, a focus pixel sensor natively provides left (L) and right (R) luminance images simultaneously with a full-resolution RGB image. These L/R images are less saturated than the RGB image because the two are summed to form a normal pixel value in the RGB image of the focus pixel sensor. These two properties of the focus pixel image, namely its relatively short exposure and its perfect alignment, are exploited in this study to provide fusion cues for high dynamic range (HDR) imaging. To minimize fusion artifacts, luminance and chrominance fusion are performed separately in two sub-nets. In a luminance recovery network, two heterogeneous images, the focus pixel image and the corresponding overexposed LDR image, are first fused by joint learning to produce an HDR luminance image. Subsequently, a chrominance network fuses the color components of the misaligned underexposed LDR input to obtain a 3-channel HDR image. Existing deep-neural-network-based HDR fusion methods fuse misaligned multi-exposed inputs directly and therefore suffer from visual artifacts, observed mostly in saturated regions where pixel values are clipped. In contrast, the proposed method first reconstructs the missing luminance from the aligned, unsaturated focus pixel image, and the luma-recovered image then provides cues for accurate color fusion. The experimental results show that the proposed method not only accurately restores fine details in saturated areas, but also produces ghost-free, high-quality HDR images without pre-alignment.
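    The abstract describes a two-stage, two-sub-net design: a luminance recovery network fuses the aligned focus-pixel L/R luma with the overexposed LDR luma, and a chrominance network then fuses the chroma of the underexposed LDR guided by the recovered luminance. The sketch below is only a minimal PyTorch-style illustration of that data flow; all layer widths, depths, tensor layouts, and names are assumptions for illustration, not the authors' actual architecture.

```python
# Minimal sketch of the two-sub-net fusion described in the abstract.
# Layer sizes and module names are hypothetical; only the data flow
# (luma recovery first, chroma fusion second) follows the abstract.
import torch
import torch.nn as nn

class LuminanceNet(nn.Module):
    """Fuses aligned focus-pixel L/R luma with the overexposed LDR luma."""
    def __init__(self, feat=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, feat, 3, padding=1), nn.ReLU(inplace=True),  # in: L, R, overexposed Y
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, 1, 3, padding=1),                         # out: HDR luminance estimate
        )
    def forward(self, fp_lr, y_over):
        return self.body(torch.cat([fp_lr, y_over], dim=1))

class ChrominanceNet(nn.Module):
    """Fuses chroma of the (misaligned) underexposed LDR, guided by recovered luma."""
    def __init__(self, feat=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, feat, 3, padding=1), nn.ReLU(inplace=True),  # in: recovered Y, underexposed Cb/Cr
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, 2, 3, padding=1),                         # out: HDR chrominance estimate
        )
    def forward(self, y_hdr, cbcr_under):
        return self.body(torch.cat([y_hdr, cbcr_under], dim=1))

# Toy forward pass with random tensors (batch of 1, 256x256 frames).
fp_lr = torch.rand(1, 2, 256, 256)       # left/right focus-pixel luminance
y_over = torch.rand(1, 1, 256, 256)      # luma of the overexposed LDR frame
cbcr_under = torch.rand(1, 2, 256, 256)  # chroma of the underexposed LDR frame

y_hdr = LuminanceNet()(fp_lr, y_over)                                        # stage 1: luminance recovery
hdr_ycbcr = torch.cat([y_hdr, ChrominanceNet()(y_hdr, cbcr_under)], dim=1)   # stage 2: color fusion
print(hdr_ycbcr.shape)  # torch.Size([1, 3, 256, 256])
```

    In this reading, stage 1 needs no alignment because the focus-pixel image is captured on the same sensor as the RGB frame, and stage 2 only borrows color from the underexposed frame, which is why ghosting from misregistration is confined to chroma rather than structure.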

    Original language: English
    Article number: 9429936
    Pages (from-to): 5001-5016
    Number of pages: 16
    Journal: IEEE Transactions on Image Processing
    Volume: 30
    DOIs
    Publication status: Published - 2021

    Bibliographical note

    Funding Information:
    Manuscript received August 26, 2020; revised March 9, 2021; accepted April 17, 2021. Date of publication May 12, 2021; date of current version May 18, 2021. This work was supported in part by the National Research Foundation of Korea (NRF) Grant funded by the Korean Government (MSIT) under Grant 2019R1A2C1005834 and in part by the Ministry of Science and ICT (MSIT), South Korea, through the Information Technology Research Center (ITRC) Support Program supervised by the Institute of Information and Communications Technology Planning and Evaluation (IITP), under Grant IITP-2021-2020-0-01749. The associate editor coordinating the review of this manuscript and approving it for publication was Dr. Marta Mrak. (Corresponding author: Jong-Ok Kim.) Sung-Min Woo is with the School of Electrical, Electronics and Communication Engineering, Korea University of Technology and Education, Cheonan 31253, South Korea (e-mail: [email protected]).

    Publisher Copyright:
    © 1992-2012 IEEE.

    Keywords

    • Disparity
    • focus pixel
    • ghost-free imaging
    • high dynamic range
    • joint learning
    • saturation recovery

    ASJC Scopus subject areas

    • Software
    • Computer Graphics and Computer-Aided Design
