A Colorization Framework for Monochrome-Color Dual-Lens Systems Using a Deep Convolutional Network

Research output: Contribution to journal › Article › peer-review

Abstract

In monochrome-color dual-lens systems, the monochrome camera can capture images of higher quality than the color camera. To obtain high-quality color images, an effective approach is to colorize the gray images captured by the monochrome camera, using the color images from the color camera as a reference. However, the colorization may fail in some cases, which makes estimating the colorization quality a necessary step before outputting the colorization result. To address these problems, we propose a framework based on a deep convolutional network. 1) In the colorization module, the proposed colorization CNN uses deep feature representations, an attention operation, 3-D regulation and color correction to exploit the colors of multiple pixels in the reference image when colorizing each pixel of the input gray image. 2) In the colorization quality estimation module, based on the symmetry property of colorization, we propose to apply the colorization CNN a second time: the gray map of the original reference color image is colorized using the first-pass colorization result as reference. The quality loss of this second-pass result then serves as an estimate of the colorization quality. Experimental results show that our method largely outperforms state-of-the-art colorization methods and also estimates the colorization quality accurately.
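The symmetric quality check described in the abstract can be illustrated with a minimal sketch. The `colorize` function below is a deliberately simple toy (nearest-intensity color transfer), standing in for the paper's colorization CNN; all function names and the toy transfer rule are assumptions for illustration, not the authors' implementation.

```python
def to_gray(img):
    """Reduce an RGB image (nested lists of 3-tuples) to a gray map by channel averaging."""
    return [[sum(px) / 3.0 for px in row] for row in img]

def colorize(gray, ref):
    """Toy stand-in for the colorization CNN: for each gray pixel, borrow the
    chroma of the reference pixel with the closest intensity, rescaled to the
    target gray level."""
    ref_px = [(sum(p) / 3.0, p) for row in ref for p in row]
    out = []
    for row in gray:
        orow = []
        for g in row:
            _, p = min(ref_px, key=lambda t: abs(t[0] - g))
            m = sum(p) / 3.0 or 1.0
            orow.append(tuple(c * g / m for c in p))
        out.append(orow)
    return out

def quality_loss(a, b):
    """Mean absolute per-channel difference between two color images."""
    total, n = 0.0, 0
    for ra, rb in zip(a, b):
        for pa, pb in zip(ra, rb):
            total += sum(abs(x - y) for x, y in zip(pa, pb))
            n += 3
    return total / n

def estimate_colorization_quality(gray, ref):
    """Two-pass symmetric scheme from the abstract:
    1) colorize the input gray image with `ref` as reference;
    2) colorize the gray map of `ref` using the first-pass result as reference;
    3) the loss between the second-pass result and the original reference
       estimates how well the first pass worked."""
    first = colorize(gray, ref)
    second = colorize(to_gray(ref), first)
    return first, quality_loss(second, ref)
```

With a well-matched gray/reference pair the second pass reconstructs the reference closely (low loss); with a mismatched pair the reconstruction degrades, signaling a likely colorization failure.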

Original language: English
Pages (from-to): 1469-1485
Number of pages: 17
Journal: IEEE Transactions on Visualization and Computer Graphics
Volume: 28
Issue number: 3
DOIs
State: Published - 1 Mar 2022

Keywords

  • Cameras
  • Color
  • Estimation
  • Feature extraction
  • Image color analysis
  • Lenses
  • Training

