
Autoencoder-based training for multi-illuminant color constancy


Abstract

Color constancy is an essential component of the human visual system: it enables us to discern the color of objects invariant to the illumination that is present. This ability is difficult to reproduce in software, as the underlying problem is ill-posed, i.e., for each pixel in the image we know only the RGB values, which are a product of the spectral characteristics of the illumination, the reflectance of objects, and the sensitivity of the sensor. To resolve this ambiguity, additional assumptions about the scene have to be made. These assumptions can be either handcrafted or learned using some deep learning technique; nonetheless, they mostly work only for single-illuminant images. In this work, we propose a method for learning these assumptions for multi-illuminant scenes using an autoencoder trained to reconstruct the original image by splitting it into its illumination and reflectance components. We then show that the estimate can be used as is, or alongside a clustering method to create a segmentation map of illuminants. Our method performs best among all tested methods on multi-illuminant scenes while being completely invariant to the number of illuminants.
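The two ideas the abstract describes can be sketched in a few lines. The sketch below is not the authors' network: it assumes a diagonal (von Kries) image-formation model, image = illumination ⊙ reflectance, and uses plain k-means in place of whatever clustering procedure the paper employs, applied to a synthetic two-illuminant scene. The function name `kmeans_segment` and all parameters are illustrative.

```python
import numpy as np

def kmeans_segment(illum, k=2, iters=20):
    """Cluster per-pixel illuminant estimates into k illuminant regions.

    illum: (H, W, 3) array of per-pixel illuminant estimates.
    Returns an (H, W) integer segmentation map.
    """
    h, w, _ = illum.shape
    pts = illum.reshape(-1, 3).astype(float)
    # Farthest-point initialization keeps this toy deterministic.
    centers = [pts[0]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(pts - c, axis=1) for c in centers], axis=0)
        centers.append(pts[d.argmax()])
    centers = np.array(centers)
    for _ in range(iters):
        # Assign each pixel to its nearest center, then re-estimate centers.
        d = np.linalg.norm(pts[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            mask = labels == j
            if mask.any():
                centers[j] = pts[mask].mean(axis=0)
    return labels.reshape(h, w)

# Synthetic two-illuminant scene: warm light on the left, cool on the right.
H, W = 8, 8
illum = np.zeros((H, W, 3))
illum[:, : W // 2] = [1.0, 0.8, 0.6]   # warm illuminant
illum[:, W // 2:] = [0.6, 0.8, 1.0]    # cool illuminant

# Diagonal image-formation model: each pixel is illumination * reflectance,
# so dividing the image by the illuminant estimate recovers the reflectance.
reflectance = np.full((H, W, 3), 0.5)  # uniform gray surface
image = illum * reflectance
recovered = image / illum              # equals the true reflectance here

seg = kmeans_segment(illum, k=2)       # per-pixel illuminant segmentation
```

Under the stated diagonal model, `recovered` matches the true reflectance exactly, and the segmentation map splits the scene into its two illuminant regions regardless of how many clusters the scene actually contains.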

© 2022 Optica Publishing Group

More Like This
Edge-moment-based color constancy using illumination-coherent regularized regression

Meng Wu, Kai Luo, Jianjun Dang, and Jun Zhou
J. Opt. Soc. Am. A 32(9) 1707-1716 (2015)

Object-based color constancy in a deep neural network

Hamed Heidari-Gorji and Karl R. Gegenfurtner
J. Opt. Soc. Am. A 40(3) A48-A56 (2023)

Iterative color constancy with temporal filtering for an image sequence with no relative motion between the camera and the scene

Josemar Simão, Hans Jörg Andreas Schneebeli, and Raquel Frizera Vassallo
J. Opt. Soc. Am. A 32(11) 2033-2043 (2015)

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.


Figures (8)


Tables (6)


Equations (5)

