Abstract
Fueled by the powerful learning ability of deep networks, generalized models trained on external datasets have been proposed for single-image super-resolution. However, a model trained only on external data may have difficulty super-resolving images from a domain that differs from the training data. To address this drawback, several internal learning approaches have been proposed that learn the weights of the network in accordance with the test image. Despite these attempts to adapt to specific images through internal learning, they suffer from poor performance due to a lack of flexibility, since a fixed architecture is used regardless of the image domain. We thus propose a novel training process that combines external and internal learning. Our internal learning process finds a suitable network architecture and trains the weights for each unseen test image. The overall training process allows the network to obtain knowledge from external data and internal learning in a balanced manner. In blind and non-blind experiments, our proposed method outperforms state-of-the-art super-resolution algorithms on various image domains with different kernels. Our proposed approach obtains impressive results in terms of expressing detailed texture and accurate color in images from various domains.
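The abstract describes per-image internal learning combined with a per-image architecture search, but gives no implementation details. The following is a minimal, hypothetical sketch of that idea under my own assumptions: ZSSR-style self-supervision (training on a downscaled copy of the test image) plus a toy search over a few candidate depths and widths. All names (`build_sr_net`, `internal_learning_sr`, the candidate list) are illustrative and do not reflect the authors' actual method or code.

```python
# Hypothetical sketch: test-time internal learning with a crude
# per-image architecture search (not the paper's implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F


def build_sr_net(depth: int, width: int) -> nn.Module:
    """Small CNN; depth/width are the searched architecture knobs."""
    layers = [nn.Conv2d(3, width, 3, padding=1), nn.ReLU(inplace=True)]
    for _ in range(depth - 2):
        layers += [nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True)]
    layers += [nn.Conv2d(width, 3, 3, padding=1)]
    return nn.Sequential(*layers)


def internal_learning_sr(lr: torch.Tensor, scale: int = 2, steps: int = 200):
    """lr: (1, 3, H, W) test image in [0, 1]. Returns the super-resolved image."""
    # Self-supervised pair: downscale the test image and learn to recover it.
    son = F.interpolate(lr, scale_factor=1 / scale, mode="bicubic",
                        align_corners=False)
    son_up = F.interpolate(son, size=lr.shape[-2:], mode="bicubic",
                           align_corners=False)

    best_net, best_loss = None, float("inf")
    for depth, width in [(4, 32), (6, 48), (8, 64)]:   # toy search space
        net = build_sr_net(depth, width)
        opt = torch.optim.Adam(net.parameters(), lr=1e-3)
        for _ in range(steps):
            opt.zero_grad()
            # Predict the residual between the bicubic upsample and the target.
            loss = F.l1_loss(son_up + net(son_up), lr)
            loss.backward()
            opt.step()
        if loss.item() < best_loss:                     # keep best architecture
            best_loss, best_net = loss.item(), net

    # Apply the selected, freshly trained network to the original LR input.
    lr_up = F.interpolate(lr, scale_factor=scale, mode="bicubic",
                          align_corners=False)
    with torch.no_grad():
        return (lr_up + best_net(lr_up)).clamp(0, 1)
```

In this sketch the architecture search is an exhaustive loop over three candidates; the paper's neural architecture search is presumably more principled, and a full reproduction would also incorporate the externally pretrained weights the abstract refers to.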
Original language | English |
---|---|
Pages (from-to) | 59-68 |
Number of pages | 10 |
Journal | Neurocomputing |
Volume | 524 |
DOIs | |
Publication status | Published - 2023 Mar 1 |
Bibliographical note
Funding Information: This work was supported by the Major Project of the Korea Institute of Civil Engineering and Building Technology (KICT) [grant number 20220238-001].
Publisher Copyright:
© 2022 Elsevier B.V.
Keywords
- Internal learning
- Neural architecture search
- Single image super-resolution
ASJC Scopus subject areas
- Computer Science Applications
- Cognitive Neuroscience
- Artificial Intelligence