TY - JOUR
T1 - Deep Neural Networks for No-Reference and Full-Reference Image Quality Assessment
AU - Bosse, Sebastian
AU - Maniry, Dominique
AU - Müller, Klaus-Robert
AU - Wiegand, Thomas
AU - Samek, Wojciech
N1 - Funding Information:
This work was supported in part by the German Ministry for Education and Research as Berlin Big Data Center under Grant 01IS14013A, in part by the Institute for Information and Communications Technology Promotion through the Korea Government under Grant 2017-0-00451, and in part by DFG. The work of K.-R. Müller was supported by the National Research Foundation of Korea through the Ministry of Education, Science, and Technology in the BK21 Program. The associate editor coordinating the review of this manuscript and approving it for publication was Dr. Kalpana Seshadrinathan.
Publisher Copyright:
© 2017 IEEE.
PY - 2018/1
Y1 - 2018/1
N2 - We present a deep neural network-based approach to image quality assessment (IQA). The network is trained end-to-end and comprises ten convolutional layers and five pooling layers for feature extraction, and two fully connected layers for regression, which makes it significantly deeper than related IQA models. Unique features of the proposed architecture are that: 1) with slight adaptations it can be used in a no-reference (NR) as well as in a full-reference (FR) IQA setting and 2) it allows for joint learning of local quality and local weights, i.e., relative importance of local quality to the global quality estimate, in a unified framework. Our approach is purely data-driven and does not rely on hand-crafted features or other types of prior domain knowledge about the human visual system or image statistics. We evaluate the proposed approach on the LIVE, CSIQ, and TID2013 databases as well as the LIVE In the Wild Image Quality Challenge Database and show superior performance to state-of-the-art NR and FR IQA methods. Finally, cross-database evaluation shows a high ability to generalize between different databases, indicating a high robustness of the learned features.
AB - We present a deep neural network-based approach to image quality assessment (IQA). The network is trained end-to-end and comprises ten convolutional layers and five pooling layers for feature extraction, and two fully connected layers for regression, which makes it significantly deeper than related IQA models. Unique features of the proposed architecture are that: 1) with slight adaptations it can be used in a no-reference (NR) as well as in a full-reference (FR) IQA setting and 2) it allows for joint learning of local quality and local weights, i.e., relative importance of local quality to the global quality estimate, in a unified framework. Our approach is purely data-driven and does not rely on hand-crafted features or other types of prior domain knowledge about the human visual system or image statistics. We evaluate the proposed approach on the LIVE, CSIQ, and TID2013 databases as well as the LIVE In the Wild Image Quality Challenge Database and show superior performance to state-of-the-art NR and FR IQA methods. Finally, cross-database evaluation shows a high ability to generalize between different databases, indicating a high robustness of the learned features.
KW - Full-reference image quality assessment
KW - deep learning
KW - feature extraction
KW - neural networks
KW - no-reference image quality assessment
KW - quality pooling
KW - regression
UR - http://www.scopus.com/inward/record.url?scp=85031920409&partnerID=8YFLogxK
U2 - 10.1109/TIP.2017.2760518
DO - 10.1109/TIP.2017.2760518
M3 - Article
C2 - 29028191
AN - SCOPUS:85031920409
SN - 1057-7149
VL - 27
SP - 206
EP - 219
JO - IEEE Transactions on Image Processing
JF - IEEE Transactions on Image Processing
IS - 1
M1 - 8063957
ER -