[TMM 2019] Naturalness-Aware Deep No-Reference Image Quality Assessment

Posted by: Dai Liubo | Posted: 2021-12-22 | Views: 273

Authors:

Bo Yan; Bahetiyaer Bare; Weimin Tan


Publication:

This paper appeared in IEEE Transactions on Multimedia, vol. 21, no. 11, pp. 2603–2615, October 2019.


Abstract:

No-reference image quality assessment (NR-IQA) is a non-trivial task because a pristine counterpart is rarely available for an image in real applications such as image selection and high-quality image recommendation. In recent years, deep learning-based NR-IQA methods have emerged and achieved better performance than previous methods. In this paper, we present a novel deep neural network-based multi-task learning approach for NR-IQA. The proposed network is designed in a multi-task learning manner and consists of two tasks: a natural scene statistics (NSS) feature prediction task and a quality score prediction task. NSS feature prediction is an auxiliary task that helps the quality score prediction task learn a better mapping between the input image and its quality score. The main contribution of this work is to integrate the NSS feature prediction task into the deep learning-based image quality prediction task to improve representation ability and generalization ability; to the best of our knowledge, this is the first such attempt. We conduct same-database and cross-database validation experiments on the LIVE, TID2013, CSIQ, LIVE Multiply Distorted Image Quality Database (LIVE MD), CID2013, and LIVE In the Wild Image Quality Challenge (LIVE Challenge) databases to verify the superiority and generalization ability of the proposed method. Experimental results confirm the superior performance of our method in same-database validation; in particular, it achieves 0.984 and 0.986 on the LIVE image quality assessment database in terms of the Pearson linear correlation coefficient (PLCC) and the Spearman rank-order correlation coefficient (SROCC), respectively. Cross-database validation further verifies the strong generalization ability of our method, which gains improvements of up to 21.8% on unseen distortion types.
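The multi-task setup described in the abstract — a shared representation feeding two heads, with the auxiliary NSS-feature loss added to the main quality-score loss — can be sketched roughly as follows. This is a minimal NumPy illustration under stated assumptions: the embedding size, the NSS feature dimension, the linear heads, and the auxiliary weight `lam` are all hypothetical placeholders, not the paper's actual architecture or hyperparameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not from the paper): a shared
# backbone embedding of size 64, an 18-dim NSS feature vector, and a
# scalar quality score.
D_SHARED, D_NSS = 64, 18

# Randomly initialized linear "heads" standing in for the two tasks.
W_nss = rng.normal(scale=0.1, size=(D_SHARED, D_NSS))
W_q = rng.normal(scale=0.1, size=(D_SHARED, 1))

def mse(pred, target):
    """Mean squared error between predictions and targets."""
    return float(np.mean((pred - target) ** 2))

def multitask_loss(shared_feat, nss_target, score_target, lam=0.5):
    """Combined loss: quality-score MSE plus lam-weighted NSS MSE.

    `lam` (the auxiliary-task weight) is an assumed hyperparameter;
    the paper's actual loss weighting may differ.
    """
    nss_pred = shared_feat @ W_nss      # auxiliary head: NSS features
    score_pred = shared_feat @ W_q      # main head: quality score
    return mse(score_pred, score_target) + lam * mse(nss_pred, nss_target)

# Toy batch of 4 images represented by backbone embeddings.
feats = rng.normal(size=(4, D_SHARED))
loss = multitask_loss(feats,
                      rng.normal(size=(4, D_NSS)),   # NSS feature targets
                      rng.normal(size=(4, 1)))       # quality score targets
```

During training, gradients from both terms flow into the shared backbone, which is how the auxiliary NSS task can regularize the representation used for quality prediction.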