Global Journal of Computer Science and Technology, D: Neural & Artificial Intelligence, Volume 22 Issue 1

Figure 7 shows, in this order, the reference image, the distorted image in grayscale, the error map, the reliability map, the perceptual error map, and the sensitivity map. The image is taken from the LIVE IQA dataset.

Fig. 7: Reference Image, Distorted Image (Grayscale), Error Map, Reliability Map, Perceptual Error Map, and Sensitivity Map

vi. Correlation Plot

A correlation plot shows the relationship between two numerical variables, and the correlation coefficient is computed to quantify that relationship. Figure 8 shows the correlation plot of the ground-truth and predicted subjective scores. The ground-truth scores are provided in the dataset for each distorted image, and the DNSSCIQ framework is used to obtain the predicted subjective scores. The plot shows that the subjective scores predicted by DNSSCIQ are close to the ground-truth values.

Fig. 8: Correlation Plot

vii. Loss Graph

Figure 9 shows the loss vs. epoch graph, with mean squared error used as the loss function. The loss decreases as the number of epochs increases during training, and the performance of the model improves as the loss decreases.

Fig. 9: Loss vs. Epoch Graph

VII. Conclusion

A deep CNN-based approach for Non-Screen Content and Screen Content IQA, called DNSSCIQ, is proposed. In DNSSCIQ, input normalization of the distorted images is performed first. Each distorted image, along with its ground-truth subjective score, is then provided to the neural network for training to obtain more meaningful feature maps. Once training is complete, the feature maps are globally average pooled and fed to the fully connected layers to obtain the final subjective score of the distorted image. By training and evaluating on various datasets from different sources, it is shown that DNSSCIQ performs well irrespective of the dataset selected. In addition, a distortion-specific evaluation is carried out on the different datasets and the results are compared.
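The correlation between ground-truth and predicted subjective scores, as plotted in Figure 8, is commonly quantified with the Pearson linear correlation coefficient. A minimal sketch, using made-up score values for illustration:

```python
import numpy as np

def pearson_corr(x, y):
    """Pearson linear correlation coefficient between two score lists."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))

# Hypothetical ground-truth and predicted subjective scores.
ground_truth = [35.0, 48.2, 55.1, 62.7, 70.3, 81.9]
predicted    = [33.8, 50.1, 54.0, 64.2, 69.5, 80.4]

r = pearson_corr(ground_truth, predicted)
print(round(r, 4))  # close to 1.0 for well-correlated predictions
```

A coefficient near 1.0 corresponds to the tight clustering around the diagonal seen in the correlation plot.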
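The mean squared error loss used during training (Figure 9) is the average squared difference between the predicted and ground-truth scores; a minimal sketch with illustrative values:

```python
import numpy as np

def mse_loss(predicted, target):
    """Mean squared error between predicted and ground-truth scores."""
    predicted = np.asarray(predicted, dtype=float)
    target = np.asarray(target, dtype=float)
    return float(np.mean((predicted - target) ** 2))

print(mse_loss([1.0, 2.0, 3.0], [1.0, 2.0, 5.0]))  # → 4/3 ≈ 1.3333
```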
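The scoring head described in the conclusion (global average pooling of the feature maps followed by fully connected layers) can be sketched in plain NumPy; the layer sizes and random weights below are illustrative placeholders, not the trained DNSSCIQ parameters.

```python
import numpy as np

def global_avg_pool(feature_maps):
    """Collapse each (H, W) feature map to its spatial mean.
    feature_maps: array of shape (C, H, W) -> vector of shape (C,)."""
    return feature_maps.mean(axis=(1, 2))

def fc_head(features, w1, b1, w2, b2):
    """Two illustrative fully connected layers mapping the pooled
    feature vector to a single subjective quality score."""
    hidden = np.maximum(0.0, w1 @ features + b1)  # ReLU hidden layer
    return float(w2 @ hidden + b2)                # scalar score

# Illustrative sizes: 64 feature maps of 7x7 and a 16-unit hidden layer.
rng = np.random.default_rng(0)
fmaps = rng.standard_normal((64, 7, 7))           # stand-in CNN features
w1, b1 = rng.standard_normal((16, 64)) * 0.1, np.zeros(16)
w2, b2 = rng.standard_normal(16) * 0.1, 0.0

pooled = global_avg_pool(fmaps)                   # shape (64,)
score = fc_head(pooled, w1, b1, w2, b2)           # predicted score
print(pooled.shape, score)
```

In the actual framework, the pooled vector would come from the trained convolutional layers and the weights from training against the ground-truth subjective scores.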
