[5] Huang J B, Singh A, Ahuja N. Single image super-resolution from transformed self-exemplars[C]//Proceedings of 2015 IEEE Conference on Computer Vision and Pattern Recognition, 2015: 5197–5206.
[7] Dong C, Loy C C, He K M, et al. Learning a deep convolutional network for image super-resolution[C]//Proceedings of the 13th European Conference on Computer Vision, 2014: 184–199.
[8] Kim J, Lee J K, Lee K M. Accurate image super-resolution using very deep convolutional networks[C]//Proceedings of 2016 IEEE Conference on Computer Vision and Pattern Recognition, 2016: 1646–1654.
[9] Hui Z, Gao X B, Yang Y C, et al. Lightweight image super-resolution with information multi-distillation network[C]//Proceedings of the 27th ACM International Conference on Multimedia, 2019: 2024–2032.
[10] Liu S T, Huang D, Wang Y H. Receptive field block net for accurate and fast object detection[C]//Proceedings of the 15th European Conference on Computer Vision, 2018: 404–419.
[11] Dai T, Cai J R, Zhang Y B, et al. Second-order attention network for single image super-resolution[C]//Proceedings of 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019: 11057–11066.
[12] Mei Y Q, Fan Y C, Zhou Y Q, et al. Image super-resolution with cross-scale non-local attention and exhaustive self-exemplars mining[C]//Proceedings of 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020: 5689–5698.
[13] Jaderberg M, Simonyan K, Zisserman A, et al. Spatial transformer networks[C]//Proceedings of the 28th International Conference on Neural Information Processing Systems, 2015: 2017–2025.
[14] Hu J, Shen L, Sun G. Squeeze-and-excitation networks[C]//Proceedings of 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2018: 7132–7141.
[15] Zhang Y L, Li K P, Li K, et al. Image super-resolution using very deep residual channel attention networks[C]//Proceedings of the 15th European Conference on Computer Vision, 2018: 294–310.
[16] Woo S, Park J, Lee J Y, et al. CBAM: convolutional block attention module[C]//Proceedings of the 15th European Conference on Computer Vision, 2018: 3–19.
[17] Sun K, Zhao Y, Jiang B R, et al. High-resolution representations for labeling pixels and regions[Z]. arXiv: 1904.04514, 2019. https://arxiv.org/abs/1904.04514.
[18] Newell A, Yang K Y, Deng J. Stacked hourglass networks for human pose estimation[C]//Proceedings of the 14th European Conference on Computer Vision, 2016: 483–499.
[19] Ke T W, Maire M, Yu S X. Multigrid neural architectures[C]//Proceedings of 2017 IEEE Conference on Computer Vision and Pattern Recognition, 2017: 4067–4075.
[20] Chen Y P, Fan H Q, Xu B, et al. Drop an octave: reducing spatial redundancy in convolutional neural networks with octave convolution[C]//Proceedings of 2019 IEEE/CVF International Conference on Computer Vision, 2019: 3434–3443.
[21] Han W, Chang S Y, Liu D, et al. Image super-resolution via dual-state recurrent networks[C]//Proceedings of 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2018: 1654–1663.
[22] Li J C, Fang F M, Mei K F, et al. Multi-scale residual network for image super-resolution[C]//Proceedings of the 15th European Conference on Computer Vision, 2018: 527–542.
[24] Feng R C, Guan W P, Qiao Y, et al. Exploring multi-scale feature propagation and communication for image super resolution[Z]. arXiv: 2008.00239, 2020. https://arxiv.org/abs/2008.00239v2.
[25] Dai J F, Qi H Z, Xiong Y W, et al. Deformable convolutional networks[C]//Proceedings of 2017 IEEE International Conference on Computer Vision, 2017: 764–773.
[26] Zhu X Z, Hu H, Lin S, et al. Deformable ConvNets V2: more deformable, better results[C]//Proceedings of 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019: 9300–9308.
[27] Wang X T, Yu K, Wu S X, et al. ESRGAN: enhanced super-resolution generative adversarial networks[C]//Proceedings of 2018 European Conference on Computer Vision, 2018: 63–79.
[28] Hou Q B, Zhang L, Cheng M M, et al. Strip pooling: rethinking spatial pooling for scene parsing[C]//Proceedings of 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020: 4002–4011.
[29] Agustsson E, Timofte R. NTIRE 2017 challenge on single image super-resolution: dataset and study[C]//Proceedings of 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2017: 1122–1131.
[30] Bevilacqua M, Roumy A, Guillemot C, et al. Low-complexity single-image super-resolution based on nonnegative neighbor embedding[C]//Proceedings of the British Machine Vision Conference, 2012.
[31] Zeyde R, Elad M, Protter M. On single image scale-up using sparse-representations[C]//Proceedings of the 7th International Conference on Curves and Surfaces, 2010: 711–730.
[32] Martin D, Fowlkes C, Tal D, et al. A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics[C]//Proceedings of the Eighth IEEE International Conference on Computer Vision, 2001: 416–423.
[34] Lai W S, Huang J B, Ahuja N, et al. Deep Laplacian pyramid networks for fast and accurate super-resolution[C]//Proceedings of 2017 IEEE Conference on Computer Vision and Pattern Recognition, 2017: 5835–5843.
[35] Liu Y Q, Zhang X F, Wang S S, et al. Progressive multi-scale residual network for single image super-resolution[Z]. arXiv: 2007.09552, 2020. https://arxiv.org/abs/2007.09552v3.
[36] Zhang Y L, Tian Y P, Kong Y, et al. Residual dense network for image super-resolution[C]//Proceedings of 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2018: 2472–2481.
[37] He X Y, Mo Z T, Wang P S, et al. ODE-inspired network design for single image super-resolution[C]//Proceedings of 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019: 1732–1741.
[38] Haris M, Shakhnarovich G, Ukita N. Deep back-projection networks for super-resolution[C]//Proceedings of 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2018: 1664–1673.
[39] Lim B, Son S, Kim H, et al. Enhanced deep residual networks for single image super-resolution[C]//Proceedings of 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2017: 1132–1140.