%A ZHOU Peng-cheng,GONG Sheng-rong,ZHONG Shan,BAO Zong-ming,DAI Xing-hua %T Image Semantic Segmentation Based on Deep Feature Fusion %0 Journal Article %D 2020 %J Computer Science %R 10.11896/jsjkx.190100119 %P 126-134 %V 47 %N 2 %U https://www.jsjkx.com/CN/abstract/article_18878.shtml %8 2020-02-15 %X When convolutional networks are used for feature extraction in image semantic segmentation, the repeated combination of max pooling and downsampling reduces the feature resolution and discards context information, so the segmentation result loses sensitivity to object location. Although encoder-decoder networks gradually refine the output through skip connections while restoring resolution, simply summing adjacent features ignores the differences between them and easily leads to local misidentification of objects. To this end, an image semantic segmentation method based on deep feature fusion was proposed. It adopts a network structure in which multiple fully convolutional VGG16 models are combined in parallel, efficiently processes the multi-scale images of a pyramid in parallel with atrous convolutions, extracts multi-level context features, and fuses them layer by layer in a top-down manner to capture as much context information as possible. At the same time, a layer-by-layer label supervision strategy based on an improved loss function serves as auxiliary support, with a dense conditional random field modeling pixel relations at the back end, which eases model training and improves the accuracy of the predicted output. Experimental data show that the proposed algorithm improves the classification of target objects and the localization of spatial details by fusing, layer by layer, deep features that characterize context information at different scales. The experimental results on the PASCAL VOC 2012 and PASCAL CONTEXT datasets show that the proposed method achieves mIoU of 80.5% and 45.93%, respectively. The experimental data demonstrate that deep feature extraction, layer-by-layer feature fusion, and layer-by-layer label supervision in the parallel framework jointly optimize the algorithm architecture. Feature comparison shows that the model captures rich context information and obtains more detailed image semantic features. Compared with similar methods, it has obvious advantages.
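The two core operations the abstract describes, atrous (dilated) convolution for enlarging the receptive field without further downsampling, and top-down fusion of coarse features into finer ones, can be sketched as below. This is a minimal NumPy illustration under stated assumptions (single-channel maps, nearest-neighbour upsampling, element-wise summation); the function names are illustrative and not the paper's actual implementation.

```python
import numpy as np

def atrous_conv2d(x, kernel, rate):
    """Dilated (atrous) 2-D convolution with 'valid' padding.
    Sampling the input with gaps of size `rate` enlarges the
    receptive field without reducing the output resolution
    any further than an ordinary convolution would."""
    kh, kw = kernel.shape
    eff_h = (kh - 1) * rate + 1  # effective kernel extent
    eff_w = (kw - 1) * rate + 1
    H, W = x.shape
    out = np.zeros((H - eff_h + 1, W - eff_w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = x[i:i + eff_h:rate, j:j + eff_w:rate]
            out[i, j] = np.sum(patch * kernel)
    return out

def upsample_nn(x, factor):
    """Nearest-neighbour upsampling used before fusion."""
    return np.repeat(np.repeat(x, factor, axis=0), factor, axis=1)

def topdown_fuse(coarse, fine):
    """Fuse a coarse (low-resolution) feature map into a finer
    one by upsampling it and summing element-wise, as in the
    top-down, layer-by-layer fusion the abstract describes."""
    factor = fine.shape[0] // coarse.shape[0]
    up = upsample_nn(coarse, factor)
    return fine + up[:fine.shape[0], :fine.shape[1]]

# Example: a rate-2 3x3 kernel covers a 5x5 input region.
feat = np.ones((5, 5))
resp = atrous_conv2d(feat, np.ones((3, 3)), rate=2)  # shape (1, 1)

# Fuse a 2x2 coarse map into a 4x4 fine map.
fused = topdown_fuse(np.ones((2, 2)), np.ones((4, 4)))  # 4x4 of 2.0
```

Note that the paper additionally weights the fusion via layer-wise label supervision rather than using a plain sum; this sketch only shows the structural skeleton of the pyramid fusion.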