Improving semantic segmentation with generalized models of local context
Ateş, Hasan Fehmi
Semantic segmentation (i.e., image parsing) aims to annotate each image pixel with its corresponding semantic class label. Spatially consistent labeling of the image requires accurate description and modeling of the local contextual information. Superpixel image parsing methods provide this consistency by carrying out labeling at the superpixel level, based on superpixel features and neighborhood information. In this paper, we develop generalized and flexible contextual models for superpixel neighborhoods in order to improve parsing accuracy. Instead of using a fixed segmentation and neighborhood definition, we explore various contextual models to combine the complementary information available in alternative superpixel segmentations of the same image. Simulation results on two datasets demonstrate significant improvement in parsing accuracy over the baseline approach.
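The core idea of combining alternative superpixel segmentations can be sketched as follows. This is a minimal illustrative example, not the paper's actual method: the function name `combine_segmentations` and the simple additive score fusion are assumptions made here for clarity. Each segmentation assigns every pixel a superpixel index, each superpixel carries a class-score vector, and the pixel-level label is taken as the argmax of the summed scores.

```python
import numpy as np

def combine_segmentations(segmentations, sp_label_scores, num_classes):
    """Fuse per-superpixel class scores from several alternative
    segmentations of the same image into one pixel-level label map.

    segmentations   : list of (H, W) integer arrays; entry s at a pixel
                      means that pixel belongs to superpixel s.
    sp_label_scores : list of (num_superpixels, num_classes) arrays with
                      one class-score vector per superpixel.
    Returns an (H, W) array of predicted class labels.
    """
    h, w = segmentations[0].shape
    pixel_scores = np.zeros((h, w, num_classes))
    for seg, scores in zip(segmentations, sp_label_scores):
        # Fancy indexing broadcasts each superpixel's score vector
        # to every pixel assigned to that superpixel.
        pixel_scores += scores[seg]
    return pixel_scores.argmax(axis=-1)

# Toy 2x2 image with two alternative segmentations:
seg_cols = np.array([[0, 1], [0, 1]])   # superpixels = columns
seg_rows = np.array([[0, 0], [1, 1]])   # superpixels = rows
scores_cols = np.array([[1.0, 0.0], [0.0, 1.0]])
scores_rows = np.array([[0.6, 0.4], [0.4, 0.6]])

labels = combine_segmentations([seg_cols, seg_rows],
                               [scores_cols, scores_rows], num_classes=2)
print(labels.tolist())  # → [[0, 1], [0, 1]]
```

Summing scores across segmentations is only one possible fusion rule; the paper explores more general contextual models, but the example shows why alternative segmentations carry complementary information at the pixel level.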