Title |
Pareto optimized large mask approach for efficient and background humanoid shape removal |
Authors |
Maskeliunas, Rytis ; Damasevicius, Robertas ; Vitkute-Adzgauskiene, Daiva ; Misra, Sanjay |
DOI |
10.1109/ACCESS.2023.3253206 |
Is Part of |
IEEE Access. Piscataway, NJ: IEEE, 2023, vol. 11, p. 33900-33914. ISSN 2169-3536 |
Keywords [eng] |
semantic segmentation ; occlusion-robust network ; human shape extraction ; background person removal ; image inpainting |
Abstract [eng] |
We introduce a deep learning-based methodology for removing unwanted human-like shapes from videos. The method uses Pareto-optimized Generative Adversarial Networks (GANs), which is a novel contribution. The system automatically selects a Region of Interest (ROI) for each humanoid shape and uses a skeleton detection module to determine which humanoid shape to retain. Semantic masks of human-like shapes are created by a semantic-aware, occlusion-robust model with four primary components: feature extraction and local, global, and semantic branches. The global branch encodes occlusion-aware information to make the extracted features resistant to occlusion, while the local branch retrieves fine-grained local characteristics. A modified big mask inpainting approach is employed to remove a person from the image, leveraging fast Fourier convolutions and masks built from polygonal chains and rectangles with unpredictable aspect ratios. The inpainter network takes the input image and the mask and produces an output image with the background humanoid shapes removed. The generator uses an encoder-decoder structure with skip connections to recover spatial information, along with dilated convolution and squeeze-and-excitation blocks to make the regions behind the humanoid shapes consistent with their surroundings. The discriminator penalizes dissimilar structure at the patch scale, and a refiner network captures features around the boundaries of each background humanoid shape. The method is evaluated on two video object segmentation datasets (DAVIS and YouTube-VOS) and a database of 66 distinct video sequences of people behind a desk in an office environment. Efficiency was assessed using the Structural Similarity Index Measure (SSIM), Fréchet Inception Distance (FID), and Learned Perceptual Image Patch Similarity (LPIPS) metrics, and the method showed promising results on the fully automated background person removal task. |
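To make the generator design described in the abstract concrete, the following is a minimal, illustrative PyTorch sketch, not the authors' published code, of two building blocks the abstract names: a squeeze-and-excitation block and a dilated convolution stage applied to a 4-channel image-plus-mask input. All layer sizes, channel counts, and the example mask region are assumptions made for illustration.

# Minimal sketch (assumed names and sizes, not the authors' implementation) of a
# squeeze-and-excitation block and a dilated convolution stage for an inpainting
# generator that receives an RGB image concatenated with a binary humanoid mask.
import torch
import torch.nn as nn

class SqueezeExcitation(nn.Module):
    """Channel re-weighting: global average pool -> two FC layers -> sigmoid gate."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w  # scale each channel by its learned importance

class DilatedSEBlock(nn.Module):
    """Dilated conv enlarges the receptive field over the masked region,
    followed by squeeze-and-excitation re-weighting."""
    def __init__(self, channels: int, dilation: int = 2):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=3,
                              padding=dilation, dilation=dilation)
        self.act = nn.ReLU(inplace=True)
        self.se = SqueezeExcitation(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.se(self.act(self.conv(x)))

if __name__ == "__main__":
    # Inpainter input: RGB image concatenated with a binary background-person mask.
    image = torch.rand(1, 3, 256, 256)
    mask = torch.zeros(1, 1, 256, 256)
    mask[..., 64:192, 96:160] = 1.0          # hypothetical masked humanoid region
    x = torch.cat([image, mask], dim=1)      # 4-channel input
    stem = nn.Conv2d(4, 64, kernel_size=3, padding=1)
    block = DilatedSEBlock(64, dilation=2)
    features = block(stem(x))
    print(features.shape)                    # torch.Size([1, 64, 256, 256])

In a full encoder-decoder generator with skip connections, stages like the one above would sit in the bottleneck and decoder so that the filled-in region stays consistent with its surroundings; the patch-scale discriminator and the boundary refiner mentioned in the abstract are separate networks and are not sketched here.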
Published |
Piscataway, NJ : IEEE |
Type |
Journal article |
Language |
English |
Publication date |
2023 |