Title Giliojo mokymosi modelių sujungimas vaizdams segmentuoti
Translation of Title Deep learning model ensembling for semantic segmentation.
Authors Buškus, Kazimieras
Pages 112
Keywords [eng] semantic segmentation ; deep learning ; ensemble modeling ; knowledge distillation
Abstract [eng] Artificial neural networks support, and in some use cases essentially enable, efficient task solving in various industries, from medicine to autonomous driving and agriculture. In image analysis and semantic segmentation, some of the best-performing algorithms are based on deep convolutional neural networks. Various techniques are used to improve these algorithms' performance, one of them being model ensembling. Such solutions, applied to already resource-intensive models, further increase the required amount of computation. It therefore becomes essential to balance the computational cost of the chosen model against its performance so that its use remains economically viable. For this purpose, one can employ model compression through knowledge distillation. This project investigates semantic segmentation performance using individual models, homogeneous and heterogeneous ensembles, and distilled models on one domain-specific (seabed transect) dataset and three standard semantic segmentation datasets, employing five deep convolutional architectures. According to the results of the experiments carried out in the project, it is concluded that homogeneous ensembles work better for larger datasets with more distinguishable segmentation classes, whereas the distillation methodology is more effective for smaller datasets.
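Note: the following is a minimal PyTorch-style sketch of the two techniques named in the abstract, averaging per-pixel class probabilities across an ensemble of segmentation models and a standard knowledge-distillation loss that mixes soft teacher targets with hard-label cross-entropy. It is not taken from the thesis; all function names, shapes, and hyperparameters are illustrative assumptions.

import torch
import torch.nn.functional as F

def ensemble_segmentation_probs(models, image):
    """Average per-pixel class probabilities from several segmentation models.

    `models` is any iterable of modules mapping an image batch (N, C, H, W)
    to per-pixel class logits (N, K, H, W); names here are illustrative.
    """
    probs = None
    with torch.no_grad():
        for model in models:
            p = F.softmax(model(image), dim=1)  # per-pixel class probabilities
            probs = p if probs is None else probs + p
    return probs / len(models)

def distillation_loss(student_logits, teacher_logits, target_mask,
                      temperature=4.0, alpha=0.5):
    """Standard knowledge-distillation loss: soft-target KL term plus
    hard-label cross-entropy (temperature and alpha are placeholder values).
    """
    t = temperature
    # Soft targets: teacher probabilities and student log-probabilities at raised temperature.
    soft_teacher = F.softmax(teacher_logits / t, dim=1)
    log_soft_student = F.log_softmax(student_logits / t, dim=1)
    kd = F.kl_div(log_soft_student, soft_teacher, reduction="batchmean") * (t * t)
    # Hard targets: ordinary per-pixel cross-entropy against the ground-truth mask (N, H, W).
    ce = F.cross_entropy(student_logits, target_mask)
    return alpha * kd + (1.0 - alpha) * ce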
Dissertation Institution Kauno technologijos universitetas.
Type Master thesis
Language Lithuanian
Publication date 2023