Abstract [eng]
Artificial neural networks support, and in specific use cases essentially enable, efficient task solving across various industries, from medicine to autonomous driving and agriculture. In image analysis and semantic segmentation, some of the most effective algorithms are based on deep convolutional neural networks. Various techniques are employed to improve these algorithms' performance, one of them being model ensembling. Such solutions, applied to already resource-intensive models, further increase the required amount of computation. It therefore becomes essential to find a balance between the chosen model and its performance for the economic activity to remain viable. For this purpose, model compression (distillation) methodology can be employed. This project investigates semantic segmentation performance using individual models, homogeneous and heterogeneous ensembles, and distilled models on one specific (seabed transect) dataset and three standard semantic segmentation datasets, employing five deep convolutional architectures. According to the results of the experiments carried out in the project, homogeneous ensembles work better for larger datasets and more distinguishable segmentation classes, whereas the distillation methodology is more effective for smaller datasets.