Abstract [eng]
In modern data analytics, complex decision-making is not possible without hypothesis testing. Data analysts draw on statistical and a priori information, and they usually begin their research by testing hypotheses about the distribution of the data. Information about the data distribution can be useful in several ways: it can provide insights into the observed process, model parameters can be inferred from the characteristics of the data distribution, and it can help in choosing more specific and computationally efficient methods. For these reasons, and because of its great practical significance, this study examines the problem of testing goodness-of-fit hypotheses. The aim of the work was to create and examine univariate and multivariate goodness-of-fit tests that would be effective under the normality assumption. This work presents a goodness-of-fit hypothesis test based on N-metric theory, which is more powerful than the other most powerful univariate tests for large sample sizes. The thesis also proposes a multivariate goodness-of-fit hypothesis test based on evaluating the difference in distribution densities and applying the inversion formula; this test has significantly higher power than the other most powerful tests for groups of symmetric and mixed distributions. The statistical tests presented in this work can be successfully applied to goodness-of-fit hypothesis testing in real data analysis.