Abstract [eng]
Visual comprehension is an essential, largely unconscious activity in everyday interactions. For individuals affected by Alzheimer's disease (AD), however, understanding visual input is often challenging. Because Alzheimer's disease is defined by impaired memory processing, this study examines the visual cortex of an AD patient before memory processes begin. The study proposes a face-inversion detection system that takes raw EEG data, performs preprocessing, transforms the signals into GASF images, augments them with the RGAN technique, and finally classifies them into two groups according to whether the viewed face is upright or inverted. Transforming the EEG signal into images has been shown to improve classification accuracy and reduce training time. Because the effects of emotion and familiarity on these EEG data were found to be insignificant, the study focused on the effects of color. When comparing the performance of four different classifiers, including ResNet-50, a custom CNN, and EEGNet SSVEP, our proposed CNN, designed for smaller datasets using suitable regularization techniques, produced the best results in both the color and grayscale analyses. Meanwhile, only a quarter of the data injections based on RGAN-generated signals showed weak statistical significance. The findings suggest that extracting EEG features of facial processing is more successful with color images than with grayscale images: the control-subject data achieved an average accuracy of 61.7% in detecting face-inversion processing with color images, but only 57% with grayscale images. Comparing the oldest control subject with the AD patient, the Alzheimer's patient's data showed very low feature-extraction success. This could mean that the disease directly affects the human visual cortex, making it more difficult to recognize objects using memory resources.
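As an illustration of the EEG-to-image step mentioned above, the following is a minimal sketch of the Gramian Angular Summation Field (GASF) transform. It assumes a simple min-max rescaling of the signal to [-1, 1]; the actual preprocessing parameters and window lengths used in the study are not specified here, and the synthetic sine segment merely stands in for a real EEG channel.

```python
import numpy as np

def gasf(signal):
    """Convert a 1-D signal to a Gramian Angular Summation Field image.

    Steps: rescale the series to [-1, 1], map each value to a polar
    angle phi = arccos(x), then build GASF[i, j] = cos(phi_i + phi_j).
    """
    x = np.asarray(signal, dtype=float)
    # Min-max rescale to [-1, 1] so arccos is defined everywhere
    x = 2.0 * (x - x.min()) / (x.max() - x.min()) - 1.0
    x = np.clip(x, -1.0, 1.0)  # guard against floating-point overshoot
    phi = np.arccos(x)
    # Outer sum of angles; the cosine of the sum is the GASF matrix
    return np.cos(phi[:, None] + phi[None, :])

# Hypothetical example: a 64-sample synthetic "EEG" segment
img = gasf(np.sin(np.linspace(0, 4 * np.pi, 64)))
print(img.shape)  # (64, 64)
```

The resulting square matrix can be saved as an image and fed to the CNN classifiers; libraries such as pyts provide an equivalent `GramianAngularField` transformer.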