Title |
Human gait analysis: a sequential framework of lightweight deep learning and improved moth-flame optimization algorithm
Authors |
Khan, Muhammad Attique ; Arshad, Habiba ; Damaševičius, Robertas ; Alqahtani, Abdullah ; Alsubai, Shtwai ; Binbusayyis, Adel ; Nam, Yunyoung ; Kang, Byeong-Gwon |
DOI |
10.1155/2022/8238375 |
Is Part of |
Computational Intelligence and Neuroscience. London: Hindawi, 2022, vol. 2022, art. no. 8238375, p. 1-13. ISSN 1687-5265. eISSN 1687-5273.
Abstract [eng] |
Human gait recognition has emerged as a branch of biometric identification in the last decade, identifying individuals from characteristics such as movement, timing, and clothing; it is also well suited to video surveillance applications. The main issue with traditional techniques is the loss of accuracy and time caused by handcrafted feature extraction and classification. Motivated by advances in deep learning for a variety of applications, particularly video surveillance and biometrics, we propose a lightweight deep learning method for human gait recognition. The proposed method comprises sequential steps: pretrained deep model selection, feature extraction, feature selection, and classification. Two lightweight pretrained models are first fine-tuned by adding new layers and freezing some middle layers. The models are then trained via deep transfer learning, and features are engineered on the fully connected and average-pooling layers. These features are fused using discriminant correlation analysis, and the fused vector is refined with an improved moth-flame optimization algorithm. Finally, the optimal features are classified using an extreme learning machine (ELM). Experiments on two publicly available datasets, CASIA B and TUM GAID, yielded average accuracies of 91.20% and 98.60%, respectively. Compared to recent state-of-the-art techniques, the proposed method is found to be more accurate.
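To make the fine-tuning step concrete, the sketch below shows one way the abstract's recipe could look in code: take a lightweight pretrained backbone, freeze a band of middle layers, attach a new classification head, and read descriptors off the global average-pooling output after training. This is a minimal illustration, not the authors' implementation: MobileNetV3-Small, the class count, and the frozen range are assumptions, and the fully connected descriptors mentioned in the abstract would be taken analogously from the classifier's penultimate layer.

```python
import torch
import torch.nn as nn
import torchvision.models as models

# Lightweight pretrained backbone (MobileNetV3-Small is a stand-in;
# the record does not restate which two models the paper uses).
backbone = models.mobilenet_v3_small(
    weights=models.MobileNet_V3_Small_Weights.DEFAULT
)

# Freeze the middle third of the feature blocks, loosely mirroring the
# "freezing some middle layers" step described in the abstract.
blocks = list(backbone.features.children())
for blk in blocks[len(blocks) // 3 : 2 * len(blocks) // 3]:
    for p in blk.parameters():
        p.requires_grad = False

# Replace the classification head for the gait classes (count is
# hypothetical), then fine-tune with ordinary transfer learning.
num_classes = 10
backbone.classifier[-1] = nn.Linear(
    backbone.classifier[-1].in_features, num_classes
)

# After fine-tuning, engineer features from the average-pooling output.
feature_extractor = nn.Sequential(
    backbone.features, backbone.avgpool, nn.Flatten()
)
feature_extractor.eval()
with torch.no_grad():
    feats = feature_extractor(torch.randn(4, 3, 224, 224))  # shape (4, 576)
```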
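The selection and classification stages can be sketched similarly. The snippet below runs standard moth-flame optimization over binary feature masks, scoring each mask with a basic extreme learning machine and reusing the same ELM for final classification. The paper's specific improvement to MFO and its exact fitness function are not given in this record, so the plain algorithm and a holdout-accuracy fitness stand in for them; mfo_select, elm_train, and the demo data are all hypothetical.

```python
import numpy as np

def elm_train(X, y, n_hidden=100, seed=0):
    """Basic ELM: random hidden layer, output weights via the pseudo-inverse."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)                    # hidden activations
    T = np.eye(y.max() + 1)[y]                # one-hot targets
    return W, b, np.linalg.pinv(H) @ T        # closed-form output weights

def elm_predict(X, model):
    W, b, beta = model
    return np.argmax(np.tanh(X @ W + b) @ beta, axis=1)

def fitness(mask, X, y):
    """Holdout accuracy of an ELM on the selected subset (an assumed fitness)."""
    if mask.sum() == 0:
        return 0.0
    Xs, n = X[:, mask.astype(bool)], len(y) // 2
    model = elm_train(Xs[:n], y[:n])
    return float((elm_predict(Xs[n:], model) == y[n:]).mean())

def mfo_select(X, y, n_moths=20, n_iter=30, seed=1):
    """Plain moth-flame optimization over binary feature masks: moths spiral
    toward flames (best masks so far); sigmoid squashing yields selections."""
    rng = np.random.default_rng(seed)
    dim = X.shape[1]
    pos = rng.uniform(-1, 1, (n_moths, dim))
    for it in range(n_iter):
        masks = (1 / (1 + np.exp(-pos)) > rng.random(pos.shape)).astype(int)
        scores = np.array([fitness(m, X, y) for m in masks])
        flames = pos[np.argsort(-scores)]     # best positions become flames
        n_flames = max(1, round(n_moths - it * (n_moths - 1) / n_iter))
        a = -1 - it / n_iter                  # spiral parameter, -1 -> -2
        for i in range(n_moths):
            f = flames[min(i, n_flames - 1)]
            t = (a - 1) * rng.random(dim) + 1
            pos[i] = np.abs(f - pos[i]) * np.exp(t) * np.cos(2 * np.pi * t) + f
    return (1 / (1 + np.exp(-flames[0])) > 0.5).astype(int)

# Demo on random stand-ins for the fused DCA features and subject labels.
rng = np.random.default_rng(42)
X = rng.standard_normal((200, 64))
y = rng.integers(0, 5, 200)
mask = mfo_select(X, y)
print("selected", mask.sum(), "of", X.shape[1], "features")
clf = elm_train(X[:100, mask.astype(bool)], y[:100])
acc = (elm_predict(X[100:, mask.astype(bool)], clf) == y[100:]).mean()
print("holdout accuracy:", acc)
```

In the paper, X would be the DCA-fused deep feature matrix; random data merely keeps the sketch self-contained.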
Published |
London : Hindawi |
Type |
Journal article |
Language |
English |
Publication date |
2022 |