Author Affiliations
1 College of Computer Science and Information Technology, University of Al-Qadisiyah, Al Diwaniyah, 58001, Iraq
2 Intelligent Technology Innovation Lab, Victoria University, Melbourne, 3011, Australia
Fig. 1. Arrhythmia detection based on Chi-square classification within a medical organization.
Fig. 2. Schematic representation of the entire methodology, including the preprocessing, feature selection, and the Chi-square-based classifier.
Fig. 3. Results of the preprocessing step (a) before and (b) after normalization.
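Fig. 3 shows the signal before and after normalization, but the exact scaling rule is not restated here. A minimal sketch, assuming simple min-max scaling of a beat segment into [0, 1] (the function name and sample amplitudes are illustrative, not taken from the paper):

```python
def min_max_normalize(segment, lo=0.0, hi=1.0):
    """Scale a 1-D ECG beat segment into [lo, hi]."""
    s_min, s_max = min(segment), max(segment)
    if s_max == s_min:                    # flat segment: map everything to lo
        return [lo] * len(segment)
    scale = (hi - lo) / (s_max - s_min)
    return [lo + (x - s_min) * scale for x in segment]

beat = [-0.2, 0.05, 1.1, 0.3, -0.1]       # made-up raw amplitudes (mV)
print(min_max_normalize(beat))            # values now span exactly [0, 1]
```

Min-max scaling keeps the waveform's shape while making amplitudes comparable across records; z-score standardization would be an equally plausible choice.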
Fig. 4. The Chi-PSO (Chi-square classifier with PSO feature selection) algorithm.
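Fig. 4 pairs the Chi-square classifier with PSO-based feature selection. A minimal binary-PSO sketch under common assumptions (sigmoid transfer rule for bit flips; the fitness function, particle counts, and inertia/acceleration weights below are illustrative stand-ins, not the paper's settings):

```python
import math
import random

def binary_pso(score_fn, n_features, n_particles=8, n_iters=20, seed=0):
    """Minimal binary PSO for feature selection: each particle is a 0/1
    mask over features; velocities steer bit-flip probabilities through
    a sigmoid transfer function."""
    rng = random.Random(seed)
    sigmoid = lambda v: 1.0 / (1.0 + math.exp(-v))
    pos = [[rng.randint(0, 1) for _ in range(n_features)]
           for _ in range(n_particles)]
    vel = [[0.0] * n_features for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                   # per-particle best masks
    pbest_score = [score_fn(p) for p in pos]
    g = max(range(n_particles), key=pbest_score.__getitem__)
    gbest, gbest_score = pbest[g][:], pbest_score[g]
    w, c1, c2 = 0.7, 1.5, 1.5                     # inertia, acceleration weights
    for _ in range(n_iters):
        for i in range(n_particles):
            for d in range(n_features):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = 1 if rng.random() < sigmoid(vel[i][d]) else 0
            s = score_fn(pos[i])
            if s > pbest_score[i]:
                pbest[i], pbest_score[i] = pos[i][:], s
                if s > gbest_score:
                    gbest, gbest_score = pos[i][:], s
    return gbest, gbest_score

# Toy fitness: reward masks matching a known "useful features" pattern.
# In the paper's setting, score_fn would be classifier accuracy on the
# features the mask selects.
target = [1, 1, 1, 0, 0, 0]
fitness = lambda mask: sum(1 for m, t in zip(mask, target) if m == t)
best_mask, best_score = binary_pso(fitness, 6)
print(best_mask, best_score)
```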
Fig. 5. Confusion matrix.
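Fig. 5 introduces the confusion matrix from which the precision and recall figures in Tables 4 and 5 are derived. A self-contained sketch of building such a matrix and per-class metrics for the five beat classes (the sample predictions are made up for illustration):

```python
from collections import Counter

def confusion_matrix(y_true, y_pred, labels):
    """Rows = true class, columns = predicted class."""
    counts = Counter(zip(y_true, y_pred))
    return [[counts[(t, p)] for p in labels] for t in labels]

def per_class_metrics(cm, labels):
    """Per-class (precision, recall) from a confusion matrix."""
    out = {}
    for i, lab in enumerate(labels):
        tp = cm[i][i]
        predicted = sum(row[i] for row in cm)   # column sum: predicted as lab
        actual = sum(cm[i])                     # row sum: truly lab
        out[lab] = (tp / predicted if predicted else 0.0,
                    tp / actual if actual else 0.0)
    return out

labels = ["N", "S", "V", "F", "Q"]
y_true = ["N", "N", "S", "V", "F", "Q", "N", "S"]
y_pred = ["N", "S", "S", "V", "F", "Q", "N", "S"]
cm = confusion_matrix(y_true, y_pred, labels)
print(cm)
print(per_class_metrics(cm, labels))
```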
Fig. 6. Confusion matrices without feature selection for (a) KNN, (b) RF, (c) SVM, (d) NB, (e) DT, and (f) our classifier (Chi-square).
Fig. 7. Confusion matrices with PSO feature selection for (a) KNN, (b) RF, (c) SVM, (d) NB, (e) DT, and (f) our classifier (Chi-square).
Fig. 8. Evaluated performance of classifiers (a) without and (b) with PSO feature selection.
| Ref. | Year | Method | Accuracy | Limitation |
|------|------|--------|----------|------------|
| [17] | 2018 | CNN | 81.33% | The model takes a long time to reach only moderate accuracy on the MIT-BIH dataset. |
| [18] | 2016 | GB + SVM | 84.82% | Evaluated on a redundant dataset of 500 records, with only average accuracy across many classes. |
| [19] | 2019 | Best-first selection (BFS) + RF | 85.58% | A restricted dataset of just 500 records was used for 16 classes. |
| [20] | 2022 | RF + SVM | 77.4% | A traditional hybrid ML approach with moderate precision and substantial processing costs. |
| [21] | 2021 | KNN / RF | 89.83% / 90.21% | Requires extensive data preparation (three-phase preprocessing) for moderate results. |
| [22] | 2021 | BiLSTM | 95% | The prolonged training time of a deep LSTM-BiLSTM model increases computational cost. |
| [23] | 2022 | MobileNetV2 + BiLSTM | 91.7% | The model needs a long training period to produce significant outcomes. |
| [24] | 2020 | DNN | 94% | Combining a DNN with a genetic algorithm incurs significant computing expense. |
| [25] | 2022 | Fusion of CNNs | 98.8% | Generalization across scenarios is limited by dependence on a small dataset. |
| [26] | 2019 | DWT + Sparse autoencoder (S-AE) | 96.82% | Under elevated noise levels, the method has limited ability to accurately locate R-peaks. |
| [27] | 2022 | SVM + Deep CNN | 99.2% | Feature selection optimization (identifying the most relevant deep features, optimizing classification performance, and reducing computational cost) was not examined. |
Table 1. Literature overview.
| Class | Description | Included beats | Number of extracted records |
|-------|-------------|----------------|-----------------------------|
| N | Non-ectopic beats | Normal beats, left bundle branch block, right bundle branch block, nodal (junctional) escape beat, and atrial escape beat | 100 |
| S | Supraventricular ectopic beats | Aberrated atrial premature beat, supraventricular premature beat, atrial premature beat, and nodal (junctional) premature beat | 100 |
| V | Ventricular ectopic beats | Ventricular escape beat and premature ventricular contraction | 100 |
| F | Fusion beats | Fusion of ventricular and normal beat | 100 |
| Q | Unknown beats | Paced beat, unclassified beat, and fusion of paced and normal beats | 100 |
Table 2. Details of classes in the dataset.
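Table 2 groups MIT-BIH beat annotations into the five AAMI classes. That grouping can be sketched as a lookup table, assuming the standard PhysioNet single-character annotation symbols (e.g. `L` for left bundle branch block); the helper function name is illustrative:

```python
# MIT-BIH beat annotation symbols grouped into the five AAMI classes of
# Table 2 (symbol codes follow the PhysioNet annotation convention).
AAMI_CLASSES = {
    "N": {"N", "L", "R", "e", "j"},   # non-ectopic beats
    "S": {"A", "a", "J", "S"},        # supraventricular ectopic beats
    "V": {"V", "E"},                  # ventricular ectopic beats
    "F": {"F"},                       # fusion beats
    "Q": {"/", "f", "Q"},             # paced / fusion of paced / unclassified
}

def beat_class(symbol):
    """Map one annotation symbol to its AAMI class (None if unknown)."""
    for cls, symbols in AAMI_CLASSES.items():
        if symbol in symbols:
            return cls
    return None

print(beat_class("L"))   # left bundle branch block -> grouped under "N"
```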
| Component | Specification |
|-----------|---------------|
| Processor | 6th Generation Intel® Core™ i7 |
| RAM | 16 GB |
| Editor | Visual Studio Code |
| Programming language | Python 3.12 |
| Operating system | Windows 10 Pro |
Table 3. Details of the implementation environment.
| Classifier | Accuracy (%) | F1-score (%) | Precision (%) | Recall (%) |
|------------|--------------|---------------|----------------|-------------|
| KNN | 84 | 83.36 | 83.48 | 84 |
| RF | 77 | 73.20 | 76.18 | 77 |
| SVM | 78 | 72.34 | 83.14 | 78 |
| NB | 74 | 74.49 | 75.22 | 74 |
| DT | 83 | 84.29 | 87.01 | 83 |
| Our classifier (Chi-square) | 89 | 89.43 | 90.40 | 89 |
Table 4. Comparison of our classifier’s performance with that of the standard classifiers without feature selection.
| Classifier | Accuracy (%) | F1-score (%) | Precision (%) | Recall (%) |
|------------|--------------|---------------|----------------|-------------|
| KNN | 96 | 96.06 | 96.23 | 96 |
| RF | 93 | 92.89 | 92.83 | 93 |
| SVM | 95 | 94.92 | 94.89 | 95 |
| NB | 82 | 83.26 | 86.91 | 82 |
| DT | 91 | 91.11 | 91.26 | 91 |
| Our classifier (Chi-square) | 98 | 98.03 | 98.18 | 98 |
Table 5. Performance comparison of our classifier (Chi-square) and standard classifiers with feature selection.
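Tables 4 and 5 center on a Chi-square-based classifier. The underlying chi-square statistic measures dependence between a discretised feature and the class label; higher scores mean the feature carries more class information. A minimal sketch (data values are illustrative, not from the dataset):

```python
from collections import Counter

def chi_square_score(feature_bins, labels):
    """Chi-square statistic between a discretised feature and class labels.

    score = sum over cells of (observed - expected)^2 / expected,
    where expected assumes feature and class are independent.
    """
    n = len(labels)
    obs = Counter(zip(feature_bins, labels))
    f_tot = Counter(feature_bins)
    c_tot = Counter(labels)
    score = 0.0
    for f in f_tot:
        for c in c_tot:
            expected = f_tot[f] * c_tot[c] / n
            score += (obs[(f, c)] - expected) ** 2 / expected
    return score

bins = [0, 0, 1, 1, 0, 1]                   # feature perfectly tracks the class
classes = ["N", "N", "V", "V", "N", "V"]
print(chi_square_score(bins, classes))      # maximal dependence -> score 6.0
```

Features (or feature/class associations) can then be ranked by this score, with the highest-scoring ones retained.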
| Ref. | Year | Feature selection | Classifier | Accuracy (%) |
|------|------|-------------------|------------|---------------|
| [33] | 2020 | Wavelet + Gabor filter | Bat-rider optimization algorithm deep CNN (BaRoA-DCNN) | 93.19 |
| [34] | 2021 | Rapid-ramp (RR) + ECG segments | Artificial deep neural network (ADNN) + Conv1D | 94.70 |
| [35] | 2017 | Linear discriminant analysis (LDA) + Principal component analysis (PCA) + DWT | Weighted k-nearest neighbors (WKNN) | 96.12 |
| [19] | 2019 | Three-filter feature selection (TFFS) | RF + BFS | 85.58 |
| [36] | 2021 | Wavelet | CNN | 97.41 |
| [37] | 2020 | Z-score + High-order statistics (HOS) / DWT | RF | 93.45 |
| | | | KNN | 72.56 |
| | | | SVM | 90.09 |
| | | | LSTM | 92.16 |
| | | | Ensemble SVM | 94.40 |
| [38] | 2019 | Fractal dimension + Renyi entropy + Fuzzy entropy | KNN | 94.5 |
| [39] | 2019 | HOS + Local binary patterns (LBP) + RR | Ensemble SVM | 94.50 |
| [40] | 2023 | ECG segments + RR | Conv1D MF | 96.48 |
| [41] | 2022 | RR-intervals + Higher-order statistics + DWT | EasyEnsemble | 95.6 |
| [42] | 2021 | Attention mechanism | Dual-level attentional (DLA) + Convolutional long short-term memory (CLSTM) neural network | 88.76 |
| [43] | 2023 | Augmented attention | CNN + Attention | 96.19 |
| [44] | 2023 | Lightweight transformer | CNN | 97.66 |
| | | | Denoising autoencoder (DAE) | 97.93 |
| This work | | PSO | Chi-square | 98 |
Table 6. Comparison of the approach presented in this paper with similar work on the MIT-BIH arrhythmia dataset.