|dc.description.abstract||The main objective of this project is to increase the recognition rate by building a multimodal biometric recognition system that uses two different biometric characteristics as bio-signals.
Today, institutions frequently use biometric recognition systems to provide security in many areas, such as information security and physical security. The importance of these systems grows day by day with technological development and increasing demand. Recognition systems based on biometric characteristics are more reliable, because knowledge-based systems (e.g., passwords) are subject to forgetting or loss, and possession-based systems (e.g., cards) are subject to theft or guessing by third parties. However, fraud techniques also advance with technology, and biometric characteristics cannot be renewed if imitated; hence, a multimodal biometric recognition system may be a solution to this problem. At the same time, the use of multiple biometrics increases the security of such systems. In this thesis, a biometric recognition system that uses the electrocardiogram (ECG) and speech signals of a person was created. Since there was not enough time or opportunity to collect data, an artificial database was generated by obtaining these signals from various sources. First, the MIT-BIH Arrhythmia Database was used for the ECG signals. This database consists of 48 ECG records, belonging to 22 females and 26 males. In accordance with this database, a database was created for the speech signals, which were obtained from the website given in .
The features of the biometric signals were extracted by the AC/DCT (Autocorrelation/Discrete Cosine Transform) method for the ECG signals and by the Mel-Frequency Cepstral Coefficients (MFCC) method for the speech signals. The feature vectors obtained from feature extraction were then classified by the Gaussian Mixture Model (GMM) method. The scores obtained from the classification process were fused into a single score per individual, and the system proceeded to the decision-making step, where the recognition rates were obtained.
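The ECG branch of the pipeline described above (AC/DCT features scored by per-person GMMs) can be sketched as follows. This is a minimal illustration, not the thesis's implementation: the window length, lag count, coefficient count, GMM settings, and the sinusoidal stand-in for ECG windows are all illustrative assumptions.

```python
import numpy as np
from scipy.fftpack import dct
from sklearn.mixture import GaussianMixture

def ac_dct_features(window, num_lags=100, num_coeffs=20):
    """AC/DCT sketch: autocorrelate a signal window, keep the first
    num_lags non-negative lags, and compress them with a DCT.
    All parameter values here are illustrative, not the thesis's."""
    x = window - window.mean()                  # remove DC offset
    ac = np.correlate(x, x, mode="full")        # full autocorrelation
    ac = ac[ac.size // 2:][:num_lags]           # non-negative lags only
    ac /= ac[0]                                 # normalize by zero-lag energy
    return dct(ac, norm="ortho")[:num_coeffs]   # low-order DCT coefficients

# Toy stand-in for per-person ECG: sinusoids at person-specific rates.
rng = np.random.default_rng(0)
fs, n = 360, 720                                # MIT-BIH sampling rate, 2 s window
def toy_windows(freq, count):
    t = np.arange(n) / fs
    return [np.sin(2 * np.pi * freq * t) + 0.05 * rng.standard_normal(n)
            for _ in range(count)]

rates = {0: 0.9, 1: 1.2, 2: 1.5}                # fake "heart rates" in Hz
models = {}
for person, f in rates.items():
    feats = np.array([ac_dct_features(w) for w in toy_windows(f, 30)])
    models[person] = GaussianMixture(n_components=2, covariance_type="diag",
                                     reg_covar=1e-3, random_state=0).fit(feats)

# Identification: pick the model with the highest average log-likelihood.
probe = np.array([ac_dct_features(w) for w in toy_windows(1.2, 10)])
scores = {p: m.score(probe) for p, m in models.items()}
print(max(scores, key=scores.get))              # best-matching identity
```

The speech branch follows the same enrollment/scoring pattern, with MFCC vectors replacing the AC/DCT coefficients as GMM inputs.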
The recognition rate for the ECG signals was 87.50%, with 42 persons matched correctly. The recognition rate for 2-second speech signals was 58.33%, with 28 persons matched correctly. Normalization was applied before the fusion of these two score sets. The recognition rate after fusion was 70.83%, with 34 persons matched correctly. However, this fused recognition rate was lower than the recognition rate of the ECG signals alone. Therefore, 10-second speech signals were used instead of 2-second ones. In this case, the recognition rate of the speech signals was 97.9%, with 47 persons matched correctly. Then normalization was applied again and the two score sets were fused. After this fusion, the recognition rate reached 95.8%, with 46 persons matched correctly.||tr_TR
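The normalize-then-fuse step described above can be sketched as score-level fusion. The abstract does not specify which normalization scheme or fusion weights were used; min-max normalization and an equal-weight sum rule below are common choices and are assumptions here.

```python
import numpy as np

def min_max_normalize(scores):
    """Map raw matcher scores to [0, 1] so the two modalities are
    comparable before fusion (the normalization scheme is assumed)."""
    lo, hi = scores.min(), scores.max()
    return (scores - lo) / (hi - lo)

def fuse(ecg_scores, speech_scores, w_ecg=0.5):
    """Sum-rule fusion of two normalized score sets; the equal
    weighting is an illustrative assumption."""
    return (w_ecg * min_max_normalize(ecg_scores)
            + (1 - w_ecg) * min_max_normalize(speech_scores))

# Toy usage: scores of one probe against three enrolled identities.
ecg = np.array([0.9, 0.2, 0.4])
speech = np.array([10.0, 30.0, 5.0])
fused = fuse(ecg, speech)
print(int(np.argmax(fused)))  # identity with the highest fused score
```

In the decision-making step, the identity with the maximum fused score is accepted as the match, which is how the per-person match counts above would be tallied.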