
Hybridization process for text-independent speaker identification based on vector quantization model

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

This paper examines the performance of a text-independent Speaker Identification System (SIS) based on a template model using the Vector Quantization (VQ) method. The template model relies on a comparison process in which the speaker model with the smallest distortion score is identified. To analyze the system's decision and its confidence, a decision threshold was introduced as an acceptance condition, establishing a new notion of decision quality. Moreover, this threshold provides a discriminative criterion for selecting the training models used in the matching process and allows clustering with a second SIS. The results show that the proposed method reaches the desired performance. As a result, we were able to construct a hybridization process based on the SIS-VQ model.
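The matching step described in the abstract, scoring an utterance against each speaker's VQ codebook and accepting the best match only when its distortion falls below a decision threshold, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the averaged nearest-codeword distortion, and the Minkowski order parameter `p` are assumptions based on the abstract and keywords.

```python
import numpy as np

def vq_distortion(features, codebook, p=2):
    """Average Minkowski-p distance from each feature vector (e.g. an MFCC
    frame) to its nearest codeword in a speaker's VQ codebook."""
    # Pairwise distances, shape (n_frames, n_codewords)
    diffs = np.abs(features[:, None, :] - codebook[None, :, :])
    dists = (diffs ** p).sum(axis=2) ** (1.0 / p)
    # Nearest codeword per frame, averaged over the utterance
    return dists.min(axis=1).mean()

def identify(features, codebooks, threshold):
    """Pick the speaker whose codebook gives the smallest distortion;
    return None when even the best score exceeds the decision threshold."""
    scores = {spk: vq_distortion(features, cb) for spk, cb in codebooks.items()}
    best = min(scores, key=scores.get)
    if scores[best] <= threshold:
        return best, scores[best]
    return None, scores[best]
```

In a full system the codebooks would be trained with the LBG algorithm on MFCC features, and the rejected (above-threshold) cases are the ones the paper's hybridization scheme hands to a second SIS.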

Original language: English
Title of host publication: 2016 IEEE International Conference on Signal and Image Processing, ICSIP 2016
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 596-601
Number of pages: 6
ISBN (Electronic): 9781509023769
DOIs
State: Published - 27 Mar 2017
Event: 2016 IEEE International Conference on Signal and Image Processing, ICSIP 2016 - Beijing, China
Duration: 13 Aug 2016 - 15 Aug 2016

Publication series

Name: 2016 IEEE International Conference on Signal and Image Processing, ICSIP 2016

Conference

Conference: 2016 IEEE International Conference on Signal and Image Processing, ICSIP 2016
Country/Territory: China
City: Beijing
Period: 13/08/16 - 15/08/16

Keywords

  • Component
  • MFCC features
  • Minkowski metrics
  • speaker identification system
  • vector quantization based LBG

