2011 | 119 | 6A | 996-999
Article title

Convolutive Blind Signal Separation Spatial Effectiveness in Speech Intelligibility Improvement

Content
Title variants
Languages of publication
EN
Abstracts
EN
Blind signal separation is one of the latest methods of improving the signal-to-noise ratio. The main objective of blind source separation is to transform mixtures of recorded signals so that each source signal is obtained at the output of the procedure, under the assumption that the sources are statistically independent. For acoustic signals, correct separation is possible only if the source signals are spatially separated. This finding suggests analogies with classical spatial filtering (beamforming). In this study we analyzed the effect of the angular separation of two source signals (i.e. speech and babble noise) on the improvement of speech intelligibility. For this purpose, we chose a convolutive blind source separation algorithm based on second-order statistics only. As the sensor system, a dummy head was used (one microphone inside each ear canal), which simulated the two hearing aids of a hearing-impaired person. The speech reception threshold was determined before and after blind source separation. The results showed a significant improvement in speech intelligibility after applying blind source separation (the speech reception threshold fell by more than a dozen dB in some cases) when the source signals were angularly separated. However, when the source signals came from the same direction, no improvement was observed. Moreover, the effectiveness of the blind source separation depended to a large extent on the relative positions of the signal sources in space.
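To illustrate the core idea of separation from second-order statistics only, below is a minimal sketch in Python for the simpler instantaneous (non-convolutive) case, in the spirit of AMUSE/SOBI-type algorithms: whiten the mixtures, then eigendecompose a symmetrized time-lagged covariance matrix. This is not the paper's algorithm, which handles convolutive mixtures; the function name, mixing matrix, and test signals here are hypothetical stand-ins chosen for the demonstration.

```python
import numpy as np

def amuse_bss(x, tau=1):
    """Toy second-order-statistics BSS for an instantaneous 2-channel
    mixture: whiten, then eigendecompose one symmetrized lagged
    covariance. Works when the sources have distinct autocorrelations
    at lag tau (a simplified stand-in for the convolutive case)."""
    x = x - x.mean(axis=1, keepdims=True)          # center
    c0 = x @ x.T / x.shape[1]                      # zero-lag covariance
    d, e = np.linalg.eigh(c0)
    w = e @ np.diag(1.0 / np.sqrt(d)) @ e.T        # whitening matrix
    z = w @ x                                      # whitened mixtures
    c_tau = z[:, tau:] @ z[:, :-tau].T / (z.shape[1] - tau)
    c_tau = 0.5 * (c_tau + c_tau.T)                # symmetrize
    _, v = np.linalg.eigh(c_tau)                   # rotation to sources
    return v.T @ z                                 # estimated sources

# Demo: mix two signals with different temporal structure and separate
rng = np.random.default_rng(0)
t = np.arange(4000)
s = np.vstack([np.sin(0.03 * t),                   # narrowband stand-in
               rng.standard_normal(t.size)])       # broadband stand-in
a = np.array([[1.0, 0.6], [0.4, 1.0]])             # hypothetical mixing matrix
y = amuse_bss(a @ s)
# Each recovered row should correlate strongly with one original source
corr = np.abs(np.corrcoef(np.vstack([s, y]))[:2, 2:])
print(corr.max(axis=1))                            # close to 1 for both sources
```

Separation succeeds here because the sine and the noise have clearly different lag-1 autocorrelations; the real convolutive problem additionally requires unmixing filters (e.g. per-frequency separation), not just a single unmixing matrix.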
Keywords
EN
Publisher

Year
2011
Volume
119
Issue
6A
Pages
996-999
Physical description
Dates
published
2011-06
Contributors
author
  • Institute of Acoustics, Adam Mickiewicz University, Umultowska 85, 61-614 Poznań, Poland
author
  • Institute of Acoustics, Adam Mickiewicz University, Umultowska 85, 61-614 Poznań, Poland
Document Type
Publication order reference
Identifiers
YADDA identifier
bwmeta1.element.bwnjournal-article-appv119n6a19kz