Video telephony could be considerably enhanced by a tracking system that allows the speaker freedom of movement while maintaining a well-framed image for transmission over limited bandwidth. Commercial multi-microphone systems already exist that track speaker direction in order to reject background noise. Stereo sound and vision are complementary modalities in that sound is good for initialisation (where vision is expensive) whereas vision is good for localisation (where sound is less precise). Using generative probabilistic models and particle filtering, we show that stereo sound and vision can indeed be fused effectively, yielding a system more capable than either modality on its own.
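
The abstract names the technique but not its mechanics, so the following is a minimal, hypothetical sketch of the core idea: a particle filter whose weights fuse a broad audio (bearing) likelihood with a narrow visual likelihood. The 1-D state, Gaussian likelihoods, random-walk dynamics and all parameter values are illustrative assumptions and are not taken from the paper, which builds generative models of the actual audio and image measurements.

```python
# Minimal sketch (assumed, not the paper's model): particle-filter fusion of a
# coarse audio bearing cue and a precise visual cue for 1-D speaker position.
import numpy as np

rng = np.random.default_rng(0)

N = 500                                  # number of particles (illustrative)
particles = rng.uniform(-1.0, 1.0, N)    # normalised horizontal speaker position
weights = np.full(N, 1.0 / N)

def predict(particles, sigma_motion=0.05):
    """Random-walk dynamics: diffuse particles to model speaker movement."""
    return particles + rng.normal(0.0, sigma_motion, particles.shape)

def audio_likelihood(particles, bearing_meas, sigma=0.15):
    """Broad Gaussian likelihood: sound gives a coarse direction estimate."""
    return np.exp(-0.5 * ((particles - bearing_meas) / sigma) ** 2)

def vision_likelihood(particles, image_meas, sigma=0.03):
    """Narrow Gaussian likelihood: vision localises precisely once initialised."""
    return np.exp(-0.5 * ((particles - image_meas) / sigma) ** 2)

def resample(particles, weights):
    """Systematic resampling to combat weight degeneracy."""
    positions = (np.arange(N) + rng.uniform()) / N
    cum = np.cumsum(weights)
    cum[-1] = 1.0
    return particles[np.searchsorted(cum, positions)]

# One filtering step: predict, then fuse both cues by multiplying likelihoods.
true_pos = 0.3
bearing_meas = true_pos + rng.normal(0.0, 0.15)   # noisy audio bearing
image_meas = true_pos + rng.normal(0.0, 0.03)     # noisy visual detection

particles = predict(particles)
weights = audio_likelihood(particles, bearing_meas) * vision_likelihood(particles, image_meas)
weights /= weights.sum()
estimate = np.sum(particles * weights)
particles = resample(particles, weights)
print(f"fused estimate of speaker position: {estimate:.3f}")
```

Because the two likelihoods multiply, the broad audio cue keeps the filter from losing the speaker while the narrow visual cue sharpens the posterior, which mirrors the complementarity the abstract describes.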





    Title:

    Sequential Monte Carlo fusion of sound and vision for speaker tracking


    Contributors:
    Vermaak, J. (author) / Gangnet, M. (author) / Blake, A. (author) / Perez, P. (author)


    Publication date:

    2001-01-01


    Format / extent:

    850326 bytes


    Media type:

    Conference paper


    Format:

    Electronic resource


    Language:

    English



    Sequential Monte Carlo Fusion of Sound and Vision for Speaker Tracking

    Vermaak, J. / Gangnet, M. / Blake, A. et al. | British Library Conference Proceedings | 2001




    Sequential Monte Carlo Filtering for Multi-Aspect Detection/Tracking

    Bruno, M. G. S. / de Araújo, R. V. / Pavlov, A. G. et al. | British Library Conference Proceedings | 2005