We present an approach to automatically segment and label a continuous observation sequence of hand gestures for fully unsupervised model acquisition. The method is based on the assumption that gestures can be viewed as repetitive sequences of atomic components, similar to phonemes in speech, governed by a high-level structure controlling their temporal order. We show that the generating process for the atomic components can be described in gesture space by a mixture of Gaussians, with each mixture component tied to one atomic behaviour. The mixture components are determined using a standard expectation-maximisation approach, while the number of components is selected using an information criterion, the minimum description length.
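
As a rough illustration of the model-selection loop described above, the sketch below fits Gaussian mixtures of increasing size with EM and keeps the one with the lowest description-length score. It is not the authors' implementation: it uses scikit-learn's GaussianMixture, substitutes the closely related BIC score for the MDL criterion, and runs on synthetic 2-D "gesture space" data; the function name select_gmm_by_mdl and all parameter values are illustrative assumptions.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    def select_gmm_by_mdl(X, max_components=10):
        """Fit GMMs with 1..max_components via EM and return the one
        minimising a description-length-style score (BIC used as a proxy for MDL)."""
        best_model, best_score = None, np.inf
        for k in range(1, max_components + 1):
            gmm = GaussianMixture(n_components=k, covariance_type="full",
                                  random_state=0).fit(X)   # EM fit for k components
            score = gmm.bic(X)                              # stand-in for the MDL criterion
            if score < best_score:
                best_model, best_score = gmm, score
        return best_model

    # Toy usage on synthetic 2-D data standing in for gesture-space features:
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(loc=c, scale=0.3, size=(200, 2))
                   for c in [(0.0, 0.0), (2.0, 2.0), (4.0, 0.0)]])
    model = select_gmm_by_mdl(X)
    labels = model.predict(X)   # each mixture component ~ one atomic gesture component
    print("selected number of components:", model.n_components)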


    Title:
    Auto clustering for unsupervised learning of atomic gesture components using minimum description length

    Contributors:
    Walter, M. (author) / Psarrou, A. (author) / Gong, S. (author)

    Publication date:
    2001-01-01

    Size:
    1082792 bytes

    Type of media:
    Conference paper

    Type of material:
    Electronic Resource

    Language:
    English