Hand shape classification using depth data for unconstrained 3D interaction

Gallo, L.
2014-01-01

Abstract

In this paper, we introduce a novel method for view-independent hand pose recognition from depth data. The proposed approach, which does not rely on color information, estimates the shape and orientation of the user's hand without requiring the user to maintain a fixed position in 3D space. We use principal component analysis to estimate the hand orientation in space, Flusser moment invariants as image features, and two SVM-RBF classifiers for visual recognition. Moreover, we describe a novel weighting method that exploits the orientation and velocity of the user's hand to assign a score to each hand shape hypothesis. The complete processing chain is described and evaluated in terms of real-time performance and classification accuracy. As a case study, the method has also been integrated into a touchless interface for 3D medical visualization, which allows users to manipulate 3D anatomical parts with up to six degrees of freedom. Furthermore, the paper discusses the results of a user study aimed at assessing whether using hand velocity as an indicator of the user's intention to change hand posture yields an overall gain in classification accuracy. The experimental results show that, especially in the presence of out-of-plane rotations of the hand, the velocity-based weighting method significantly increases pose recognition accuracy.
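
The abstract only sketches the processing chain, but two of its central ideas lend themselves to a short illustration: estimating the hand's principal axes from the depth point cloud via PCA, and blending the classifiers' per-class scores with the previous hypothesis according to hand velocity. The following Python sketch is a minimal illustration under stated assumptions; the function names, velocity thresholds, and the linear blending rule are illustrative and are not taken from the paper.

```python
# Minimal sketch (not the authors' implementation): PCA-based hand orientation
# from a depth point cloud, and a velocity-based weighting of per-class scores.
# Function names, thresholds, and the blending rule are illustrative assumptions.
import numpy as np


def estimate_hand_orientation(points_xyz):
    """Principal axes of an (N, 3) hand point cloud via PCA.

    The eigenvectors of the covariance matrix give the hand's dominant
    directions; the least-variance axis approximates the palm normal.
    """
    centered = points_xyz - points_xyz.mean(axis=0)
    cov = np.cov(centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)          # eigenvalues in ascending order
    axes = eigvecs[:, np.argsort(eigvals)[::-1]].T  # rows sorted by variance, descending
    return axes                                     # axes[0] = main axis, axes[2] ~ palm normal


def velocity_weighted_scores(current_scores, previous_scores, hand_speed,
                             v_still=0.05, v_fast=0.40):
    """Blend the current per-class scores with the previous hypothesis.

    Assumption: a posture change is treated as intentional only when the hand
    is nearly still, so fast translation shifts the weight toward the previous
    hypothesis. hand_speed is in m/s; the thresholds are made up.
    """
    w = np.clip((v_fast - hand_speed) / (v_fast - v_still), 0.0, 1.0)
    blended = w * current_scores + (1.0 - w) * previous_scores
    return blended / blended.sum()


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic flat blob standing in for a segmented hand point cloud (metres).
    hand_points = rng.normal(size=(500, 3)) * np.array([0.08, 0.04, 0.01])
    axes = estimate_hand_orientation(hand_points)
    print("approx. palm normal:", axes[2])

    current = np.array([0.2, 0.7, 0.1])   # e.g. normalized SVM decision scores
    previous = np.array([0.8, 0.1, 0.1])  # previous hand-shape hypothesis
    print(velocity_weighted_scores(current, previous, hand_speed=0.5))
```

In this sketch, a hand speed above the assumed v_fast reproduces the previous hypothesis unchanged, which mirrors the idea of using velocity as an indicator of whether a posture change is intentional.
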
Keywords
Hand shape classification
static hand pose recognition
touchless interface

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.12607/25283