# Model: Handpose

Overview and basic demo
  • πŸ– 21 3D hand landmarks
  • 1️⃣ Only one hand at a time is supported
  • 🧰 Includes THREE r124, TensorFlow 2.1

This model includes a fingertip raycaster, center of palm object, and a minimal THREE environment which doubles as a basic debugger for your project.

# Usage

# With defaults

```js
const handsfree = new Handsfree({handpose: true})
```

# With config

```js
const handsfree = new Handsfree({
  handpose: {
    enabled: true,

    // The backend to use: 'webgl' or 'wasm'
    // 🚨 Currently only webgl is supported
    backend: 'webgl',

    // How many frames to go without running the bounding box detector.
    // Set to a lower value if you want a safety net in case the mesh detector produces consistently flawed predictions.
    maxContinuousChecks: Infinity,

    // Threshold for discarding a prediction
    detectionConfidence: 0.8,

    // A float between [0, 1]: the threshold for deciding whether boxes overlap too much during non-maximum suppression
    iouThreshold: 0.3,

    // The threshold for deciding when to remove boxes based on score during non-maximum suppression
    scoreThreshold: 0.75
  }
})
```

# Data

```js
// Get the [x, y, z] of various landmarks
handsfree.data.handpose.landmarks[4] // Thumb tip
handsfree.data.handpose.landmarks[8] // Index fingertip

// Normalized landmark values from [0 - 1] for the x and y
// The z isn't really depth but "units" away from the camera so those aren't normalized

// How confident the model is that a hand is in view [0 - 1]
handsfree.data.handpose.handInViewConfidence

// The top left and bottom right pixels containing the hand in the frame
handsfree.data.handpose.boundingBox = {
  topLeft: [x, y],
  bottomRight: [x, y]
}

// [x, y, z] of various hand landmarks
handsfree.data.handpose.annotations = {
  thumb: [...[x, y, z]], // 4 landmarks
  indexFinger: [...[x, y, z]], // 4 landmarks
  middleFinger: [...[x, y, z]], // 4 landmarks
  ringFinger: [...[x, y, z]], // 4 landmarks
  pinkyFinger: [...[x, y, z]], // 4 landmarks
  palmBase: [[x, y, z]] // 1 landmark
}
```
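As one concrete use of the `annotations` data, here's a sketch of a pinch measure: the distance between the thumb tip and index fingertip. `pinchDistance` is a hypothetical helper (not part of Handsfree), and it assumes each annotation array lists its landmarks base → tip, so the last entry is the fingertip:

```javascript
// Euclidean distance between the thumb tip and index fingertip.
// pinchDistance is a hypothetical helper, not part of Handsfree;
// it assumes each annotation array is ordered base → tip.
function pinchDistance (annotations) {
  const [x1, y1, z1] = annotations.thumb[3]       // thumb tip
  const [x2, y2, z2] = annotations.indexFinger[3] // index fingertip
  return Math.hypot(x2 - x1, y2 - y1, z2 - z1)
}

// Mock data standing in for handsfree.data.handpose.annotations
const annotations = {
  thumb: [[0, 0, 0], [1, 0, 0], [2, 0, 0], [3, 0, 0]],
  indexFinger: [[0, 1, 0], [1, 1, 0], [2, 1, 0], [3, 4, 0]]
}
console.log(pinchDistance(annotations)) // 4
```

Inside a plugin you could call `pinchDistance(data.handpose.annotations)` every frame and treat values below some threshold as a "pinch".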

# Examples of accessing the data

```js
const handsfree = new Handsfree({handpose: true})

// From anywhere
console.log(handsfree.data.handpose)

// From inside a plugin
handsfree.use('logger', data => {
  if (!data.handpose) return
  console.log(data.handpose)
})

// From an event
document.addEventListener('handsfree-data', event => {
  const data = event.detail
  if (!data.handpose) return
  console.log(data.handpose)
})
```

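Building on the event pattern above, here's a sketch that ignores low-confidence frames and reduces the bounding box to a single tracking point. `trackHand` is a hypothetical helper and the `0.9` cutoff is an arbitrary choice:

```javascript
// Reduce a handpose frame to a rough on-screen hand position,
// skipping frames where the model isn't confident a hand is in view.
// trackHand is a hypothetical helper; 0.9 is an arbitrary cutoff.
function trackHand (handpose, minConfidence = 0.9) {
  if (!handpose || handpose.handInViewConfidence < minConfidence) return null
  const [x1, y1] = handpose.boundingBox.topLeft
  const [x2, y2] = handpose.boundingBox.bottomRight
  return {x: (x1 + x2) / 2, y: (y1 + y2) / 2}
}

// Mock frame standing in for event.detail.handpose
const frame = {
  handInViewConfidence: 0.95,
  boundingBox: {topLeft: [100, 50], bottomRight: [300, 250]}
}
console.log(trackHand(frame)) // { x: 200, y: 150 }
```

Inside the `handsfree-data` listener you'd call `trackHand(data.handpose)` and move your pointer (or skip the frame) based on the result.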

# Three.js Properties

The following helper Three.js properties are also available:

```js
// A THREE Arrow object protruding from the index finger
// - You can use this to calculate pointing vectors
handsfree.model.handpose.three.arrow

// The THREE camera
handsfree.model.handpose.three.camera

// An additional mesh that is positioned at the center of the palm
// - This is where we raycast the Hand Pointer from
handsfree.model.handpose.three.centerPalmObj

// The meshes representing each skeleton joint
// - You can tap into the rotation to calculate pointing vectors for each fingertip
handsfree.model.handpose.three.meshes

// A reusable THREE raycaster
// @see https://threejs.org/docs/#api/en/core/Raycaster
handsfree.model.handpose.three.raycaster

// The THREE scene and renderer used to hold the hand model
handsfree.model.handpose.three.scene
handsfree.model.handpose.three.renderer

// The screen object. The Hand Pointer raycasts from the centerPalmObj
// onto this screen object. The point of intersection is then mapped to
// the device screen to position the pointer
handsfree.model.handpose.three.screen
```

# Examples

**Handsfree Jenga**
*Person playing virtual Jenga by pinching and pulling on the blocks in the air with the guide of a Palm Pointer*

This experiment led to the palmPointer plugin which was used here to guide the hand on the screen. The pinch gesture used here to "grab" the blocks was then generalized to all fingers, with 3+ events per finger.

# Add your project
If you've made something with this model I'd love to showcase it here! DM me on Twitter, make a pull request, or find us on Discord.