
VaM 1.x [POC] Latent Motion Synthesis: Real-time AI Skeleton Driving (VAE + Voxta)


grimes_AIVRLab · New member
Joined: Jan 21, 2026 · Messages: 8 · Reactions: 9
Hello Creators,
I’m excited to share a project I’ve been working on: a real-time skeleton-driving framework for VaM using a Variational Autoencoder (VAE).
Unlike traditional timeline animations or recorded loops, this system generates per-bone pose data in real time based on the emotional context of the conversation.

How it works:
Emotion Inference: Voxta acts as the command center, inferring emotional states (e.g., Excited, Bored) from the conversation.
Generative Motion: A custom Python hook server receives these states and generates motion data via a VAE model trained on animation data.
Hybrid Control: While the body and limbs are AI-driven, eye gaze, blinking, and basic head motions remain rule-based for stability.
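To make the pipeline concrete, here is a minimal sketch of the generative-motion step: an emotion label selects a region of the VAE's latent space, a latent is sampled near it, and a decoder maps it to per-bone rotations. Everything here is illustrative and hypothetical (the anchor vectors, the bone count, and the linear stand-in for the trained decoder are all assumptions, not the actual model).

```python
import numpy as np

# Hypothetical emotion labels mapped to latent-space anchor vectors.
# (Assumption: the real VAE learns where emotions live in latent space;
# these fixed anchors are stand-ins for illustration.)
EMOTION_ANCHORS = {
    "Excited": np.array([1.0, 0.5]),
    "Bored":   np.array([-1.0, -0.5]),
}

LATENT_DIM = 2
NUM_BONES = 3  # stand-in for the full VaM skeleton

# Stand-in for a trained VAE decoder: a single linear map from latent
# vector to per-bone Euler rotations. A real decoder is a neural network
# trained on animation data.
rng = np.random.default_rng(0)
DECODER_W = rng.normal(size=(NUM_BONES * 3, LATENT_DIM))

def generate_pose(emotion: str, noise_scale: float = 0.2) -> np.ndarray:
    """Sample a latent near the emotion's anchor and decode it to a pose."""
    z = EMOTION_ANCHORS[emotion] + rng.normal(scale=noise_scale, size=LATENT_DIM)
    return (DECODER_W @ z).reshape(NUM_BONES, 3)

pose_a = generate_pose("Excited")
pose_b = generate_pose("Excited")
# Two samples of the same emotion differ slightly, which is why the
# resulting motion never repeats exactly.
```

Because each frame is decoded from a freshly sampled latent, the hook server only needs to stream emotion labels from Voxta; the per-bone variation comes from the sampling itself.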

Showcase Video:

Why this approach?
By moving away from pre-recorded timelines, characters can have a "living" presence with natural swaying and emotional transitions that never repeat exactly the same way.
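One way to get those smooth emotional transitions is to interpolate in latent space rather than crossfading between animations. A minimal sketch, assuming the same hypothetical latent anchors as above (the vectors and step count are illustrative, not the project's actual values):

```python
import numpy as np

# Hypothetical latent anchors for two emotional states (illustration only;
# a trained VAE would place emotions in latent space itself).
z_excited = np.array([1.0, 0.5])
z_bored = np.array([-1.0, -0.5])

def blend_latent(t: float) -> np.ndarray:
    """Linear interpolation in latent space. Decoding each intermediate
    latent yields a gradual pose transition instead of a hard cut
    between two animations."""
    return (1.0 - t) * z_bored + t * z_excited

# Stepping t from 0 to 1 over a few seconds gives a smooth emotional shift.
steps = [blend_latent(t) for t in np.linspace(0.0, 1.0, 5)]
```

Every intermediate latent decodes to a plausible pose, so the character drifts between moods instead of snapping between clips.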
I’d love to hear your thoughts or feedback on this generative approach!
 

Attachments

  • Screenshot 2026-03-02 17.31.26.png (1.4 MB)