VaM 1.x [POC] Latent Motion Synthesis: Real-time AI Skeleton Driving (VAE + Voxta)


grimes
New member · Joined Jan 21, 2026 · Messages: 1 · Reactions: 0
[Attached screenshot: スクリーンショット 2026-03-02 17.31.26.png]

Hello Creators,
I’m excited to share a project I’ve been working on: a real-time skeleton-driving framework for VaM using a Variational Autoencoder (VAE).
Unlike traditional timeline animations or recorded loops, this system generates per-bone pose data in real time, driven by the emotional context of the conversation.
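
To make "per-bone pose data" concrete, here is a minimal sketch of what one generated frame might look like, assuming a bone-name-to-quaternion layout; the bone names and values are illustrative only, not the project's actual format.

```python
# Hypothetical single-frame payload: bone name -> local rotation as a quaternion (x, y, z, w).
# Bone names and values are illustrative only; the real driver may use a different format.
frame_pose = {
    "hip":       (0.000, 0.017, 0.000, 1.000),
    "chest":     (0.012, 0.000, 0.030, 0.999),
    "head":      (0.000, 0.052, 0.000, 0.999),
    "lShoulder": (0.000, 0.000, 0.098, 0.995),
    "rShoulder": (0.000, 0.000, -0.098, 0.995),
}
```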

How it works:
Emotion Inference: Voxta acts as the command center, inferring emotional states (Excited, Bored, etc.) from the conversation.
Generative Motion: A custom Python hook server receives these states and generates motion data via a VAE model trained on animation data (a rough sketch follows this list).
Hybrid Control: While the body and limbs are AI-driven, eye gaze, blinking, and basic head motions remain rule-based for stability.
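
Here is a rough sketch of how these three pieces could fit together, assuming a PyTorch conditional-VAE decoder, an illustrative emotion label set, and a placeholder standing in for the real Voxta integration. None of the class names, dimensions, or functions below come from the project itself; they only show the shape of the pipeline.

```python
# Hypothetical reconstruction of the pipeline described above.
# All names, dimensions, and endpoints are assumptions, not the project's actual code.
import time
import torch
import torch.nn as nn

BONES = ["hip", "chest", "head", "lShoulder", "rShoulder"]   # illustrative subset
EMOTIONS = ["neutral", "excited", "bored"]                   # illustrative label set

class PoseDecoder(nn.Module):
    """Decoder half of a conditional VAE: (latent z, emotion one-hot) -> per-bone rotations."""
    def __init__(self, latent_dim=16, n_emotions=len(EMOTIONS), n_bones=len(BONES)):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + n_emotions, 128),
            nn.ReLU(),
            nn.Linear(128, n_bones * 4),   # 4 values per bone: a quaternion
        )
        self.n_bones = n_bones

    def forward(self, z, emotion_onehot):
        raw = self.net(torch.cat([z, emotion_onehot], dim=-1))
        quats = raw.view(-1, self.n_bones, 4)
        return quats / quats.norm(dim=-1, keepdim=True)       # normalize to unit quaternions

def current_emotion_from_voxta():
    """Placeholder for whatever state Voxta exposes; here we just rotate through labels."""
    return EMOTIONS[int(time.time()) % len(EMOTIONS)]

def rule_based_overrides(pose):
    """Keep gaze and blinking deterministic, as in the 'Hybrid Control' item above."""
    pose["head_look_at"] = "camera"
    pose["blink"] = (time.time() % 4.0) < 0.15                # brief blink every ~4 s
    return pose

decoder = PoseDecoder()   # in practice: load weights trained on animation data

def generate_frame():
    emotion = current_emotion_from_voxta()
    onehot = torch.zeros(1, len(EMOTIONS))
    onehot[0, EMOTIONS.index(emotion)] = 1.0
    z = torch.randn(1, 16)                                    # sample the latent prior
    with torch.no_grad():
        quats = decoder(z, onehot)[0].numpy()
    pose = {bone: tuple(q) for bone, q in zip(BONES, quats)}
    return rule_based_overrides(pose)

if __name__ == "__main__":
    print(generate_frame())
```

Because each frame is decoded from a fresh latent sample, two runs with the same emotion state still produce slightly different motion, which is what gives the non-repeating quality described below.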

Showcase Video:

Why this approach?
By moving away from pre-recorded timelines, characters can have a "living" presence, with natural swaying and emotional transitions that never repeat in exactly the same way.
I’d love to hear your thoughts or feedback on this generative approach!
 