An AI companion plugin for Virt-A-Mate
Voxta brings your VaM characters to life. Talk to them with your voice, they talk back in their own voice, react to what's happening in the scene, and move naturally through animations you drop into Timeline.
Under the hood, the plugin connects VaM to Voxta Server — the brain that listens, thinks, and speaks. It can run right on your PC, or in the cloud. You don't need to touch any of it — just install, pick a character, and start chatting.
What the plugin gives you
▸ Voice or text chat
Mic in → your character hears you → thinks → speaks back. Audio plays right from the Person atom — 3D positioned and lip-synced via VaM's built-in Auto Behaviors. Speak naturally and the character responds, or type if you prefer.
▸ State-driven animation
The plugin exposes idle, listening, thinking, and speaking as state triggers. Drop a Timeline animation on each state and your character breathes, looks around, leans in to listen, and settles when talking — no code required.
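Conceptually, the state triggers amount to a small state-to-clip lookup. The sketch below is purely illustrative — the four state names match the plugin's triggers, but the clip names and the dispatch function are invented for this example; in VaM you wire this up through the trigger UI, not in code:

```python
# Sketch of per-state animation dispatch. The state names come from the
# plugin's state triggers; the clip names and this function are
# hypothetical stand-ins for what you wire up in the VaM UI.
STATE_CLIPS = {
    "idle": "Idle Breathe",
    "listening": "Lean In",
    "thinking": "Glance Around",
    "speaking": "Talk Gestures",
}

def on_state_change(state, play_clip):
    """Play the Timeline clip bound to the new conversation state."""
    clip = STATE_CLIPS.get(state)
    if clip is not None:
        play_clip(clip)
```

Each state change plays whatever clip you bound to it, and unknown states are simply ignored — which is why you can start with just an idle animation and add the rest later.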
▸ Action inference — the magic layer
Voxta runs two things in parallel when you talk:
- One AI writes what the character says.
- Another AI watches the conversation and decides what the character should do.
You teach it your actions in plain English, right from the plugin UI. For example:
```
action: jump
when: whenever {{ char }} is happy or excited
```
That's it. No scripts, no if-statements. When the conversation matches, Voxta fires a VaM trigger — hook it up to a Timeline clip, a morph, a sound, anything VaM can trigger. Add as many as you want: wave, blush, come_here, sit_down — your character reacts because the AI understood the moment, not because someone pressed a button.
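The two-track flow described above can be sketched as follows. Everything here is illustrative — the action names, the plain-English conditions, and the function shape are assumptions for the example, not Voxta's actual API; the real classification happens server-side:

```python
# Hypothetical sketch of action inference: one model writes the reply,
# a second classifies which registered action (if any) fits the moment.
# The registry and this handler are examples, not Voxta's API.
ACTIONS = {
    "jump": "whenever {{ char }} is happy or excited",
    "wave": "when {{ char }} greets someone",
}

def handle_inference(chosen_action, fire_trigger):
    """Fire the matching VaM trigger when the classifier picks an action."""
    if chosen_action in ACTIONS:
        fire_trigger(chosen_action)
```

The point of the design is that the condition is natural language, not code — you describe the moment, the second AI decides when the conversation matches it, and anything it picks that you haven't registered is ignored.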
▸ Scene context — the character knows where they are
Your character doesn't live in a void. You can feed them what's happening in the scene and they'll react to it naturally.
Voxta gives you context slots you can write to from VaM itself. For example — when your character walks into the kitchen, a Timeline trigger can push something like:
```
The kitchen smells like fresh bread and coffee. There's a plate of pancakes on the table. {{ char }} hasn't eaten all morning and is very hungry.
```
Now when they speak, they might mention the pancakes, ask if they can eat, or comment on the smell — without you scripting a single line of dialogue. Change rooms, change the time of day, change their mood — just update the slot and the character adapts.
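As a rough sketch of how the {{ char }} placeholder behaves, the slot text is a template that gets the character's name substituted before the AI reads it. How Voxta performs this server-side is an assumption on my part; only the placeholder syntax comes from the examples above:

```python
def render_context(template, char_name):
    # Substitute the {{ char }} placeholder with the character's name.
    # The substitution mechanics are assumed for illustration; the
    # {{ char }} syntax itself is what the plugin UI uses.
    return template.replace("{{ char }}", char_name)

slot = "{{ char }} hasn't eaten all morning and is very hungry."
```

Writing slots in third person with the placeholder, rather than hard-coding a name, keeps the same context text reusable across characters.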
▸ Up to 3 characters in one scene
Voxta currently supports up to 3 AI characters in the same VaM scene. They can talk to each other — and you can jump into the conversation anytime, whether by voice or text. Real group scenes, not just one-on-one chat.
What you need
Voxta Server — this is what powers the AI. You've got three ways to run it, pick whatever fits you best:
- Fully local & offline — everything runs on your own PC. Total privacy, no subscriptions, no sending data anywhere. Heads up: you'll need a pretty beefy PC to run both VaM and the AI models at the same time.
- Voxta Cloud — our own cloud service, designed with privacy in mind. No hefty PC required, and nothing leaks to third-party AI companies. The easy button if you want it to just work.
- Bring your own API keys — if you already use OpenAI, Anthropic (Claude), Google, ElevenLabs, etc., just plug in your keys and go. Traditional setup, uses their servers, you pay them directly.
Inside Voxta you pick a brain (LLM), a voice (TTS), and optionally ears (STT) for voice input. The setup wizard walks you through it — most users are up and running in under 15 minutes.
Works alongside
Timeline for per-state and per-action animations · VAMOverlays for subtitles · Glance for eye tracking · VaM's built-in Lip Sync for mouth movement · VAMMoan for moans, gasps, and other vocal reactions triggered alongside the voice.
Voxta plugs into the tools you already use — it doesn't replace them.
Setup at a glance
- Install the Voxta server. Pick your LLM, TTS, and STT.
- Create your character (name, personality, voice) in the Voxta web UI.
- In VaM, add Voxta.cslist to a Person atom and select the character from the dropdown.
- Wire the State and Action triggers into your Timeline animations.
- Talk.
Where can I find scenes using Voxta?
- Voxta Demo Scene — recommended as a first stop to see how things work.
- Official Voxta-Compatible Scenes — curated scenes built to showcase advanced interactions.
- Free and paid community scenes — see what the community is building.
- All Voxta-tagged content — every scene and character across categories.
How do I use Voxta?
- Read the documentation.
- Watch our YouTube walkthroughs.
- Join us on Discord.
What Voxta can't do yet
✗ It doesn't animate your character
Voxta decides when the character should jump, wave, or blush — but it doesn't move bones or create animations on the fly. You (or the scene creator) bring the animations via Timeline, and Voxta fires them at the right moment.
✗ No chat window inside VaM yet
If you want to type instead of talk, you currently open Voxta's web UI in a separate window and chat there. It works, it's just not seamless. A proper in-VaM chat panel is coming.
✗ The character can't see you yet
Right now your character hears you and reads what the scene tells them about the environment — but they don't actually see the world through their eyes. We're actively working on a vision module that plugs into the plugin, so the character will be able to look around and describe what's in front of them.
✗ Touch isn't built into the plugin — but there's a companion for that
The core plugin doesn't handle touch on its own. If you want your character to react when you touch them, grab VoxtaTouch — a companion plugin that wires VaM collisions straight into Voxta so the character notices and reacts naturally. No scripting needed.
✗ Requires the Voxta Server running separately
Voxta isn't packaged inside VaM itself. You run the Voxta app on your PC (or use Voxta Cloud), and the plugin connects to it. One-time setup, but it's an extra step.
Learn more
- More videos: https://hub.virtamate.com/threads/voxta-ai.44874/
- Website: https://voxta.ai
- Patreon: https://patreon.com/voxta
- YouTube: https://www.youtube.com/@Voxta
- Twitter: https://twitter.com/VoxtaAi
- Discord: https://discord.gg/voxta
Want to create your own scenes?
We'd love to see what you build. Voxta was made to give VaM creators a new layer to play with — characters that talk, react, and feel alive — and we actively encourage the community to experiment with it and push the experience further.
No permission needed, no gatekeeping. Restrictions have been lifted and anyone can build and publish Voxta-powered scenes freely. Whether it's a chill conversation scene, an interactive roleplay, or something nobody has tried yet — go for it.
Just follow the guidelines so your scenes play nicely for everyone.
If you've got an idea and want feedback, or just want to share what you're cooking, hop into our Discord — we genuinely enjoy seeing what the community makes.