VAM AI
Hi everyone.

Someone recommended I write this description. I'm happy to use it. Thanks, Buddy LeChuck99 :)

This plugin adds an experimental AI to Virt-A-Mate. Instead of playing only fixed animations, it attempts to learn body movements by performing specific actions.
It can:
Control body parts
Detect position, contact, and penetration
React to the current situation
Let the user direct or "teach" the movements while it learns
Over time, the system learns by analyzing what works and what doesn't.
This is a prototype that trains the VaM character to move more intelligently, rather than simply following pre-recorded movements/animations.

This is not a finished product.
It is a prototype that requires time to set up and train, plus patience and testing, and it may be unstable in VaM1.
It is therefore more of an advanced experiment than a production-ready tool.

I’ve done a huge amount of work, but I’m still only at the beginning of my path.


As far as I know, this is the first neural network working directly inside Virt-A-Mate itself, and I'm proud to be among the first to do this in VAM.


This is a prototype, so treat it not as a final product but as the beginning of something bigger. I started this project a long time ago, but I abandoned it because I didn't have enough experience, knowledge, or scripts. Now, after half a year away, I've come back to it over the past couple of weeks.


I want to express my gratitude to @MeshedVR for the VAM game. For me, it’s a huge sandbox where I can sharpen my skills. I don’t know of any equivalent. Maybe only Unity or Unreal. But that’s another story and not about VAM.


I also want to express my gratitude to @CheesyFX. My path in VAM started with your scripts and your automation. For me, you remain an example and an authority for the best automation on VAMHub.


I also want to thank @Skynet for Dynamic Limbs. I used your plugin and quickly reworked it for myself. I could have written everything from scratch, but your plugin saved me time. Thank you.



I will need to write a separate article in the plugins section of the forum about working with neural networks and this plugin, but I'll do that later. Right now I'm very busy developing this small neural network so I can later transfer everything I've built into a bigger desktop neural network, so for now I don't have enough time to write a full article sharing my experience with you.


Also keep in mind that this is not a text neural network: this is RL. Like any RL, it's very temperamental and needs a special approach to train correctly. Rewards and penalties come not only from the code inside the script (for example, for correctly inserting the penis into the right hole at the right angle, for thrusting along the penis, or for bringing the hips in), but also from the actions you perform with it. The neural network likes to cheat and find places where it gets more reward, and it will not always follow the expected result, so I'm still in the process of taming it. A lot is still unimplemented: I need to add many rules, and test each one first, so the network does what it is supposed to do. The final result will also depend on you as players. If you do things wrong, all you will get is frustration and disappointment.


Now about my neural network


This is a small SAC neural network with elements of dream prediction, and I'm sharpening my skills on it. I also have a server running a real DreamerV3 network with action prediction on my desktop PC. It is much stronger and smarter, but it is still in development. If anyone is interested and understands this stuff, I'll gladly share the code and post it on GitHub.


I have no goal of getting donations from this. If you want to thank me, of course, I can add a link to my crypto account on my script page, but that is not the main thing for me. I’m just interested in doing this, it’s my hobby. @MeshedVR gave me the opportunity to become part of VAM and share my work with you, make VAM better, and attract new people.


My neural network is a prototype. It can be used for anything, whether that's training dances or reacting to music. I left 20 empty channels, but you can increase that number yourselves or add new neuron layers, though that will put extra load on the PC during training. In the future I was thinking of letting you write scripts yourselves (for example through ChatGPT) for whatever you need to train it for, connect them to this neural network, and train it for your own tasks: dancing, moving to the beat of music, fucking, or being your companion, with the Sex button replaced by a general Start button. But maybe I'll end up implementing that in my big neural network instead.


How my neural network works in VAM


VAM1 itself uses old technology and is very unstable compared to VAM2. I've tested both versions. In VAM1 the game can crash. Unfortunately, that's just how it works. I tried to build protection against this into my script, and in VAM1 I constantly have to balance between the training load on the game, the game breaking, and crashes. This takes a lot of time. And there is no full guarantee that you won't hit an error and a game crash at the moment of loading or saving weights.

One more thing. I know that neural networks can be trained on Google servers, or that you can recreate the environment in Unity and train there many times faster, but I don't know how to do that and I don't have time for it while I'm writing and tuning the neural network. So I can only train it the same way you can: inside the game itself, in real time.

I still have a lot to do: describing the rules and rewards for correct movement along the penis, and so on. I also want to add separate behavior-profile buttons that tell the neural network how to move, for example with the hips in sex mode: fast, medium, slow, the way you want it, or different profiles for different styles of behavior, like in my Sex-Machine script.

Now I think it's worth moving on to an overview of how the plugin works. For now it will be brief and not fully complete, but it will give you an idea of what is what.

0

I haven't had time to sit and train it for long. I'm in the process of modernizing this neural net, so I don't know what the result of long training will be. On top of that, some changes and modernization make the old weights unloadable, and you have to start all over again. So try it and post your results and videos in the Discussions tab; I really will be interested to see what you got.

Important

Do not use Save Full. This function saves not only the weights but also all the memory accumulated in the game. That memory is needed for better learning in the moment. This is a test function. If the file grows large, around 20–30 megabytes, then after a couple of saves or loads of that size you will most likely get a guaranteed game crash.

1

Lock the body in the position you need before starting training. Leave only the hip, pelvis, and stomach controllers free so they don't interfere with its movement. Let it first smell the dick and learn to correctly put the right hole onto it, and only then free the other controllers as training progresses. Don't expect it to learn in 5 minutes; it takes time. The minimum needed just to see anything at all is half an hour to an hour. Just be patient.

2

When the pose is locked, turn on one of the scripts: mine from Sex-Machine, FuckingReach, BodyLanguage, or Timeline recordings. Turn on training mode or auto-training mode, show it how it should move, and keep it in that mode for no less than 30–60 minutes. Watch that the pose doesn't drift, and put everything back if it does; otherwise you will train it wrong.

Also don't forget that if you teach it new poses, you need to mix the old ones back in too; otherwise it will forget them. Unfortunately, this is a problem with all RL neural networks, which is why real training uses large datasets. But that's not what we're here for. We just want to play and see what comes out of it. :)

Dimensions
Observation vector size: 265
Action vector size: 138
Action format:
23 raw controllers
6 channels per controller
Channel order: forceX, forceY, forceZ, torqueX, torqueY, torqueZ

Raw-drive controllers

The order of controllers in action vector:

hipControl
pelvisControl
abdomenControl
abdomen2Control
chestControl
lThighControl
rThighControl
lKneeControl
rKneeControl
lFootControl
rFootControl
neckControl
headControl
lShoulderControl
rShoulderControl
lArmControl
rArmControl
lHandControl
rHandControl
lElbowControl
rElbowControl
lToeControl
rToeControl
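Given the channel order and controller list above, the 138-value action vector is indexed as controller_index × 6 + channel_index. A minimal sketch (the controller list and channel order come from this page; the helper function name is my own):

```python
# Map a flat 138-value action vector to (controller, channel) pairs.
# 23 raw controllers x 6 channels (forceX..torqueZ) = 138 values.
CONTROLLERS = [
    "hipControl", "pelvisControl", "abdomenControl", "abdomen2Control",
    "chestControl", "lThighControl", "rThighControl", "lKneeControl",
    "rKneeControl", "lFootControl", "rFootControl", "neckControl",
    "headControl", "lShoulderControl", "rShoulderControl", "lArmControl",
    "rArmControl", "lHandControl", "rHandControl", "lElbowControl",
    "rElbowControl", "lToeControl", "rToeControl",
]
CHANNELS = ["forceX", "forceY", "forceZ", "torqueX", "torqueY", "torqueZ"]

def action_index(controller: str, channel: str) -> int:
    """Flat index of one channel of one controller in the action vector."""
    return CONTROLLERS.index(controller) * len(CHANNELS) + CHANNELS.index(channel)

assert len(CONTROLLERS) * len(CHANNELS) == 138
print(action_index("hipControl", "forceX"))     # 0
print(action_index("pelvisControl", "torqueZ")) # 11
```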

Frequencies
Decision step interval: 0.05 s
Decision rate: 20 Hz
Learn interval: 0.10 s
Learn rate: 10 Hz
RL learn starts when replay buffer count >= 32

Actor architecture

Type:

Fully connected MLP
Hidden activation: tanh
Output activation: tanh

Layers:

Input: 265
Hidden 1: 128
Hidden 2: 128
Hidden 3: 96
Output: 138

Matrices and bias:

w1: 128 × 265
b1: 128
w2: 128 × 128
b2: 128
w3: 96 × 128
b3: 96
w4: 138 × 96
b4: 138
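The actor above is a plain tanh MLP, 265 → 128 → 128 → 96 → 138, with tanh on the output as well. A pure-Python sketch of the forward pass under those dimensions (the random weight initialization here is only illustrative, not the plugin's):

```python
import math, random

random.seed(0)

def layer(n_out, n_in):
    """Illustrative random weight matrix (n_out x n_in) plus zero bias."""
    return ([[random.uniform(-0.05, 0.05) for _ in range(n_in)] for _ in range(n_out)],
            [0.0] * n_out)

def forward(x, layers):
    """tanh MLP forward pass; every layer, including the output, uses tanh."""
    for w, b in layers:
        x = [math.tanh(sum(wi * xi for wi, xi in zip(row, x)) + bi)
             for row, bi in zip(w, b)]
    return x

# Actor dimensions from this page: 265 -> 128 -> 128 -> 96 -> 138
actor = [layer(128, 265), layer(128, 128), layer(96, 128), layer(138, 96)]
obs = [0.0] * 265          # placeholder observation vector
action = forward(obs, actor)
print(len(action))                              # 138
print(all(-1.0 <= a <= 1.0 for a in action))    # True (tanh output)
```

Because the output activation is tanh, every action channel is bounded in [-1, 1] and then scaled into forces/torques by the physics layer.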

Critic Architecture

Number:

2 online critics
2 target critics

Type:

Fully connected MLP
Hidden activation: tanh
Output: linear scalar Q

Critic input:

state + action
Input size: 265 + 138 = 403

Layers of a single critic:

Input: 403
Hidden 1: 160
Hidden 2: 128
Hidden 3: 96
Output: 1

Matrices and bias of a single critic:

w1: 160 × 403
b1: 160
w2: 128 × 160
b2: 128
w3: 96 × 128
b3: 96
w_out: 96
b_out: 1

Parameters of a single Critic: 97,729

Parameters of two online critics: 195,458

Parameters of two target critics: 195,458
Dream model architecture

Type:

Fully connected prediction model next_state + reward
Hidden activation: tanh
State output activation: tanh, then clamp [-5, 5]
Reward output: linear, then clamp [-10, 10]

Input:

state + action
Input size: 403

Layers:

Input: 403
Hidden: 96
State head output: 265
Reward head output: 1

Matrices and bias:

w1: 96 × 403
b1: 96
w_state: 265 × 96
b_state: 265
w_reward: 96
b_reward: 1

Dream model parameters:

64,586
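The dream model is a one-hidden-layer predictor of (next_state, reward) from (state, action). A sketch of just the two output heads and the clamping rules stated above (the weights and tiny sizes here are placeholders so the example runs; real sizes are hidden 96, state 265):

```python
import math

def clamp(v, lo, hi):
    return max(lo, min(hi, v))

def dream_heads(hidden, w_state, b_state, w_reward, b_reward):
    """Apply the dream model's two output heads.

    State head:  linear -> tanh -> clamp to [-5, 5] (as specified on this page).
    Reward head: linear -> clamp to [-10, 10].
    """
    next_state = [
        clamp(math.tanh(sum(w * h for w, h in zip(row, hidden)) + b), -5.0, 5.0)
        for row, b in zip(w_state, b_state)
    ]
    reward = clamp(sum(w * h for w, h in zip(w_reward, hidden)) + b_reward,
                   -10.0, 10.0)
    return next_state, reward

hidden = [0.5, -0.2]                              # placeholder hidden activations
w_state = [[1.0, 0.0], [0.0, 1.0], [3.0, 3.0]]    # placeholder 3x2 state head
b_state = [0.0, 0.0, 0.0]
w_reward = [100.0, 0.0]                           # large weight to show clamping
b_reward = 0.0
state, r = dream_heads(hidden, w_state, b_state, w_reward, b_reward)
print(r)  # 10.0 (clamped down from 50.0)
```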
Total parameter size

Trainable online:

actor: 76,330
critics: 195,458
dream model: 64,586
Total online: 336,374 parameters

Total size with target critics:

531,832 parameters

Memory size for float32 weights:

online weights: 1,345,496 bytes ≈ 1.283 MiB
full weights with targets: 2,127,328 bytes ≈ 2.029 MiB
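The parameter and memory figures above can be reproduced directly from the layer shapes (float32 = 4 bytes per weight):

```python
# Recompute the parameter counts quoted above from the layer shapes.
def mlp_params(sizes):
    """Dense-layer parameters: sum of (out * in + out) over consecutive sizes."""
    return sum(o * i + o for i, o in zip(sizes, sizes[1:]))

actor = mlp_params([265, 128, 128, 96, 138])           # 76,330
critic = mlp_params([403, 160, 128, 96, 1])            # 97,729 per critic
dream = (96 * 403 + 96) + (265 * 96 + 265) + (96 + 1)  # 64,586

online = actor + 2 * critic + dream                    # 336,374
total = online + 2 * critic                            # 531,832 with targets

print(actor, critic, dream)   # 76330 97729 64586
print(online, total)          # 336374 531832
print(online * 4, total * 4)  # 1345496 2127328 bytes as float32
```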
Normalization
RunningNormalizer
Size: 265
Stores:
mean[265]
m2[265]
count
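Storing mean, m2, and count per dimension is the classic Welford online-variance scheme. A single-dimension sketch of how such a normalizer is typically updated (my reconstruction of the usual algorithm, not the plugin's exact code; the epsilon is an assumed safeguard):

```python
import math

class RunningNormalizer:
    """Welford-style running mean/variance, one instance per observation dim."""
    def __init__(self):
        self.mean = 0.0
        self.m2 = 0.0    # sum of squared deviations from the running mean
        self.count = 0

    def update(self, x: float):
        self.count += 1
        delta = x - self.mean
        self.mean += delta / self.count
        self.m2 += delta * (x - self.mean)

    def normalize(self, x: float) -> float:
        var = self.m2 / self.count if self.count > 1 else 1.0
        return (x - self.mean) / math.sqrt(var + 1e-8)

n = RunningNormalizer()
for v in [1.0, 2.0, 3.0, 4.0]:
    n.update(v)
print(n.mean)  # 2.5
```

This keeps observations roughly zero-mean and unit-variance for the network without having to store the whole history.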
Replay buffer

Type:

Ring buffer (RingBuffer)

Transition:

State[265]
Action[138]
Reward
NextState[265]
Done

Settings:

Default capacity: 1000
UI range: 500 .. 20000
batch size: min(32, replayBuffer.Count)
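The replay buffer is a fixed-capacity ring: once full, new transitions overwrite the oldest, and batches are sampled uniformly with size min(32, count). A minimal sketch (class and field names are my own; the transition layout follows the list above):

```python
import random

class RingReplayBuffer:
    """Fixed-capacity ring buffer of transitions with uniform sampling."""
    def __init__(self, capacity=1000):
        self.capacity = capacity
        self.data = []
        self.pos = 0  # next slot to overwrite once the buffer is full

    def add(self, state, action, reward, next_state, done):
        item = (state, action, reward, next_state, done)
        if len(self.data) < self.capacity:
            self.data.append(item)
        else:
            self.data[self.pos] = item  # overwrite the oldest transition
        self.pos = (self.pos + 1) % self.capacity

    def sample(self):
        batch_size = min(32, len(self.data))  # matches the batch rule above
        return random.sample(self.data, batch_size)

buf = RingReplayBuffer(capacity=1000)
for _ in range(40):
    buf.add([0.0] * 265, [0.0] * 138, 0.0, [0.0] * 265, False)
print(len(buf.sample()))  # 32
```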
Training

Training phases:

Dream model
Critics
Actor + soft update

Order:

Dream model update
Critic A / Critic B update
Actor update via UpdateWithActionGradient
Soft update target critics
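The "soft update target critics" step is the usual Polyak averaging from SAC: target weights drift slowly toward the online weights instead of being copied outright. A sketch (the tau value shown is a common SAC default, not taken from the plugin):

```python
def soft_update(target, online, tau=0.005):
    """Polyak averaging: target <- (1 - tau) * target + tau * online."""
    return [(1.0 - tau) * t + tau * o for t, o in zip(target, online)]

# Exaggerated tau = 0.5 so the effect is visible in one step:
target_w = [0.0, 0.0]
online_w = [1.0, 2.0]
target_w = soft_update(target_w, online_w, tau=0.5)
print(target_w)  # [0.5, 1.0]
```

Keeping targets slow-moving stabilizes the critic's bootstrapped Q-targets, which is why both target critics get this treatment every learn step.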

Save format

The snapshot saves:

actor
qCriticA
qCriticB
targetQCriticA
targetQCriticB
dreamWorldModel
stateNormalizer
replay buffer
memory examples
episode state
previous observation
previous action
learn phase

General layout


This build is divided into 6 working plugins.


UniversalFullBodyBrain is the main brain. It collects observations, calculates actions, applies RL and coach logic, calculates reward, trains the network, manages snapshots, body memory, and arousal. It is exactly the thing that ties together physics, perception, hole geometry, DynamicLimbs, and the behavior layer.


AIPhysicsController is the movement executor. It keeps the list of controlled controllers, takes force and torque commands, turns drives on and off, watches anti-snap, and handles a temporary hip-cycle during a sharp pose change.


AdvancedAIPerceptionSystem is the sensory system. It collects physical data from the body, controller speeds and rotations, penis data, proximity and penetration for holes, and builds a state from all of that for the brain.


A_Triggers.cs / OrificeTriggerManager is the geometry and trigger system for the holes. It is responsible for the vagina, anus, mouth, proximity, depth, synthetic mouth, and serves as the source of “geometric truth” for the AI.


DynamicLimbs is an auxiliary body system. It does raycast and vision, hand colliders, wrap and grasp finger morphs, and a number of auxiliary body sensors and visualizations.


AIBehaviorSystem is a light layer of style and local behavior. This is not the main RL brain, but rather an auxiliary interface for style and intensity, and a universal input point for behavior commands.

UniversalFullBodyBrain (the main brain) and AIPhysicsController (the movement executor)

AIPhysicsController


The AIPhysicsController is only needed to lock controllers during training. If you release all controllers, your character's body will simply float around the scene during training, because the neural network doesn't yet know anything about the world or how to behave in it.
I also tried to implement a way to prevent the character from jerking when switching poses, but I haven't implemented it sufficiently yet, and the character may still jerk when switching between similar poses.

For training, I recommend locking the hip, arm, and leg controllers, as well as the knees and elbows.
The hip needs to stay locked because during example-based training you control it yourself. If you enable it, the controller goes into kinematic mode, where it is in free flight and your scripts can't control it. The same applies to the other controllers. You can disable more controllers during training, but "hip" is the minimum that should stay disabled if you're training from your own examples.

To review: enabling a controller puts it into kinematic mode and lets the neural network move it fully. When it's disabled, the neural network can't control it.

Please do not touch or enable the "Follow server pose" button.



It's needed to connect to my more powerful PC server and is used for a different set of scripts. You don't need this button, so don't touch it. Clicking it will simply send your character into free flight across the scene.

UniversalFullBodyBrain



1.
The "Sex" button turns on the neural network itself. The other buttons control its operating modes.

2.
To switch to training mode, use the "Training" or "Self-training" buttons.

Training—the neural network simply learns from your examples but doesn't attempt to perform any actions on its own.

Self-training—the neural network learns from your examples but also attempts to perform the correct actions on its own. I prefer this mode.

In Training or Self-training mode, your computer may freeze and lag, unlike in Free mode, where the neural network operates on its own without training. This is normal. Training a neural network requires significant computational effort, and this depends on the power of your PC.

3.
Assist -

4.
Auto-coach is the teacher button that regulates the Assist Strength during training. The coach itself helps the neural network learn how to move properly and corrects its movements. But if the coach’s influence is too strong during training, then once you switch on freedom mode, the neural network will have a hard time navigating on its own without the help of the external coach/teacher. It’s better to keep it below 50%, and ideally around 20%. Yes, the movements in self-learning mode will be less accurate, but this allows the neural network to learn more from its own mistakes and rely less on the coach’s influence.

5.
Auto-sliders

6.
Auto arousal - simply enables the arousal slider during training. This feature isn't yet complete. You can simply use it to trigger events if needed.

7.
Freedom (Sex + Freedom) is the combat mode of the neural network. In this mode it acts completely on its own, without anyone's help, and does not learn. This mode shows how well you have trained your neural network to operate in a given pose.
In this mode, don't forget to fully release the hips in the AIPhysicsController script, as well as the controllers you trained, so the neural network can actually use them.

8.



I won't focus on the arousal slider and buttons block, as it doesn't affect the learning process or the neural network's operation. This block is designed for playing in Free mode, so that triggers are activated when arousal reaches 100%, is reset, or a pump trigger occurs during arousal.




