Hello everyone, friends.
Has anyone tried writing their own AI for Virt A Mate? To me, its absence seemed like a major oversight. I admire the work of Mr. CheesyFX, and I wanted to make something similar and even better. I spent a long time automating actions for characters, but I reached a point where I got tired of it, and I thought about making an AI that I could train to do what I need. Yes, the process is not quick, and I'm not a very strong programmer, because you need to know a lot of different fields: just take TCP/IP, neural networks, JSON protocols, and so on.
Right now I have the scripts, the hands, and the eyes for my AI working, but unfortunately the AI server itself (the agent) is written as a monolith. I started decomposing it, but it was taking too much of my time, and I had to abandon that effort, so for now it remains as is. When I have time, I will break the code into separate modules with `__init__` files.
To everyone who wants to participate in this or build something of their own based on this, here are all the source codes for you.
Training the model takes from 3 to 30 days of continuous real time in the game. This is just my rough estimate, and I might be wrong about it, as well as about the AI training logic I have already built.
I know you can use Google's servers for AI training, but I haven't gotten that far yet. I work alone, and I spend a lot of time developing, debugging, and troubleshooting what I have already made, although neural networks help me write code. Sometimes I don't sleep for a night or two; you can imagine my condition. On top of that, I have a day job.
You can do whatever you want with this code and use it wherever you want in commercial and free products.
I waive all responsibility for how you will use this code and AI. I created it for VAM so that people could play with AI for free.
I cannot post this project as .var packages because the AI server is written in Python.
I just want to be useful and for Virt A Mate and its community to develop.
The comments in my code are written in my native language. If you want, feed the code to any text-based neural network and it will translate them for you. Just be careful that it doesn't corrupt the code while rewriting it.
My server has two operating modes:
1. Sex mode: the agent (the neural network) tries to learn how to have sex correctly, find the right orifice, position, and so on.
2. Walking mode: when the agent ends up more than 1.5 meters away, it switches to walking and tries to stand on its feet.
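The switch between the two modes can be sketched as a simple distance check. The 1.5-meter threshold comes from the description above; the function name and the 3-D position tuples are just illustrative, not the actual server code:

```python
import math

WALK_THRESHOLD_M = 1.5  # beyond this distance the agent switches to walking mode

def select_mode(agent_pos, target_pos):
    """Pick the active controller from the agent-to-target distance."""
    distance = math.dist(agent_pos, target_pos)  # Euclidean distance (Python 3.8+)
    return "walking" if distance > WALK_THRESHOLD_M else "sex"
```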
Right now, the neural network is in the training process, and you can see in the video how it tries to stand up.
I wrote the VAM scripts partly by drawing on the experience of others, taken from my favorite creator, CheesyFX of BodyLanguage. I also took the ready-made DynamicLimbs script from Mr. Skynet and modified it a bit for my needs.
My code in the scripts and on the server is not perfect. I am doing this alone. Therefore, something might not work or not display, as I am constantly reworking, finishing, and adding new features.
The video uses simple graphics because they are not needed during training, and because of OBS the clips lag a bit.
And to be honest, I don't know what will come out of all this. I hope it works out. I can't attach the saved neural network weights file to the post because it's not possible here. You will have to start from scratch.
Technical Data
VAM AI Physics Server - Neural Network Motion Controller
System Architecture
Main AI Engine: Hybrid PPO (Proximal Policy Optimization) with a dual-component architecture
Neural Network Specification
Model Type: Actor-Critic PPO with continuous action space
Input Data: 150 parameters (body positions, velocities, tactile data, emotional states)
Output Actions: 150 physical influences (forces and torques)
Network Architecture: Deep neural network with hidden layers of 512, 256, and 128 neurons
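As a rough illustration of that layout, here is a minimal numpy forward pass for an actor-critic network with the stated sizes: a shared 512-256-128 trunk, a 150-dimensional continuous action head, and a scalar value head. The weights are random placeholders; the real network would be trained with PPO updates:

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(n_in, n_out):
    # Small random weights stand in for the trained parameters.
    return rng.normal(0.0, 0.1, (n_in, n_out)), np.zeros(n_out)

trunk = [layer(150, 512), layer(512, 256), layer(256, 128)]  # shared hidden layers
actor = layer(128, 150)   # mean of the continuous (Gaussian) action distribution
critic = layer(128, 1)    # state-value estimate used for the PPO advantage

def forward(state):
    h = state
    for w, b in trunk:
        h = np.tanh(h @ w + b)                        # hidden activations
    action_mean = np.tanh(h @ actor[0] + actor[1])    # actions bounded to [-1, 1]
    value = h @ critic[0] + critic[1]
    return action_mean, value
```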
Dual-Component System
Primary Network - Intimate Interaction Controller
Purpose: Managing sexual interactions
Specialization: Learning optimal movement patterns, rhythms
Training: Based on penetration depth, alignment, and pleasure metrics
Secondary Network - Locomotion Controller
Purpose: Autonomous navigation and balance
Specialization: Bipedal walking, obstacle avoidance, approaching targets
Training: Based on distance reduction and balance maintenance
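A locomotion reward of that shape might look like the sketch below; the weights are invented for illustration and are not the ones the server actually uses:

```python
def locomotion_reward(prev_dist, curr_dist, uprightness, fell_over):
    """Reward distance reduction and balance maintenance, as described above.

    uprightness: 0..1, how vertical the torso is; all weights are illustrative.
    """
    reward = 10.0 * (prev_dist - curr_dist)  # progress toward the target
    reward += 0.1 * uprightness              # small bonus for staying balanced
    if fell_over:
        reward -= 5.0                        # penalty for falling
    return reward
```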
State Representation
Body Kinematics: 84 parameters (positions/velocities of 14 controllers)
Tactile Perception: 8 parameters (hand contacts, finger pressure)
Emotional Context: 6 parameters (arousal, pleasure, confidence)
Goal Awareness: 12 parameters (distances to orifices, alignment angles)
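Assembling those groups into one observation vector could look like the sketch below. Note that the four groups listed sum to 110 values, so the remaining input dimensions presumably come from other sensors; the function only concatenates what is listed:

```python
import numpy as np

def build_state(kinematics, tactile, emotions, goals):
    """Concatenate the observation groups above into a single state vector.

    kinematics: 84 values, tactile: 8, emotions: 6, goals: 12.
    """
    return np.concatenate([kinematics, tactile, emotions, goals])
```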
Learning Mechanism
Reinforcement Learning: Continuous policy improvement
Experience Replay: Memory buffer of 100,000+ state-action pairs
Adaptive Exploration: Dynamic noise addition for discovering new behaviors
Multi-Task Rewards: Balanced optimization of multiple objectives
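The experience buffer can be sketched with a bounded deque; the capacity matches the 100,000 figure above, everything else is illustrative. (Classic PPO is on-policy, so treat this as the post describes it, a transition store, rather than canonical PPO machinery.)

```python
import random
from collections import deque

class ExperienceBuffer:
    """Fixed-size store of (state, action, reward, next_state) transitions."""

    def __init__(self, capacity=100_000):
        self.items = deque(maxlen=capacity)  # oldest transitions drop off automatically

    def add(self, state, action, reward, next_state):
        self.items.append((state, action, reward, next_state))

    def sample(self, batch_size):
        # Uniform random minibatch for a training update.
        return random.sample(list(self.items), min(batch_size, len(self.items)))
```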
Physics Integration
Force Calculation: Neural network outputs directly control Rigidbody physics
Torque Optimization: Learned rotational forces for natural movements
Constraint Handling: Built-in physical limits and safety boundaries
Real-Time Adaptation: 20ms response time for smooth interactions
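Constraint handling at that boundary can be as simple as clamping every output before it is applied; the limit value here is a placeholder, not the server's actual bound:

```python
MAX_MAGNITUDE = 500.0  # illustrative safety bound on force/torque outputs

def clamp_outputs(raw_outputs, limit=MAX_MAGNITUDE):
    """Constrain raw network outputs before they drive the Rigidbody physics."""
    return [max(-limit, min(limit, x)) for x in raw_outputs]
```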
Behavior Modes
Automatic Orifice Selection: Intelligent target switching based on performance
Style Modulation: 6 different movement styles (from gentle to aggressive)
Emotional Response: Dynamic intensity adjustment based on simulated arousal
Obstacle-Aware Navigation: Tactile collision avoidance during locomotion
This is a physics-based character animation system using deep reinforcement learning - transforming raw sensory data into plausible human-like movements.
Scripts:
AISocketCommunicator.cs
Network Connection with AI Server - responsible for connecting to the external AI server, sending character state data, and receiving control commands. The main communication module.
DynamicLimbs.cs
Dynamic Limbs and Tactile Sensations - manages the physics of arms, legs, and fingers, automatically bends fingers upon contact with objects, provides tactile data for the AI.
A_Triggers.cs (OrificeTriggerManager)
Body Orifice Triggers - tracks penetration and proximity of the penis to the vagina, anus, and mouth. Visualizes trigger zones and collects data for the AI.
AIBehaviorSystem.cs
AI Behavior System - processes commands from the AI, manages the selection of orifices, movement styles, and action intensity. Coordinates the work of other systems.
AIDataManager.cs
Data and Settings Management - saves and loads configurations for all AI systems, ensures settings persistence between sessions.
AIEmotionSystem.cs
Emotion System - simulates character emotions (arousal, pleasure, confidence), influences movement intensity and style based on emotional state.
AILearningBrain.cs
AI Learning Brain - records interaction experience, provides fail-safe operation in case of lost connection with the server, generates fallback actions.
AIPerceptionSystem.cs
Perception System - collects comprehensive data about the environment: controller positions, orifice states, penis data, movement velocities. The main data source for the AI.
AIPhysicsController.cs
Physics Controller - applies physical forces and rotations to body controllers based on commands from the AI, implements physical movement of the character.
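The exchange between AISocketCommunicator.cs and the Python server could be framed as newline-delimited JSON over TCP; the field names below are hypothetical stand-ins, since the actual message layout lives in AISocketCommunicator.cs and the server file:

```python
import json

def encode_state(positions, tactile):
    """Serialize one character-state message (hypothetical field names)."""
    msg = {"type": "state", "positions": positions, "tactile": tactile}
    return (json.dumps(msg) + "\n").encode("utf-8")  # newline-delimited framing

def decode_command(line):
    """Parse one control command from the server (hypothetical field name)."""
    return json.loads(line)["forces"]
```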
Overall Goal: Creating an intelligent AI character that can autonomously interact in intimate scenes, adapting to the situation and demonstrating natural behavior.
For the server file (ServerVAM.txt), change the extension to .py and run it in a Python IDE.
And I hope that no one minds that I used your experience and some of your scripts.
Attachments
- A_Triggers.cs (50.2 KB)
- DynamicLimbs.cs (40 KB)
- AIPerceptionSystem.cs (32.5 KB)
- AILearningBrain.cs (14.9 KB)
- AIEmotionSystem.cs (17.3 KB)
- AIDataManager.cs (14.9 KB)
- AIBehaviorSystem.cs (18 KB)
- AIPhysicsController.cs (12.5 KB)
- AISocketCommunicator.cs (41.2 KB)
- ServerVAM.txt (99.4 KB)