
Free AI Server for VAM

Dlesser
Member · Joined Mar 28, 2025
Hello everyone, friends.

Has anyone tried writing their own AI for Virt A Mate? Its absence seemed like a major oversight to me. I admire the work of Mr. CheesyFX, and I wanted to make something similar, even better. I spent a long time automating actions for characters, but I reached a point where I got tired of it, and I thought about making an AI that I could train to do what I need. Yes, the process is not quick, and I'm not a very good programmer, because you need to know a lot of different fields: just the work with TCP/IP, neural networks, JSON protocols, and so on.

Right now, I have made the scripts, the hands, and the eyes for my AI, but unfortunately the AI server itself (the agent) is written as a monolith. I started taking it apart and decomposing it, but it was taking too much of my time, so I had to set that aside. For now it remains as is; when I have time, I will decompose the code and break it into separate modules with `__init__` files.

To everyone who wants to participate in this or build something of their own based on this, here are all the source codes for you.

To train the model, you need from 3 to 30 days of continuous real-time in the game. This is just my rough estimate, and I might be wrong, both about the time and about the AI training logic I have already built.

I know that you can use Google's servers for AI training, but I haven't gotten to that point yet. I am alone, and I spend a lot of time developing what I have already made, debugging and troubleshooting, although neural networks help me with writing code. Sometimes I don't sleep for a night or two. You can imagine my condition. At the same time, I have to work at my job.

You can do whatever you want with this code and use it wherever you want in commercial and free products.
I waive all responsibility for how you will use this code and AI. I created it for VAM so that people could play with AI for free.

I cannot post this project as .var packages because the AI server is written in Python.

I just want to be useful and for Virt A Mate and its community to develop.

The comments in my code are written in my native language. If you want, throw the code into any text-based neural network, and it will translate it for you. Just be careful that the neural network doesn't corrupt the code while rewriting it for you.

My server has two operating modes:
1. Sex mode: the agent (the neural network) tries to learn how to have sex correctly, finding the right hole, position, and so on.
2. Walking mode: when the agent is more than 1.5 meters away from its target, it switches to walking mode and tries to stand on its feet.
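The switch between the two modes can be sketched in a few lines of Python. The 1.5-meter threshold is from the post; the function name and return values are illustrative, not taken from the actual server code.

```python
# Hypothetical sketch of the two-mode switch described above.
# The 1.5 m threshold is the post's figure; everything else is illustrative.
WALK_DISTANCE_THRESHOLD = 1.5  # metres


def select_mode(distance_to_target: float) -> str:
    """Pick the controller mode from the agent's distance to its target."""
    if distance_to_target > WALK_DISTANCE_THRESHOLD:
        return "walking"  # locomotion network: approach the target, keep balance
    return "sex"          # interaction network: alignment, rhythm, depth
```

So `select_mode(2.0)` picks the locomotion controller, while `select_mode(0.4)` hands control back to the interaction network.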

Right now, the neural network is in the training process, and you can see in the video how it tries to stand up.

I wrote the VAM scripts partly building on the experience of others, taken from my favorite creator, CheesyFX of BodyLanguage. I also took the ready-made DynamicLimbs script from Mr. Skynet and modified it a bit for myself.

My code in the scripts and on the server is not perfect. I am doing this alone. Therefore, something might not work or not display, as I am constantly reworking, finishing, and adding new features.

The video uses simple graphics because they are not needed during training, and because of OBS the video lags a bit in places.

And to be honest, I don't know what will come out of all this. I hope it works out. I can't attach the saved neural network weights file to the post because that's not possible here, so you will have to start training from scratch.

Technical Data

VAM AI Physics Server - Neural Network Motion Controller
🧠 System Architecture
Main AI Engine: Hybrid PPO (Proximal Policy Optimization) with a dual-component architecture
🎯 Neural Network Specification
Model Type: Actor-Critic PPO with continuous action space

Input Data: 150 parameters (body positions, velocities, tactile data, emotional states)

Output Actions: 150 physical influences (forces and torques)

Network Architecture: Deep neural network with 4 hidden layers (512-256-128 neurons)
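The stated shapes (150 inputs, hidden sizes 512-256-128, 150 continuous actions plus a scalar value) can be sketched as a plain NumPy forward pass. Note the post says "4 hidden layers" but lists three sizes; three are used here. All weights below are random placeholders, not the author's trained model.

```python
import numpy as np

# Illustrative actor-critic trunk matching the stated shapes.
# Random placeholder weights: this is NOT the author's trained network.
rng = np.random.default_rng(0)
SIZES = [150, 512, 256, 128]  # input + hidden sizes from the spec above

weights = [rng.standard_normal((a, b)) * 0.05 for a, b in zip(SIZES, SIZES[1:])]
actor_head = rng.standard_normal((128, 150)) * 0.05   # mean of the action Gaussian
critic_head = rng.standard_normal((128, 1)) * 0.05    # state-value estimate


def forward(state: np.ndarray):
    h = state
    for w in weights:
        h = np.tanh(h @ w)                 # shared trunk
    action_mean = np.tanh(h @ actor_head)  # 150 bounded actions in [-1, 1]
    value = float(h @ critic_head)         # critic output used for the PPO advantage
    return action_mean, value


mean, value = forward(np.zeros(150))
print(mean.shape)  # (150,)
```

In PPO the actor proposes actions while the critic's value estimate is used to compute the advantage that weights policy updates; both share the trunk here, which is one common layout but an assumption about this project.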

🤖 Dual-Component System
Primary Network - Intimate Interaction Controller

Purpose: Managing sexual interactions

Specialization: Learning optimal movement patterns, rhythms

Training: Based on penetration depth, alignment, and pleasure metrics

Secondary Network - Locomotion Controller

Purpose: Autonomous navigation and balance

Specialization: Bipedal walking, obstacle avoidance, approaching targets

Training: Based on distance reduction and balance maintenance

🧩 State Representation

Body Kinematics: 84 parameters (positions/velocities of 14 controllers)

Tactile Perception: 8 parameters (hand contacts, finger pressure)

Emotional Context: 6 parameters (arousal, pleasure, confidence)

Goal Awareness: 12 parameters (distances to orifices, alignment angles)
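Assembling the observation from the groups listed above is just a concatenation. The group sizes are the post's figures; note they sum to 110, so the remaining inputs (the spec states 150 total) presumably come from features not itemized here. The variable names are illustrative.

```python
import numpy as np

# Illustrative state-vector assembly from the listed groups.
# Sizes are from the post; zeros stand in for real sensor readings.
kinematics = np.zeros(84)  # positions/velocities of 14 controllers
tactile = np.zeros(8)      # hand contacts, finger pressure
emotions = np.zeros(6)     # arousal, pleasure, confidence
goals = np.zeros(12)       # distances to orifices, alignment angles

state = np.concatenate([kinematics, tactile, emotions, goals])
print(state.shape)  # (110,) -- the spec's 150 implies further, unlisted features
```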

🔄 Learning Mechanism

Reinforcement Learning: Continuous policy improvement

Experience Replay: Memory buffer of 100,000+ state-action pairs

Adaptive Exploration: Dynamic noise addition for discovering new behaviors

Multi-Task Rewards: Balanced optimization of multiple objectives
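The 100,000-entry experience buffer mentioned above can be sketched with a bounded deque. (Classic PPO is on-policy and usually discards trajectories after each update; a persistent buffer like this fits the hybrid scheme the author describes, but the field layout here is an assumption.)

```python
import random
from collections import deque

# Minimal sketch of the described experience buffer; entries beyond the
# capacity are dropped oldest-first by deque's maxlen.
BUFFER_CAPACITY = 100_000

buffer = deque(maxlen=BUFFER_CAPACITY)


def store(state, action, reward, next_state):
    """Record one state-action transition (field layout is illustrative)."""
    buffer.append((state, action, reward, next_state))


def sample(batch_size: int):
    """Draw a random mini-batch for a learning update."""
    return random.sample(buffer, min(batch_size, len(buffer)))


for step in range(5):
    store([0.0] * 110, [0.0] * 150, 0.1 * step, [0.0] * 110)

batch = sample(3)
print(len(batch))  # 3
```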

⚡ Physics Integration

Force Calculation: Neural network outputs directly control Rigidbody physics

Torque Optimization: Learned rotational forces for natural movements

Constraint Handling: Built-in physical limits and safety boundaries

Real-Time Adaptation: 20ms response time for smooth interactions
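A server-side safety clamp is one way to realize the "constraint handling" item before commands reach the Unity Rigidbody side. The limits and the alternating force/torque channel layout below are assumptions for illustration; the real mapping lives in AIPhysicsController.cs.

```python
# Illustrative safety clamp: raw network outputs in [-1, 1] are scaled and
# clipped before being sent to the plugin. Limits are placeholders.
MAX_FORCE = 500.0   # assumed per-channel force limit (N)
MAX_TORQUE = 50.0   # assumed per-channel torque limit (N*m)


def clamp(value: float, limit: float) -> float:
    return max(-limit, min(limit, value))


def to_physics(raw_actions):
    """Map raw outputs to clamped commands.

    Assumes even-indexed channels are forces and odd-indexed are torques;
    the actual channel layout is defined in AIPhysicsController.cs.
    """
    out = []
    for i, a in enumerate(raw_actions):
        limit = MAX_FORCE if i % 2 == 0 else MAX_TORQUE
        out.append(clamp(a * limit, limit))
    return out


print(to_physics([1.5, -2.0, 0.5]))  # [500.0, -50.0, 250.0]
```

Clamping on the server keeps a badly-behaved policy from ever emitting forces the physics side would have to reject.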

🎮 Behavior Modes

Automatic Orifice Selection: Intelligent target switching based on performance

Style Modulation: 6 different movement styles (from gentle to aggressive)

Emotional Response: Dynamic intensity adjustment based on simulated arousal

Obstacle-Aware Navigation: Tactile collision avoidance during locomotion

This is a physics-based character animation system using deep reinforcement learning - transforming raw sensory data into plausible human-like movements.

Scripts:

AISocketCommunicator.cs
Network Connection with AI Server - responsible for connecting to the external AI server, sending character state data, and receiving control commands. The main communication module.
DynamicLimbs.cs
Dynamic Limbs and Tactile Sensations - manages the physics of arms, legs, and fingers, automatically bends fingers upon contact with objects, provides tactile data for the AI.
A_Triggers.cs (OrificeTriggerManager)
Body Orifice Triggers - tracks penetration and proximity of the penis to the vagina, anus, and mouth. Visualizes trigger zones and collects data for the AI.
AIBehaviorSystem.cs
AI Behavior System - processes commands from the AI, manages the selection of orifices, movement styles, and action intensity. Coordinates the work of other systems.
AIDataManager.cs
Data and Settings Management - saves and loads configurations for all AI systems, ensures settings persistence between sessions.
AIEmotionSystem.cs
Emotion System - simulates character emotions (arousal, pleasure, confidence), influences movement intensity and style based on emotional state.
AILearningBrain.cs
AI Learning Brain - records interaction experience, provides fail-safe operation in case of lost connection with the server, generates fallback actions.
AIPerceptionSystem.cs
Perception System - collects comprehensive data about the environment: controller positions, orifice states, penis data, movement velocities. The main data source for the AI.
AIPhysicsController.cs
Physics Controller - applies physical forces and rotations to body controllers based on commands from the AI, implements physical movement of the character.
Overall Goal: Creating an intelligent AI character that can autonomously interact in intimate scenes, adapting to the situation and demonstrating natural behavior.
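The exchange between AISocketCommunicator.cs and the Python server can be pictured as framed JSON messages over TCP. The newline-delimited framing and the field names below are assumptions for illustration; the actual protocol is defined in AISocketCommunicator.cs and the server file.

```python
import json

# Hypothetical framing helpers for the state/command exchange over TCP.
# Newline-delimited JSON is assumed; the real protocol may differ.
def encode_message(payload: dict) -> bytes:
    """Frame one JSON message, newline-terminated, for a TCP stream."""
    return (json.dumps(payload) + "\n").encode("utf-8")


def decode_messages(stream_buffer: bytes):
    """Split a receive buffer into complete messages plus the unread tail."""
    *lines, tail = stream_buffer.split(b"\n")
    return [json.loads(line) for line in lines if line], tail


# TCP delivers a byte stream, so a read may end mid-message; the tail is
# kept and prepended to the next recv() in a real loop.
state = {"type": "state", "kinematics": [0.0] * 4, "tactile": [0, 1]}
raw = encode_message(state) + b'{"type": "act'  # second message still incomplete
messages, leftover = decode_messages(raw)
print(len(messages))  # 1 complete message, partial one stays in the buffer
```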

To run the server, change the extension of the server file to .py and open it in your Python IDE.

And I hope that no one minds that I used your experience and some of your scripts.


Attachments

  • A_Triggers.cs
    50.2 KB · Views: 0
  • DynamicLimbs.cs
    40 KB · Views: 0
  • AIPerceptionSystem.cs
    32.5 KB · Views: 0
  • AILearningBrain.cs
    14.9 KB · Views: 0
  • AIEmotionSystem.cs
    17.3 KB · Views: 0
  • AIDataManager.cs
    14.9 KB · Views: 0
  • AIBehaviorSystem.cs
    18 KB · Views: 0
  • AIPhysicsController.cs
    12.5 KB · Views: 0
  • AISocketCommunicator.cs
    41.2 KB · Views: 0
  • ServerVAM.txt
    99.4 KB · Views: 0
After about 6 hours of training, the avatar tries to stand up and jumps.
