BodyNet (Free)


Plugin Idea*

Adjust the UI in such a way that when the user clicks the "Train" button, they can specify beforehand how many runs they want it to train for.
Right now, sometimes I want it to stop after exactly 4 runs, because at the start of run 5 the results get worse again.
However, I could see the same use case when you only want it to do 1 run and want it to automatically stop the moment that run is done.
 
Fair enough hah!
My main usage though is: I have a couple of custom scenes/animations I created.
And now with your tool I can just take the model I used in my custom scenes and put her as the source.
And then just load any random other model as the target, run the training a couple of times, and it spits out an appearance with a loss of ~1.2 on average after 3-4 runs.

So far I've managed to "convert" about 5 random models (they range from petite to huge).
And upon loading each of these converted models into my custom scenes, they all align perfectly! (Even though, at the end of a training session, the model for example still had a lot of red dots and red lines.)
Hands, neck, mouth, all of it is in the correct position for the animations to not act weird.

Those 3-4 training runs take about ~5 minutes at a physics rate of 120. I say this because I don't mind waiting 5 minutes per model, heh; as long as the result is reliable, I don't see the issue in having to wait a bit longer.


There was another random mocap scene I have from, I believe, kitty or Zen. To successfully get a model loaded into that with proper alignment DID require having almost all of the dots/lines green during training.

Every time I tried to load a model into that scene that didn't have at least 90% of the dots/lines greenish, it would misalign completely during animation transitions.
The only conclusion I could draw from this is that mocap animations are a lot more finicky/require more precision when it comes to the training.
Yeah, there are more than 1100 weights being trained in the network, each taking ~5 iterations (i.e. physics updates) on average. What ultimately limits the training speed is that the loss function needs to wait for the physics update.

If you need to convert a lot of models to the same "source", it might be quicker overall to properly train a fresh network. After that, just hitting apply should be enough to match.
 
Plugin Idea*

Adjust the UI in such a way that when the user clicks the "Train" button, they can specify beforehand how many runs they want it to train for.
Right now, sometimes I want it to stop after exactly 4 runs, because at the start of run 5 the results get worse again.
However, I could see the same use case when you only want it to do 1 run and want it to automatically stop the moment that run is done.
Thanks for the suggestion. I'm not happy with that part of the UI/UX myself. I might redesign the training completely and split it into a) optimizing a single look and b) training properly for a full directory of morph presets. Right now, training tries to solve two different use cases that train the model in opposite directions.
 
Yeah, there are more than 1100 weights being trained in the network, each taking ~5 iterations (i.e. physics updates) on average. What ultimately limits the training speed is that the loss function needs to wait for the physics update.

If you need to convert a lot of models to the same "source", it might be quicker overall to properly train a fresh network. After that, just hitting apply should be enough to match.
So at the first step of training, it looks at the source you just loaded in and compares this to the "saved" training data it has on hand?

Currently I'm still a bit confused about the training process, since it seems to refer to multiple things at the same time.
The training data by default is a "morph training" that you have included?
But when I click the train button, it also retrains its existing default profile by looking at the new source material and then applies this "new training data" to the target?

Or are you saying that, in this case, I should:
Press the button "start from scratch"
Load my source model
Load my target model
Press train

Now it will create "new training data" based on the source model?

Whereas before, if I did NOT click the "start from scratch" button, the only training that would happen on the target was based on the original training data you had provided? Or does it still look at the source model I just loaded in?

I'm confused by the difference between the two; why would starting from scratch be better than using the pre-existing training file?

Because the newly created "start from scratch" training file will be solely focused on the custom source model's morphs, which should give better end results?
Or faster?

EDIT*
^ I think I answered my own question heh.
"start from scratch" Will make it so that the training will solely focus on the source you provided, granted you train the data a bunch of times which makes it so that you can just press "apply" with reliable results to your target

Whereas if you did NOT start from scratch, it will look at a LOT more data, redundant data, combined with the source appearance you put in, which results in less accurate targets if you were to press "apply".
HOWEVER, pressing "train" while you have the default training data loaded in will "retrain" the default training data?

Correct?
 
So at the first step of training, it looks at the source you just loaded in and compares this to the "saved" training data it has on hand?

Currently I'm still a bit confused about the training process, since it seems to refer to multiple things at the same time.
The training data by default is a "morph training" that you have included?
But when I click the train button, it also retrains its existing default profile by looking at the new source material and then applies this "new training data" to the target?

Or are you saying that, in this case, I should:
Press the button "start from scratch"
Load my source model
Load my target model
Press train

Now it will create "new training data" based on the source model?

Whereas before, if I did NOT click the "start from scratch" button, the only training that would happen on the target was based on the original training data you had provided? Or does it still look at the source model I just loaded in?

I'm confused by the difference between the two; why would starting from scratch be better than using the pre-existing training file?

Because the newly created "start from scratch" training file will be solely focused on the custom source model's morphs, which should give better end results?
Or faster?
Uh, now it's getting a bit complicated.
The plugin implements machine learning using a neural network with two layers, 30 inputs (a three-dimensional vector for each joint) and eight outputs (the morphs being adjusted). Every input has a weighted connection to every neuron in the next layer, and those neurons each have a weighted connection to every output. In total, that equals 1142 weights if I counted correctly. These weights are effectively the network's knowledge or training data.

A training run compares both skeletons and feeds the input vectors into the network. It then looks at each and every weight and nudges it ever so slightly up or down in order to reduce the loss. If you run multiple generations on the same input vectors, the network gets better at matching them but might at the same time get worse at everything else. There's a trade-off between a very specialized network and a versatile one. Also, if you want to create a specialized network (very good at matching exactly one thing), you will get the best results if you start with a dumb (or fresh) network. On the other hand, if you want to train a network to be applicable to pretty much any morph, you need to train it on a wide variety of diverse morphs.
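
As a rough illustration of the shape being described, here is a minimal Python sketch of a two-layer network with 30 inputs and 8 outputs, plus the "nudge each weight slightly and keep the change if the loss drops" style of training. The hidden-layer size, helper names and step size are assumptions for the example, not taken from the plugin's code.

```python
import numpy as np

# Illustrative sizes: 30 inputs (a 3D vector per joint), 8 output morphs.
# HIDDEN is an assumption; the plugin's actual hidden-layer size isn't stated.
N_IN, HIDDEN, N_OUT = 30, 30, 8
rng = np.random.default_rng(0)

def fresh_network():
    """'Start from scratch': every weight is (re)initialized with a random value."""
    return {
        "w1": rng.normal(scale=0.1, size=(N_IN, HIDDEN)),
        "b1": np.zeros(HIDDEN),
        "w2": rng.normal(scale=0.1, size=(HIDDEN, N_OUT)),
        "b2": np.zeros(N_OUT),
    }

def forward(net, joint_deltas):
    """Map the 30 joint-offset inputs to 8 morph adjustments."""
    hidden = np.tanh(joint_deltas @ net["w1"] + net["b1"])
    return hidden @ net["w2"] + net["b2"]

def nudge_all_weights(net, joint_deltas, loss_fn, step=1e-3):
    """One pass over every weight: try a small nudge and keep the direction
    that lowers the loss. In the plugin, evaluating loss_fn is the expensive
    part because it has to wait for a physics update each time."""
    for key in net:
        flat = net[key].ravel()          # view: edits write through to net[key]
        for i in range(flat.size):
            before = loss_fn(forward(net, joint_deltas))
            flat[i] += step
            if loss_fn(forward(net, joint_deltas)) > before:
                flat[i] -= 2 * step      # that direction was worse, try the other
    return net
```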
 
Uh, now it's getting a bit complicated.
The plugin implements machine learning using a neural network with two layers, 30 inputs (a three-dimensional vector for each joint) and eight outputs (the morphs being adjusted). Every input has a weighted connection to every neuron in the next layer, and those neurons each have a weighted connection to every output. In total, that equals 1142 weights if I counted correctly. These weights are effectively the network's knowledge or training data.

A training run compares both skeletons and feeds the input vectors into the network. It then looks at each and every weight and nudges it ever so slightly up or down in order to reduce the loss. If you run multiple generations on the same input vectors, the network gets better at matching them but might at the same time get worse at everything else. There's a trade-off between a very specialized network and a versatile one.
Then am I correct to assume that using the "start from scratch" function will basically make it so that the plugin "recalculates" all of these 1142 weights again, based on the new source model it has detected?

Whereas if you did not use the "start from scratch" function, it would skip this step and use its "default" state for all of these 1142 weights, which are technically based on a totally different source?

I'm trying to understand where in the process it takes the data from the source appearance you provide and works this into the training data.

Perhaps a better question is: what happens if I press "start from scratch" and then just press "apply" without any training?
Does it just fail to find/load any training data? Or is it similar to one pass of the training?
 
Then am I correct to assume that using the "start from scratch" function will basically make it so that the plugin "recalculates" all of these 1142 weights again, based on the new source model it has detected?

Whereas if you did not use the "start from scratch" function, it would skip this step and use its "default" state for all of these 1142 weights, which are technically based on a totally different source?

I'm trying to understand where in the process it takes the data from the source appearance you provide and works this into the training data.
The training data that comes with the plugin was trained on about 20 morph presets I had on hand, so there's a certain bias in it.

What the weights do is tell the network how to connect the inputs to the outputs. When you press "start from scratch", all weights are re-initialized with random values and the network has to learn anew how it needs to adjust which morph in order to reduce the error in which joint.

A well-trained network could "decide" how to change each morph by looking at what the difference between both skeletons is (qualitatively and quantitatively).
 
The training data that comes with the plugin was trained on about 20 morph presets I had on hand, so there's a certain bias in it.

What the weights do is tell the network how to connect the inputs to the outputs. When you press "start from scratch", all weights are re-initialized with random values and the network has to learn anew how it needs to adjust which morph in order to reduce the error in which joint.

A well-trained network could "decide" how to change each morph by looking at what the difference between both skeletons is (qualitatively and quantitatively).
When I "start from scratch" and it looks at the morphs, or rather, it specifies the morphs it needs, does it go through every single morph that's currently installed in my VaM or does it merely look at the morphs that are applied/active on the source appearance?
 
When I "start from scratch" and it looks at the morphs, or rather, it specifies the morphs it needs, does it go through every single morph that's currently installed in my VaM or does it merely look at the morphs that are applied/active on the source appearance?
Neither; it adjusts eight specific morphs that correspond to the body proportions.
The difference between both skeletons isn't calculated by looking at morphs but rather at a number of Rigidbodies.
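
To illustrate, the loss can be thought of as something like the summed distance between corresponding rigidbodies of the two skeletons. This is a hypothetical sketch of the idea, not the plugin's actual code:

```python
import numpy as np

def skeleton_loss(source_rb_positions, target_rb_positions):
    """Hypothetical loss: total distance between corresponding rigidbodies
    (hip, knees, hands, head, ...) of the source and target skeletons.
    Note that the morphs themselves never enter the comparison."""
    diffs = np.asarray(source_rb_positions) - np.asarray(target_rb_positions)
    return float(np.linalg.norm(diffs, axis=1).sum())

# Example: three rigidbodies, each an (x, y, z) position.
print(skeleton_loss([[0, 0, 0], [0, 1, 0], [0, 2, 0]],
                    [[0, 0, 0], [0, 1.1, 0], [0, 2.3, 0]]))  # ~0.4
```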
 
Neither; it adjusts eight specific morphs that correspond to the body proportions.
The difference between both skeletons isn't calculated by looking at morphs but rather at a number of Rigidbodies.
But when I use the "start from scratch" function and then train it for a multitude of runs, the end result is only ~4 green dots and ~5 green lines.
Loss 1.6

Whereas if I grab your "default" training data and use the train button on it for just 2 or 3 runs, I end up with a result of ~8-10 green dots and ~9 green lines. Loss 1.4

^ So I'm better off saving the "retrained" default training data and then using that with "apply" for other target models, since this seemingly gives better results than starting from scratch.

Or am I doing something wrong when using the start from scratch function?

I assumed that using the start from scratch function would train the training data purely on the new source appearance I provide.
 
I think my confusion stems from the fact that the training button engages a multitude of functions.
It not only "retrains" the training data if you pressed the start from scratch button, but it also adjusts the target model's bones at each step.
Even when you don't press the start from scratch button, I still got the impression that the training data was being actively "retrained".
 
I think my confusion stems from the fact that the training button engages a multitude of functions.
It not only "retrains" the training data, but it also adjusts the target model's bones at each step.
It needs to do that in order to decide if it's changing the right thing.
  • Training the network is a function that aims to reduce "loss".
  • Loss is calculated by evaluating the difference between both skeletons.
  • In order to calculate the new loss after tweaking the network, it needs to actually apply changes to morphs, wait for VaM's physics to update and compare the skeletons again.
Think of the network as modelling clay that can fill the gaps between both skeletons, and of training as slowly forming the clay and testing if it fits better than before. You can also smack it on the table and roll it into a ball by pressing "start from scratch". :sneaky:

Where the metaphor falls on its face is that we're actually training our little worker how to form clay to fit between any two skeletons. I guess starting from scratch hits the poor worker over the head with a club.
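
Sticking with the list above, a minimal sketch of what one loss evaluation involves might look like the following. Every name here is a placeholder standing in for what the plugin does internally, not an actual plugin function:

```python
def evaluate_loss(predict_morph_changes, joint_deltas,
                  apply_morphs, wait_for_physics_update, measure_skeleton_difference):
    """Feed the current skeleton difference through the network, apply the
    suggested morph changes to the target, wait for VaM's physics to update,
    then measure how well the two skeletons match now."""
    morph_changes = predict_morph_changes(joint_deltas)  # network output: 8 morph values
    apply_morphs(morph_changes)                          # move the target's morph sliders
    wait_for_physics_update()                            # the skeleton only moves after physics ran
    return measure_skeleton_difference()                 # the new loss
```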
 
Behold, my paint skills

(image attachment)
 
Training will continue until it's stopped manually. I agree it's a bit confusing and I'll think about how to improve the UI.
You can stop training at any point; it just means the plugin hasn't updated the whole network. If you just want to let it create a morph preset for you, it's fine to stop as soon as you're happy with the alignment. Waiting for the full run, or even multiple runs, is only really necessary if you want to create optimized training data.

Each run teaches the network a single point of data. The question we're training the network to answer is: "What change Y should we apply to these morph sliders if the difference between the bodies is X?" A training run tries to find the best change Y for the given difference X. The change is applied, and the next run trains on the resulting difference.
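
So the outer loop across runs roughly looks like this sketch (helper names are hypothetical):

```python
def run_training(num_runs, measure_difference, train_one_run, apply_change):
    """Each run trains on the *current* difference X, applies the resulting
    change Y, and the next run starts from whatever difference is left."""
    for _ in range(num_runs):
        x = measure_difference()   # difference between the bodies right now
        y = train_one_run(x)       # best morph change the network can find for this X
        apply_change(y)            # applying Y changes X for the next run
```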

Ah, that makes sense. Thank you!
 
This is awesome! I've been fiddling with BoneMorphs & LFE's Heightmeasure plugin for ages!

QoL Feature-Request:
1) Disabling SoftBodyPhysics, GluteSoftbody & FreezePhysics - There are TriggerCalls for all three toggles. Could you integrate those into the plugin or add a button into the scene?
2) Keep "Head Ratio" of the target constant - "Head Ratio" is character height divided by head height. Default is 7.6, "hero's proportion" around 8.0, etc.
Your plugin changes Body Scale etc., so it implicitly changes the "Head Ratio".
This can be solved by adding LFE's Heightmeasure plugin to the target character in the scene and using its built-in head scaling feature (the plugin adjusts the "Head Big" morph until the desired ratio is achieved; "Head Big" doesn't change neck length, so rescaling doesn't affect animations).
 
This is awesome! I've been fiddling with BoneMorphs & LFE's Heightmeasure plugin for ages!

QoL Feature-Request:
1) Disabling SoftBodyPhysics, GluteSoftbody & FreezePhysics - There are TriggerCalls for all three toggles. Could you integrate those into the plugin or add a button into the scene?
2) Keep "Head Ratio" of the target constant - "Head Ratio" is character height divided by head height. Default is 7.6, "hero's proportion" around 8.0, etc.
Your plugin changes Body Scale etc., so it implicitly changes the "Head Ratio".
This can be solved by adding LFE's Heightmeasure plugin to the target character in the scene and using its built-in head scaling feature (the plugin adjusts the "Head Big" morph until the desired ratio is achieved; "Head Big" doesn't change neck length, so rescaling doesn't affect animations).
Thanks for the feedback, I'll look into it. Could you point me in the right direction as to which triggers to add exactly, please?

Are you certain it's not Head Scale? I can't find a built-in "Head Big" morph. On the upside, head scale doesn't affect any of the used rigidbodies, so I'll just need to figure out how to measure/calculate the ratio and we're golden.
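
For what it's worth, the ratio arithmetic itself is simple; a small sketch with made-up example numbers (how the two heights are measured inside VaM is the open question):

```python
def head_ratio(character_height, head_height):
    """'Head ratio' = character height divided by head height.
    ~7.6 is the default proportion mentioned above, ~8.0 the 'hero' proportion."""
    return character_height / head_height

def head_height_for_ratio(character_height, desired_ratio):
    """Head height needed to hit a desired ratio at the current character height."""
    return character_height / desired_ratio

# Example: a 1.70 m tall character with a 0.23 m tall head.
print(head_ratio(1.70, 0.23))            # ~7.39
print(head_height_for_ratio(1.70, 7.6))  # ~0.224 m
```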
 
Thanks for the feedback, I'll look into it. Could you point me in the right direction as to which triggers to add exactly, please?
I'll look up the triggers next VaM session; I just forgot the names of the receivers.

FreezePhysics:
Atom: Target
Receiver: AtomControl
Value: FreezePhysics

Softbody Physics:
Atom: Target
Receiver: SoftbodyEnabler
Value: Enable

Glute Softbody Physics:
Atom: Target
Receiver: LowerPhysicsMesh
Value: On


Are you certain it's not Head Scale? I can't find a built-in "Head Big" morph. On the upside, head scale doesn't affect any of the used rigidbodies, so I'll just need to figure out how to measure/calculate the ratio and we're golden.

Or you forgo the figuring and simply look at the source code of this nifty plugin: https://hub.virtamate.com/resources/height-measure-plugin.7017/ (Unfortunately, the Overview page is a bit out of date. You'll have to look at the GUI.)

Plonk it on a Person atom, then on the left side you'll see the height in units of "head height". Open the GUI and scroll down a ways; there's a checkbox ... aaaaand I forgot what it's called. Autoheadscale? When you tick the box, a slider will pop up. Use that to select your desired "head ratio" (person height / head height, default 7.6). Then, to the right, you can select which morph is used for adjustment: "Head Scale" or "Head Big". Play around a little with the slider to get a feel for the effect.
Personally, I prefer "Head Big" because it leaves the "center of mass" of the head in the same place. "Head Scale" doesn't.

P.S.: Akshually ... the height measurement plugin exposes a lot of measurement data as JSONStorables; odds are that you can control it from your plugin, or via a trigger.
 
Thanks, I'd like to get around the dependency, but I'll take a look at it.

Ok, thanks. Maybe having a look at the plugin's code will help.

Speaking of which: I've noticed in your scene that collisions on the two characters, as well as on the "floor slate", are off ... is that important for your plugin's functionality?

(It makes it a bit harder to integrate LFE's height measurement plugin into the scene; it works better with active collisions and the character firmly rooted to the floor. Also, it'll automatically re-activate soft body physics, so I have to constantly turn collisions & soft body on/off and freeze/unfreeze physics.)
 
Ok, thanks. Maybe having a look at the plugin's code will help.

Speaking of which: I've noticed in your scene that collisions on the two characters, as well as on the "floor slate", are off ... is that important for your plugin's functionality?

(It makes it a bit harder to integrate LFE's height measurement plugin into the scene; it works better with active collisions and the character firmly rooted to the floor. Also, it'll automatically re-activate soft body physics, so I have to constantly turn collisions & soft body on/off and freeze/unfreeze physics.)
Yes, training requires physics updates (and there are ~5000 per training run).
All the plugin wants to know is how the rigidbodies' positions change in relation to each other. Having them collide with the floor, move too slowly due to drag, or even oscillate due to soft body physics would slow the process down tremendously and cause all sorts of issues.
 
Hello, just wanna ask a few questions.
1. If I want to save the trained model to use next time, do I need to save the scene or just press the save button?
2. Is there a way to train multiple models and load a specific model into the scene? From my understanding, right now we have 2 models (default + saved), and if I train a new one and press save, it will replace the old saved model, right?
 
Oh! Another question!

I've read up on what you explained about training the model, but I'm still unsure I understand correctly. You mentioned that, for the model to be flexible, you'd need to train it on a lot of different looks ("skeleton configurations"?).

What you didn't clarify was whether you meant "Keep the target morph-preset constant and train the model with a lot of different source morph presets" or "Keep the source morph-preset constant and train the model with a lot of different target morph presets". Or whether to vary both target and source during training?

So let's say I have a favorite "body": I build a merge morph from it and only vary face & genitals. All the relevant information about node positions is in the merged body morph (I think? The head node position might vary depending on what head I slap onto the body?)

If I understand correctly, the optimal procedure for my use case would be resetting the model, using this body template as the target, and training the model on a set of different source presets?
 
Hello, just wanna ask a few questions.
1. If I want to save the trained model to use next time, do I need to save the scene or just press the save button?
2. Is there a way to train multiple models and load a specific model into the scene? From my understanding, right now we have 2 models (default + saved), and if I train a new one and press save, it will replace the old saved model, right?
1. You press the save button.
2. Yes, but I didn't bother creating the logic/UI for it. Networks are saved to/loaded from "Saves/PluginData/TBD/BodyNet/network.json", and you can manage them there. Just copy the files and rename whichever one you want to use at the time to network.json.
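
For example, a tiny helper along these lines could do the copying/renaming (the backup names are just examples; the folder is the one quoted above):

```python
import shutil
from pathlib import Path

# Folder and filename used by the plugin, as described above.
NET_DIR = Path("Saves/PluginData/TBD/BodyNet")
ACTIVE = NET_DIR / "network.json"

def backup_network(name):
    """Keep the currently saved network under a descriptive name."""
    shutil.copyfile(ACTIVE, NET_DIR / f"{name}.json")

def activate_network(name):
    """Make a previously backed-up network the one the plugin loads."""
    shutil.copyfile(NET_DIR / f"{name}.json", ACTIVE)

# Example:
# backup_network("petite_bodies")     # stash the current training
# activate_network("petite_bodies")   # switch back to it later
```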
 
Oh! Another question!

I've read up on what you explained about training the model, but I'm still unsure I understand correctly. You mentioned that, for the model to be flexible, you'd need to train it on a lot of different looks ("skeleton configurations"?).

What you didn't clarify was whether you meant "Keep the target morph-preset constant and train the model with a lot of different source morph presets" or "Keep the source morph-preset constant and train the model with a lot of different target morph presets". Or whether to vary both target and source during training?

So let's say I have a favorite "body": I build a merge morph from it and only vary face & genitals. All the relevant information about node positions is in the merged body morph (I think? The head node position might vary depending on what head I slap onto the body?)

If I understand correctly, the optimal procedure for my use case would be resetting the model, using this body template as the target, and training the model on a set of different source presets?
Yes, you got the right conclusion.
It gets better at what you train it on. If you want it to be able to handle anything somewhat well you'd train it on as many source+target combinations as possible. For a model that is better at matching a specific body type, you'd train it on a single or a few similar source morph presets. Similarly, you can train it to match a single target to a bunch of different sources.
To be precise, the model is trained on the *difference* between morph presets and doesn't care so much about what either of them looks like. You'll get the best results if you train and use it on a set of similar differences rather than expecting it to match your look to anything from Hulk to Shortstack.
 