VAM Evolutionary Character Creation


Is it possible to reduce the total number of persons in the scene to just one? I found that the loading time for each newly created look increased a lot (maybe 3x or even more) because the same morphs and textures are loaded for three identical characters.
Maybe it's not worth that much time to wait for every new look to load, since it's fairly simple to just drag with the right mouse button to see what the look is like from another angle.
Hi, yes, this is very possible. You can just select the persons (probably the side and angled versions) and remove them yourself, then save the scene as a new scene and keep using that as a companion save. Maybe I'll add another .var where this is already done, to keep it simpler for users.
 
Unfortunately I'm not getting past the black command window with the cursor. I've tried deleting the settings.json file, but no joy. I also tried running as admin and get the same result.
I'm sorry to hear that. Can you maybe try opening a command prompt (press the Windows key, type cmd and press Enter), then go to the directory where the .exe is located and run it from there? Maybe it will show you an error message. Do you use Windows 10?
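In case it helps, those steps in a Command Prompt look like this (the cd path is a placeholder for wherever you actually unzipped the tool):

```shell
:: Press the Windows key, type "cmd", press Enter, then:
cd /d "C:\path\to\VAM Evolutionary Character Creation v1.1.0"
:: Run the app from here so any error message stays visible in the window:
"VAM Evolutionary Character Creation.exe"
```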
 
I'm stumped at the beginning. What is/ where is the win.exe file?
The .exe file is in the zip file. You need to unzip it; inside the directory "VAM Evolutionary Character Creation v1.1.0" you will find a file called "VAM Evolutionary Character Creation.exe". That is the file you need to run.
 
Can the scale of the parents be taken into account when creating, for example, dwarfs and goblins?
I think you can set the scale by using a template file with a certain scale. But if you mean using different parents with different scales, that won't make a difference. You can, however, make sure that all the children generated by the program are dwarfs, for instance, by using a template file of a dwarf.
 
pinosante updated VAM Evolutionary Character Creation with a new update entry:

Now (hopefully) compatible with Windows 7!

Recompiled with PyInstaller using Python 3.8.10, which should be compatible with Windows 7. The earlier versions were compiled with Python 3.9, which gave "Missing api-ms-win-core-path-l1-1.0.dll" errors on Windows 7 machines.

I'm working on some cool new features for the app, but they will be added later; I wanted to get this fix out ASAP.

 
This plugin is just so great for people like me who can't model shit. I spent hours trying to get great looks that don't look too much like the originals, and I'm beginning to see results :D Thanks a lot for putting this up there, mate!
 

Attachments

  • 1652109184.jpg (680.2 KB)
  • 1652109435.jpg (799.1 KB)
  • 1652109863.jpg (564.8 KB)
Awesome, that's exactly why I made it :). I'm not the person to tweak 300 settings. Thanks for sharing your progress, it looks great!
 
Okay, hear me out on this one, 'cause I dunno if I just randomly lucked out on this, or if this is something you can streamline or tweak to achieve results. Would using this evolutionary rating system allow you to (eventually) achieve a DESTINATION look, rather than the "hot or not" type of rating intended?

Basically for those that, saaay, want to morph an existing character's look into that of one we already know (basically brute-force look-a-like creation). I know it'd pretty much be us eyeballing the resulting children and going "Yeah, that KINDA looks closer" and then rating them out of 5.
I tried going through 50 generations just to see what would happen if I morphed an existing character with the general body type of the person I was trying to end up at. After about 20 or so, the children all kinda level out to a single look and won't skew enough to make any more solid decisions. Re-running the tool resets the morph variation, but obviously all the previous rating data is lost and it's just morphing again from the close-ish model.

The end result looks pretty good; I'd say I achieved about an 8/10 closeness, but trying ANOTHER model is proving difficult. Adding a TON of other looks to my Appearance folder helps with the variation, but again, you're basically stuck with approving child models that may only share ONE feature you like (whether it be body, head shape, or just the way their eyes look). As of right now we can only add the WHOLE model morph to the gene pool, and the morphs we DON'T want might skew our favouritism in future generations.

Dunno HOW you would go about streamlining that process. Unless you add separate ratings for different child aspects, like SPECIFICALLY rating head, torso, legs, height, etc. THAT is something I would GLADLY spend the time rating and refining to get the look I want. Hell, I'd even go as far as PURELY a head generator with specifics for eyes, mouth, jawline, etc., and just "transplant" that head morph onto a body of my choice!

Clearly I've been doing a lot of thinking on this one, as I'm sure you have while making this system.
 
Yeah, I considered your use case, which is very interesting. There was actually a police suspect profiling software solution I saw online somewhere (but I can't find it anymore, unfortunately) that used a similar method, as a digital alternative to the pencil drawing of a suspect. You would get 20 faces and be asked to pick the one that looks closest to the suspect you saw. Then, based on your pick, another 20 faces, and so on. After about 10 of these rounds, you would get pretty close to the actual suspect. To prove the concept, they did this for movie stars, and it was possible to get a reasonably similar picture of a movie star using this method. Again: I can't find the link anymore (I have been looking for it), so I have no more details. Maybe your google-fu is stronger than mine...

Regarding splitting the morphs: it's a bit difficult. I would have to look into it. The problem is that people use all kinds of morphs, so I have no real way to determine whether a morph influences the head, the body, or both. The Morph Mass Merger/Manager plugin does morph splitting, but to be honest I don't really know how. And then you still have the problem that, for instance, the Carmen morph (one of the default morphs in VAM) changes the shape of basically the whole model, head to toe.

Another way to approach this is to not use the "genetic algorithm" method, but a variation method: you would vary a random number of morphs by a small percentage. FaceGen has a feature like this, which works pretty well. I don't know what happens under the hood, so to speak, but I think that by choosing a small number of morphs and giving them some slight variation, you might get somewhere. On the other hand, when you think about it, at that point you are basically writing a script that randomly mashes buttons on the morph settings and hopes for the best. This is something you could do yourself (just changing morph values and checking whether the similarity improves).
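As a rough illustration of that variation method (not the app's actual code — the morph names and the clamped [-1, 1] slider range are made up for the example), nudging a small random subset of morphs could look like:

```python
import random

def vary_appearance(morphs, fraction=0.25, sigma=0.05):
    # Pick a small random subset of morphs and nudge each one by a
    # little Gaussian noise, clamped to an assumed [-1, 1] range.
    varied = dict(morphs)
    count = max(1, int(len(morphs) * fraction))
    for name in random.sample(list(morphs), count):
        varied[name] = min(1.0, max(-1.0, varied[name] + random.gauss(0.0, sigma)))
    return varied

base = {"Nose Width": 0.3, "Jaw Curve": -0.1, "Eye Size": 0.5, "Lip Fullness": 0.2}
child = vary_appearance(base)
```

You would then eyeball the result, keep it if it looks closer to the target, and repeat — which is exactly the "mashing buttons and hoping" loop described above, just automated.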

So possible solutions:
  • Allowing rating head/breast/body, with the downside that it's not 100% accurate
  • Increasing random variation percentages (bit of a "lottery" approach)
I am curious: you said you were able to achieve an 8/10 similarity (which is a pretty decent result if you ask me) but had trouble with another look? So you can't repeat the 8/10 similarity?

Btw, I'm pretty swamped with work right now, so any future features could take months to implement. I'm in the process of finishing up a last big update to the tool, and then I'll have to focus on my work.
 
Yeah, I considered your use case, which is very interesting. There was actually a police suspect profiling software solution I saw online somewhere (but I can't find it anymore, unfortunately) that used a similar method, as a digital alternative to the pencil drawing of a suspect. You would get 20 faces and be asked to pick the one that looks closest to the suspect you saw. Then, based on your pick, another 20 faces, and so on. After about 10 of these rounds, you would get pretty close to the actual suspect. To prove the concept, they did this for movie stars, and it was possible to get a reasonably similar picture of a movie star using this method. Again: I can't find the link anymore (I have been looking for it), so I have no more details. Maybe your google-fu is stronger than mine...

I'll take a gander around, see what I can find!

Regarding splitting the morphs: it's a bit difficult. I would have to look into it. The problem is that people use all kinds of morphs, so I have no real way to determine whether a morph influences the head, the body, or both. The Morph Mass Merger/Manager plugin does morph splitting, but to be honest I don't really know how. And then you still have the problem that, for instance, the Carmen morph (one of the default morphs in VAM) changes the shape of basically the whole model, head to toe.

Ahh, figured as much. I only REALLY got into the Virt-a-Mate scene about 2 months ago, so the limitations of what you can and cannot do are still fresh for me. I wasn't sure if there was a tag system, similar to clothing, that would let you specify a category for the morphs so you could narrow them down that way. I know there's a filter, but it seems to be custom tags too, soooooo... 🤷‍♂️

I have a background in programming and a bit of hobbyist 3D modelling in Unity/UE4/UE5, and I've had a Valve Index since Half-Life: Alyx was announced, so I've picked up on most things rather quickly. Morphs in general seem about as convenient as they are difficult, due to SO MANY custom morphs added to my addons folder after adding tons of looks, plugins, etc.

Another way to approach this is to not use the "genetic algorithm" method, but a variation method: you would vary a random number of morphs by a small percentage. FaceGen has a feature like this, which works pretty well. I don't know what happens under the hood, so to speak, but I think that by choosing a small number of morphs and giving them some slight variation, you might get somewhere. On the other hand, when you think about it, at that point you are basically writing a script that randomly mashes buttons on the morph settings and hopes for the best. This is something you could do yourself (just changing morph values and checking whether the similarity improves).

So possible solutions:
  • Allowing rating head/breast/body, with the downside that it's not 100% accurate
  • Increasing random variation percentages (bit of a "lottery" approach)

At FIRST I thought that was exactly what was happening! With my initial pool of appearances, I thought it was just taking random SAMPLES of the morphs within the looks saved there. That's when I realized...
(Dumb and Dumber GIF)

But I AM surprised that with enough generations it was still able to reduce the variation based on my ratings. Either it was a FLUKE that a bunch of parents shared a BUNCH of similar features, so it favoured the ones I was looking for in the end, or the gene pool eventually evened out in such a way that I got the features I was looking for.

I am curious: you said you were able to achieve an 8/10 similarity (which is a pretty decent result if you ask me) but had trouble with another look? So you can't repeat the 8/10 similarity?

I think it's just the assortment of looks I have at my disposal. I'm gonna chalk it up to "diversity" being a key component in it not giving me the features I want, if ya know what I mean. Maybe I'd have better/quicker results if I started with a model that's close to my target and manually altered her looks to the best of my ability BEFORE attempting Gen50 hell!

Btw, I'm pretty swamped with work right now, so any future features could take months to implement. I'm in the process of finishing up a last big update to the tool, and then I'll have to focus on my work.

No worries, bud! Life before hobby, otherwise hobby can't exist! Intrigued to see what ya cook up for the update!
 
At FIRST I thought that was exactly what was happening! With my initial pool of appearances, I thought it was just taking random SAMPLES of the morphs within the looks saved there. That's when I realized...
(Dumb and Dumber GIF)
But I AM surprised that with enough generations it was still able to reduce the variation based on my ratings. Either it was a FLUKE that a bunch of parents shared a BUNCH of similar features, so it favoured the ones I was looking for in the end, or the gene pool eventually evened out in such a way that I got the features I was looking for.
Just for clarity (maybe you already gathered as much): if you use Gaussian sampling, which I recommend if you have 20+ appearances, the app creates a blueprint based on ALL appearances and then samples from that blueprint. That is only for the first generation. After that, the children are generated using crossover: basically flipping a coin for each morph in both parents and deciding which one to use for the child. So at some point, around generation 10 or so, the highest-rated children will have overlapping morphs, since the children you like will probably share some of those morphs. If you rate fairly, at some point your favorite morph settings will "converge", so to speak. (That is the beauty of the process.)
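To make the coin-flip idea concrete, here is a toy version of uniform crossover (not the app's actual code — it assumes, purely for illustration, that each parent is a plain {morph_name: value} dict):

```python
import random

def crossover(parent_a, parent_b):
    # For every morph, flip a coin and take that morph's value from
    # one parent or the other, exactly as described above.
    child = {}
    for key in set(parent_a) | set(parent_b):
        source = parent_a if random.random() < 0.5 else parent_b
        # If only one parent has this morph, fall back to whichever has it.
        child[key] = source.get(key, parent_a.get(key, parent_b.get(key)))
    return child

mom = {"Nose Width": 0.2, "Jaw Curve": 0.8}
dad = {"Nose Width": 0.6, "Jaw Curve": 0.1}
kid = crossover(mom, dad)
```

Because each child only ever inherits values that already exist in a parent, repeated selection of the children you rate highest is what makes the population converge.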

Talking about this: if you want to create a character referenced from a photograph, using the Gaussian sampling method is absolutely *not* the way to go. In that case, you want a random crossover initialization from a very diverse appearances folder. But maybe this is exactly what you did, and I'm telling you nothing new.

Btw, the new app features are basically ease-of-use improvements.
 
Just for clarity (maybe you already gathered as much): if you use Gaussian sampling, which I recommend if you have 20+ appearances, the app creates a blueprint based on ALL appearances and then samples from that blueprint. That is only for the first generation. After that, the children are generated using crossover: basically flipping a coin for each morph in both parents and deciding which one to use for the child. So at some point, around generation 10 or so, the highest-rated children will have overlapping morphs, since the children you like will probably share some of those morphs. If you rate fairly, at some point your favorite morph settings will "converge", so to speak. (That is the beauty of the process.)

Ahhhh, 'kay. I wasn't sure how much of each of the highest-rated children's morphs would remain. Good to hear I WAS going about it the right-ish way!

Talking about this: if you want to create a character referenced from a photograph, using the Gaussian sampling method is absolutely *not* the way to go. In that case, you want a random crossover initialization from a very diverse appearances folder. But maybe this is exactly what you did, and I'm telling you nothing new.

(Awkward Eyes GIF)


Funnily enough, that's what I did with my FIRST model (the one that's an 8/10), but for the remaining 15 or so generations I started using Gaussian sampling to refine the look more. For the second model I used PURELY Gaussian... so that would explain why it took me 40 or so generations before my results started looking really close to what I wanted.

Btw, the new app features are basically ease-of-use improvements.

Always welcome those!
 
Suggestions:
  1. Simplify the initial steps 1 & 2 into one step. I had to watch your video first to know where the appearances are stored, and I bet I'm not the only one. I'd say it's relatively safe to assume the data can be found there; just build the appearance path from the VAM base path in Python. Users with unusual VAM setups can still edit settings.json.
  2. I usually end up with some good-looking bodies, but the faces all look like they had an accident. That's with 49 favorited appearances. Is it possible to create a 'lock' for either face or body? I personally would like to leave the head completely untouched and generate new bodies to keep look-alike appearances.
  3. Like @kjm00, I can confirm that I once had a window that was too tall to fit my screen. For some reason I'm now unable to reproduce it for a screenshot. Weird! :unsure: I have a 3840x2160 monitor with 200% scaling in the Windows display settings, so it should be the same size as on a 1920x1080 monitor. I'm pretty sure it was the window with the 'Generate Next Population' button at the bottom.
  4. Maybe a button to toggle the 'always on top' flag for the window could make switching between VAM and the ECC tool easier. Not sure if that's possible in Python.
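On point 4: if the tool's GUI is Tkinter (just a guess on my part — the helper name below is made up), the always-on-top flag does look possible, since Tkinter exposes it as the '-topmost' window attribute:

```python
import tkinter as tk

def toggle_always_on_top(root: tk.Tk) -> bool:
    """Flip the window's always-on-top flag and return the new state."""
    new_state = not bool(root.attributes("-topmost"))
    root.attributes("-topmost", new_state)
    return new_state
```

Wired to a button it would be something like tk.Button(root, text="Always on top", command=lambda: toggle_always_on_top(root)).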
 
Hi @Sally Whitemane , thanks for taking the time to write out your suggestions!

1. Simplify the initial steps 1 & 2 into one step. I had to watch your video first to know where the appearances are stored, and I bet I'm not the only one. I'd say it's relatively safe to assume the data can be found there; just build the appearance path from the VAM base path in Python. Users with unusual VAM setups can still edit settings.json.
Yeah, I have been going back and forth on that. My first version of the app actually did this, since I totally agree with you that easier is better. But it restricts users who might want to organize their appearances in different folders. Someone might have custom/person/appearances/girls, /custom/person/appearances/males and /custom/person/appearances/futas as directories, or whatever structure, and that would all be unusable with the app. So I went with the version where you can choose the folder yourself. It's a bit more work, but it allows for much more flexibility.
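For what it's worth, both approaches can coexist: derive a default folder from the VAM base path, but let settings.json override it. This is only a sketch — the "appearance_dir" key and the default subfolder are illustrative guesses, not the app's real schema:

```python
import json
from pathlib import Path

def appearance_dir(vam_base, settings_file="settings.json"):
    # Honor an explicit override from settings.json when present...
    try:
        with open(settings_file, encoding="utf-8") as f:
            override = json.load(f).get("appearance_dir")
    except (FileNotFoundError, json.JSONDecodeError):
        override = None
    if override:
        return Path(override)
    # ...otherwise fall back to a default derived from the VAM base path.
    return Path(vam_base) / "Custom" / "Atom" / "Person" / "Appearance"
```

That way the common case is one step, while users with girls/males/futas subfolders can still point the tool wherever they like.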

2. I usually end up with some good-looking bodies, but the faces all look like they had an accident. That's with 49 favorited appearances. Is it possible to create a 'lock' for either face or body? I personally would like to leave the head completely untouched and generate new bodies to keep look-alike appearances.
This happens. Just a quick check: did you set the min morph threshold to 150? If you did not, you could end up with custom morphs, and they can really mess things up. Better to set the min morph threshold to 150 and only use appearances with morph counts higher than that. Keeping the head and body separated is a much-requested feature, and one I'd love to have myself, but at this point it's quite a challenge to know which morph influences which part of the character. The Carmen morph, for instance, modifies the whole model from head to toe. So it's not an easy task to decide whether a morph should be treated as a "head" morph or a "body" morph. Unless you have a suggestion for how to approach this?
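To make the threshold check concrete, a rough sketch of filtering appearances by morph count could look like this. The JSON layout ('storables' entries carrying a 'morphs' list) is my assumption about how saved presets are structured and may not match real files exactly:

```python
import json
from pathlib import Path

MIN_MORPH_THRESHOLD = 150  # the value recommended above

def morph_count(preset_path):
    # Count every morph entry across all storables in the preset file.
    data = json.loads(Path(preset_path).read_text(encoding="utf-8"))
    return sum(len(s.get("morphs", [])) for s in data.get("storables", []))

def is_usable(preset_path):
    return morph_count(preset_path) >= MIN_MORPH_THRESHOLD
```

Appearances below the threshold are likely relying on custom morphs, which is exactly what messes up the faces.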

3. Like @kjm00, I can confirm that I once had a window that was too tall to fit my screen. For some reason I'm now unable to reproduce it for a screenshot. Weird! :unsure: I have a 3840x2160 monitor with 200% scaling in the Windows display settings, so it should be the same size as on a 1920x1080 monitor. I'm pretty sure it was the window with the 'Generate Next Population' button at the bottom.
I added a "use small rating" button in the options window to fix that problem. There is a big update coming up, though, which hopefully fixes all of this :). With this big update, all the rating can be done within VAM.

4. Maybe a button to toggle the 'always on top' flag for the window could make switching between VAM and the ECC tool easier. Not sure if that's possible in Python.
This should also be fixed by the new update, where everything can be done in VAM!

Thanks again for taking the time to think along with me about improving the app, much appreciated.
 
The grey see-through UI layer doesn't go away, so you can't click on the buttons.
(screenshot: 1652542898810.png)


Also, some buttons don't render due to the camera angle.
 
Does the app say that it's connected?

There is a tutorial available now in the resource post.
 
Also, I'm getting hundreds of errors: [Errno 2] No such file or directory: 'F:\\vam new\\Custom\\Atom\\UIText\\VAM Evolutionary Character Creation\\Preset_VAM2PythonText.vap'
 