I found an AI / machine-learning tool that converts video to mocap.

Here's a tutorial using DeepMotion with a Daz3D Genesis 2 figure. I tried it with a Shutterstock video and it works great.
Once you've gone through the process with a G2 figure, you can save it in DeepMotion and apply other video feeds to it.
 
I tried this a year or so ago and was not happy with the results. If you were able to make this work, I'd love to see an example.
 
I doubt anything like this will be particularly good; I'm just happy to have something that only takes a few tweaks to make usable versus starting from scratch. This is really great for animating everyday motions that few artists are going to invest time into.
 
I was trying Dollars MoCap, using some videos to mimic facial expressions, but the results are not great:

[screenshots of the captured facial expressions]


I think the AI technology to make this work should exist by now. Does anyone know a better tool for capturing facial expressions from video?

I was expecting some result like this at least:
https://www.dollarsmocap.com/nvis

Maybe it works better with LINK, but I don't have an iPhone to test it:
https://www.dollarsmocap.com/link
 
It looks amazing, but I don't think it has been publicly released yet, and it's also image-to-video, not video to 3D morph instructions.
But maybe there is something there we can use in the future: if it has an intermediate step like "extract facial expression details" before applying them to the source image, then we could convert those expressions to morphs instead.
 
Something like this would be amazing:


It's for iClone. Is there any way to adapt it to VAM? Or maybe it will be possible with VAM2?
 
Hi, Dollars MoCap developers here, thank you for trying our products.

EGAO is our low-end product. For AccuFace, we match it with NVIS, as both are based on NVIDIA's capture technology.

If you have an iPhone, you don't need to use LINK. Instead, you can directly use this free plugin, which supports many iOS facial capture programs: https://hub.virtamate.com/resources/facial-motion-capture.328/

The above covers the capture side. On the VAM side, the morphs are also worth paying attention to. The default VAM character has limited facial capture morphs, but you can add more with custom morph packs.
Both LFE's plugin and ours support custom morph mapping (we referenced a lot of LFE's code).

Lastly, this is a limitation we can't do much about: VAM characters are based on Daz G2, which is several generations behind the latest G9, and their performance can't compare with the latest iClone characters mentioned above.
 
I was using NVIS, and I have already installed the morphs from Jackaroo. Do I need to configure something extra, or will NVIS use them automatically if they are installed in VAM?
 
DeepMotion, or any video-to-mocap, is a dead end for VAM.
The existing models are not trained on porn/sex movement, so when you try to convert sex movement to mocap the results are very, very bad.
 
> DeepMotion, or any video-to-mocap, is a dead end for VAM. The existing models are not trained on porn/sex movement, so the results are very, very bad.

I tried out DeepMotion recently with a random TikTok dance, and it looked pretty decent when I brought it into VaM. Sex motion probably won't work, like you said, but I imagine motions for decent handjob and blowjob animations are possible.
 
> I was using NVIS, and I have already installed the morphs from Jackaroo. Do I need to configure something extra, or will NVIS use them automatically?

Yes, you will need to map the morphs by modifying a config JSON file; see around 4:00 in this video:

 
Do you have that .json configured with Jackaroo morphs? I'm not sure how to edit it.
It would also be great to include it in your installation as an alternative, if you consider them better morphs.

Do we just need to put the morph's "name"?
[screenshot of the config file]


Also, I can see there are a lot of empty morphs (and Jackaroo does not have all of them either).
 
> Do you have that .json configured with Jackaroo morphs? It would also be great to include it in your installation as an alternative, if you consider them better morphs.
Thank you for the advice!

> Do we just need to put the morph's "name"?
Yes, and you can also adjust the strength value to increase or decrease the range of the morphs.
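For anyone else editing the file by hand, here is a sketch of what a mapped entry might look like. This is a guess at the file's shape based on the screenshot shared earlier in the thread; the channel keys, the morph name, and even the overall structure are illustrative assumptions, so compare against the actual config shipped with NVIS before editing:

```json
{
  "jawOpen": { "name": "SomeJackarooJawMorph", "strength": 1.2 },
  "browUp":  { "name": "", "strength": 1.0 }
}
```

An empty "name" would simply leave that capture channel unmapped.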

> Also, I can see there are a lot of empty morphs (and Jackaroo does not have all of them either).
Yes, you can add individual morphs from other morph packs. But as mentioned earlier with the G2 limitation, the final result may still be somewhat limited.
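If editing the mapping by hand gets tedious, a small script can fill in the names and report which channels are still unmapped. Everything below is hypothetical: the config structure (channel keys mapping to `{"name", "strength"}` entries), the channel names, and the morph name are assumptions for illustration, not the real Dollars MoCap format.

```python
import json

def apply_morph_mapping(config: dict, mapping: dict) -> list:
    """Fill empty 'name' fields from a channel -> morph-name mapping.

    Returns the list of channels that still have no morph assigned."""
    unmapped = []
    for channel, entry in config.items():
        if not entry.get("name"):
            if channel in mapping:
                entry["name"] = mapping[channel]
            else:
                unmapped.append(channel)
    return unmapped

# Example with made-up channel and morph names:
config = {
    "jawOpen": {"name": "", "strength": 1.0},
    "browUp": {"name": "", "strength": 1.0},
}
mapping = {"jawOpen": "JCM_MouthOpenWide"}  # hypothetical Jackaroo morph name
left_over = apply_morph_mapping(config, mapping)
print(json.dumps(config, indent=2))
print("still unmapped:", left_over)
```

The same function could be pointed at the real file with `json.load` / `json.dump` once the actual structure is confirmed.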
 