
Photorealistic videos!

I'd say I can now significantly improve VaM-generated video using AI. Below is a snapshot from a short video: the upper part is the original VaM recording, the bottom was processed in ComfyUI (WAN 2.1). Pay attention to the faces.
Unfortunately, I can only easily process around 5 seconds of video at a time.
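A longer clip could be handled by processing it in roughly 5-second chunks and re-joining them. A minimal sketch (plain Python; the function name and overlap value are my own assumptions, not from the workflow) that computes overlapping chunk boundaries by frame index:

```python
def chunk_frames(total_frames, fps=24, chunk_seconds=5, overlap_frames=8):
    """Split a video of `total_frames` frames into ~chunk_seconds pieces.

    Adjacent chunks share `overlap_frames` frames so the seams can be
    cross-faded when re-joining. Returns (start, end) frame-index pairs,
    with `end` exclusive.
    """
    chunk_len = fps * chunk_seconds
    step = chunk_len - overlap_frames
    chunks = []
    start = 0
    while start < total_frames:
        end = min(start + chunk_len, total_frames)
        chunks.append((start, end))
        if end == total_frames:
            break
        start += step
    return chunks

# A 15-second clip at 24 fps (360 frames):
print(chunk_frames(360))  # [(0, 120), (112, 232), (224, 344), (336, 360)]
```

The overlap matters because each chunk is denoised independently, so consecutive chunks can drift apart without a shared region to blend over.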
This looks very nice. I was looking into VACE and decided to stop by here to see if anyone had tried it out with a VaM video yet. Which workflow did you use? Did you first have to make an image with image2image from a frame of the video? All the workflows I've seen so far seem to want a reference image to use for modifying the video.
 
AI can already take on a lot of work, but it is not yet effective enough. The costs for the ever-growing models are currently exploding, as the usage prices show. China's short-term subsidized models don't help either.

However, another question arises: do we want that? Since July 15th, YouTube has enforced new guidelines that no longer monetize repetitive AI content (the same thing posted over and over). This shows the problem with AI content: one or two new videos are interesting, but after a few days the internet is flooded with more rubbish of the same kind, while creative content handcrafted by people disappears.

Personally, I'm fascinated by AI, but I don't post it online because there's no real added value for others.
 
Nvidia is already implementing AI real-time post-processing in Unreal Engine. Unfortunately, it will work for faces only.
Damn, imagine if we could do this in real time, on the fly, in VR... photorealistic graphics in any game without needing 500 TFLOPS of GPU. AI could do this, but we'll need much faster NPUs, and developers will need to adopt them.
 
This looks very nice. I was looking into VACE and decided to stop by here to see if anyone had tried it out with a VaM video yet. Which workflow did you use? Did you first have to make an image with image2image from a frame of the video? All the workflows I've seen so far seem to want a reference image to use for modifying the video.
The input is just any video. It seems nobody has noticed that we can use the denoise parameter to make the result either very close to the original or totally different and close to the text prompt. So if a cat walks in the original video but the text prompt says "a dragon walks", the cat will be transformed into a dragon; the denoise parameter should be high in this case, e.g. > 0.8. To keep the forms and movements close to the original I use a depth map. You can add a reference image to the workflow if you wish, but I usually don't use one. The only bad thing is that the video is around 5 seconds and takes 3-5 minutes to generate on a 4090.
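The reason denoise trades off fidelity against the prompt is that samplers like ComfyUI's KSampler use it to skip the early, high-noise part of the schedule: a low value starts sampling late (output stays near the input video), a high value starts near pure noise (the prompt dominates). A minimal sketch of that mapping, assuming a simple linear step schedule (the function name is mine, not a ComfyUI API):

```python
def denoise_to_start_step(denoise, total_steps):
    """Map a denoise strength in [0, 1] to the sampler step to start from.

    denoise=1.0 runs all `total_steps` (full re-generation, prompt wins);
    denoise=0.3 runs only the last 30% of steps, so the output stays close
    to the input video. This mirrors how img2img/vid2vid samplers skip the
    early, high-noise steps.
    """
    if not 0.0 <= denoise <= 1.0:
        raise ValueError("denoise must be in [0, 1]")
    steps_to_run = round(total_steps * denoise)
    return total_steps - steps_to_run

# With 20 sampling steps:
print(denoise_to_start_step(1.0, 20))  # 0  -> all 20 steps run
print(denoise_to_start_step(0.8, 20))  # 4  -> last 16 steps run
print(denoise_to_start_step(0.3, 20))  # 14 -> last 6 steps run
```

This is why > 0.8 is needed for a cat-to-dragon transformation: enough early steps must run for the model to replace structure, while the depth-map conditioning still pins the pose and motion.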
For a perfect result I would like to export poses directly from VaM. I hope somebody will finally create a pose-export plugin (animated, not static).
I will provide the workflow at the end of this week; it needs a cleanup.
Frankly, I'm crazy about AI now; I haven't touched VaM VR for months.
 