Video Renderer for 3D VR180, VR360 and Flat 2D & Audio + BVH Animation Recorder


Did you read the previous posts about multithreading the renderer?
I did not see this before! Thanks!
 
An Empty atom paired with this did the trick!
I tied it to the camera rig and made sure to lock it in once it was aligned!

Can you explain in some more detail what you did here, maybe with some screenshots? I tried parenting the WindowCamera to the Empty atom, which made the camera move and sway a bit, but I'm still not getting what the POV view button shows when activated, which is a more close-up and intimate view.

I've tried parenting the CameraRig to the Empty atom too, but then the preview window just shows black. The AlignHelper seems to help, but I just can't get the kind of POV view I want, which is basically mimicking the POV camera while still avoiding clipping issues when the model comes too close.

Any help would be greatly appreciated!

Here are some screenshots of what I have, showing what I mean (the desktop game view vs. what the VR renderer is previewing):



Screenshot-2024-02-15-104332.png
Screenshot-2024-02-15-104435.png
Screenshot-2024-02-15-104416.png
 

I did a little writeup under Shadow Venom's guide; hopefully he adds it to his, so other people can find it:
https://hub.virtamate.com/threads/t...ing-for-timeline-animations.49431/post-149899
 

Thanks for the help! I was able to replicate how the camera looks in flat 2D rendering. The problem occurs when you try to render in stereo 3D: it's then more zoomed out, which can be fixed a bit using the different sliders in AlignHelper, but it's still hard to avoid body clipping.
 
For clipping that happens, e.g., with Embody when you render flat, I just parent an Empty atom to the person's head and then simply move it forward until I can't see clipping in the preview window. You should be able to do the same: just move the camera forward until the clipping is gone.
 
Has anyone ever had trouble rendering a scene to PNG, where the first frame shows up black when you load it into Avidemux? It only seems to happen with particular scenes, and when it does, the video won't render at all. If the first frame actually shows in Avidemux, I know it's going to work.

Also, if possible, can someone post an example of the command you put into ffmpeg for an 8K VR video? I've seen a few floating around, and even the example in the original post, but I always get stuttering in my video using ffmpeg compared to Avidemux.

Here are some examples I've tried already that render but stutter:

hevc_nvenc - Uses GPU

ffmpeg -framerate 60 -r 60 -i 20240122-120345/20240122-120345_%06d.png -i audio.mp3 -c:v hevc_nvenc -preset slow -tune hq -rc vbr -level:v 6.2 -cq 0 -qmin:v 17 -qmax:v 21 -spatial-aq 1 -aq-strength 5 output8k.mp4

libx265 - Uses CPU

ffmpeg -framerate 60 -i 20240121-125153/20240121-125153_%06d.png -i audio.mp3 -c:v libx265 -tune:v fastdecode -level:v 6.2 -crf 19 output8k.mp4
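One thing I haven't tried yet: from RGB PNGs, libx265 apparently defaults to 4:4:4 chroma, which many players can't hardware-decode smoothly at 8K, so forcing 4:2:0 might help with the stutter. An untested sketch, with the same placeholder file names as above:

Bash:
ffmpeg -framerate 60 -i 20240122-120345/20240122-120345_%06d.png -i audio.mp3 -c:v libx265 -preset slow -crf 19 -pix_fmt yuv420p -movflags +faststart -c:a aac -shortest output8k.mp4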
 
All I use is ffmpeg, but I render in 4K, not 8K, and I do flat. Also, never use the GPU encoder; it adds lots of artifacting.
 

Good to know! I've found a good preset using NVIDIA HEVC at a 100,000 kbps bitrate for 8K VR in Avidemux, but yeah, PNG is sometimes hit or miss. I think if the scene is complex it can't handle it.

Can you post the example command you use with ffmpeg, or does the one I have look good already? I'm just wondering why I'm getting stuttering.
 
This is the command for the scene that is literally rendering as we speak:

ffmpeg -framerate 60 -thread_queue_size 512 -i render/render_%06d.png -i render/render_006899.wav -c:v libx265 -preset slow -thread_queue_size 4096 -crf 28 -pix_fmt yuv420p10le -c:a aac -b:a 384k -shortest Andrea9.mp4
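Adapted to the folder naming from your earlier post, it would look something like this (just a sketch; swap in your own audio file):

Bash:
ffmpeg -framerate 60 -thread_queue_size 512 -i 20240122-120345/20240122-120345_%06d.png -i audio.mp3 -c:v libx265 -preset slow -crf 28 -pix_fmt yuv420p10le -c:a aac -b:a 384k -shortest output8k.mp4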
 

Thank you! I will try this next.
 
I just found a way to send the raw rendered frame images directly to FFmpeg for video encoding, without writing the images to disk, and without even encoding to PNG or JPEG first. Calling FFmpeg from VaM itself isn't possible due to VaM's security system, but I figured out that FFmpeg can actually read raw input data from a TCP socket, and sending TCP data is NOT blocked in VaM. Honestly, I'm just amazed by the fact that FFmpeg can read from a socket out-of-the-box without any external tools.

I did some first tests, including sending the TCP data from the actual renderer, and it does actually seem to work. It requires calling FFmpeg from the command line manually with some additional parameters just before starting the rendering in VaM, and the audio still has to be merged manually at the end (not sure if I want to tackle that as well), but I think that's really cool already. Not only does this speed things up yet again by eliminating PNG/JPEG encoding entirely and rendering the final video file "live", it also means you no longer need tons of disk space to store the individual frame images. You only need enough for the final rendered video.

I'll still test and experiment with it a bit before releasing anything. Couldn't keep myself from posting about it though. :)
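If you want to see the basic mechanism without VaM, two plain FFmpeg processes can demonstrate it: one listens on a TCP port for raw frames and encodes them, the other sends a test pattern as raw RGB. Resolution, framerate, and port here are arbitrary:

Bash:
# Terminal 1: the listening encoder (the role FFmpeg plays for the plugin)
ffmpeg -y -f rawvideo -pix_fmt rgb24 -s 1280x720 -r 30 -i "tcp://127.0.0.1:54341?listen" -c:v libx264 -pix_fmt yuv420p out.mp4
# Terminal 2: a stand-in for the renderer, sending raw frames as a TCP client
ffmpeg -f lavfi -i testsrc2=size=1280x720:rate=30:duration=5 -f rawvideo -pix_fmt rgb24 "tcp://127.0.0.1:54341"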
 
Sounds amazing! Can't wait to try it out eventually.
 

Amazing work! Just curious, how do you have to merge the audio? With FFmpeg again, or some other, easier way?
 
Yeah, I do it using FFmpeg directly. Avidemux or LosslessCut should work too, as they are just frontends for FFmpeg.
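A minimal sketch of that mux step, with placeholder file names (video stream-copied, audio re-encoded to AAC):

Bash:
ffmpeg -i video.mp4 -i audio.wav -c:v copy -c:a aac -b:a 256k -shortest video-final.mp4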
 
Amazing, can't wait to see what you come up with!
 
I've just uploaded a first version with the streaming support I talked about. You can get the VAR file from here. It works for me, but note that it's experimental again; it would be cool to see what results other people get. It requires a bit of setup (it's not possible to run FFmpeg directly from VaM, so you have to start it manually), but once you get used to it, it should be very simple.
I tried adding the audio to the rendered video without extra steps after recording, but nothing I tried worked. Maybe someone with more FFmpeg knowledge has an idea, but for now I'm giving up on that.


Here's a (somewhat) quick guide:

First of all, you need a command-line version of FFmpeg installed (I use version 6.1.1; no clue about other versions).
In the plugin's VaM UI, set the "Stream Mode" (very far down on the right) to "Stream". "Host" and "Port" can be left at their defaults. Then, before you start the video recording in VaM, you have to run FFmpeg in a console with some specific options. Here's what I use for 4K video without transparency at 60 FPS:

Bash:
ffmpeg -y -f rawvideo -pix_fmt rgb24 -s 3840x1920 -r 60 -i tcp://127.0.0.1:54341?listen -vf vflip -c:v libx265 -preset medium -pix_fmt yuv420p -crf 20 video.mp4

Make sure to read the description of the most important options below. The options for the input format must match the ones you use in the plugin, otherwise this will not work.

Once you've started FFmpeg with these options, you can start the video recording in VaM, which will send the raw frames to FFmpeg live instead of writing images to disk. After the recording finishes, FFmpeg should finish too, and you should then have a fully rendered video file.

The video file will not have audio yet. The audio file is still in the same place that it always was for the plugin. You have to merge the video and audio manually. This should be possible in any good video editor like Avidemux or LosslessCut. I use FFmpeg for it as well, with a command like this:

Bash:
ffmpeg -i video.mp4 -i audio.wav -c copy -c:a libmp3lame -b:a 256K video-final.mp4

This assumes that the audio file was renamed to "audio.wav" and put into the video file's directory. A simple script could run both of these FFmpeg commands one after the other automatically, so it doesn't feel like two steps anymore.
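A minimal sketch of such a script, simply chaining the two commands above with the same file names:

Bash:
#!/usr/bin/env bash
set -e  # abort if the recording/encoding step fails
# Step 1: listen for the raw frames from VaM and encode the video
ffmpeg -y -f rawvideo -pix_fmt rgb24 -s 3840x1920 -r 60 -i "tcp://127.0.0.1:54341?listen" -vf vflip -c:v libx265 -preset medium -pix_fmt yuv420p -crf 20 video.mp4
# Step 2: mux in the audio once the recording is done
ffmpeg -y -i video.mp4 -i audio.wav -c copy -c:a libmp3lame -b:a 256K video-final.mp4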


Here's a description of the important options for the first FFmpeg command above. Note that their order is sometimes important:

  • "-y" is for automatically overwriting the video file if it already exists
  • "-f rawvideo" tells FFmpeg to expect raw image data for the frames, instead of something like JPEG or PNG.
  • "-pix_fmt rgb24" sets the pixel format. If you record video with transparency preserved, you have to use "-pix_fmt argb" instead.
  • "-s 3840x1920" sets the width and height of the frame images (4K in this case)
  • "-s 60" sets the framerate.
  • "-i tcp://..." makes FFmpeg read the individual frames from a TCP socket. FFmpeg acts as the server, and the VaM plugin connects to it as a client.
  • "-vf vflip" mirrors the images vertically. Without it, the video will be upside-down. This is probably due to the pixel order in Unity textures.
  • The other options just describe how to encode the output video. You can choose whatever options you want here.
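One caveat for the transparency case mentioned above: the output codec must also support an alpha channel, which H.265 in MP4 does not. A hedged example using ProRes 4444 instead, with the input switched to "argb" and everything else as in the 4K command:

Bash:
ffmpeg -y -f rawvideo -pix_fmt argb -s 3840x1920 -r 60 -i "tcp://127.0.0.1:54341?listen" -vf vflip -c:v prores_ks -profile:v 4444 -pix_fmt yuva444p10le video.mov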
 
I am on Linux and got some errors about being unable to write to the TCP socket. However, after enabling the option in VaM that allows plugins to use the network and then reloading the plugin, it worked!

This is so good! With your ffmpeg options, the video is very small and looks pretty good! Speed is about 0.185× realtime for the example video I made at 1920x1200. Considering there is zero time spent in Avidemux afterwards, it's super good.

Also, not sure if I was doing it wrong before, but when I recorded via "resume playback and record" and then merged the image files with Avidemux, aiming for a perfect loop, there was always some discrepancy and it was kind of broken; I could never get the looping quite good enough. With the direct frame feed it now seems much better. I'll do more testing, but so far it looks perfect!

Big thanks for creating this, it's INSANE and works like a charm.
 

The thing with having to enable plugin network access is interesting. It seems obvious that this would have to be enabled, but I actually disabled it for testing and the plugin still worked. Maybe VaM has to be restarted for the change to take effect.

Quality should hopefully be the same as when rendering to PNG images, because there is no lossy compression involved in either case. I did once compare the streamed video file with one created the "old" way from PNGs (from the same rendering, using the slower "Stream + Images" option), and the resulting files were exactly the same, byte for byte. That was only a small test though, so I can't be certain.

I've actually never tested or even thought about the "resume" feature of the plugin, so I'm glad it not only works but might have actually fixed something. ;)
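For anyone who wants to repeat that check, a byte-for-byte comparison is a one-liner (file names are placeholders):

Bash:
cmp -s streamed.mp4 from-pngs.mp4 && echo identical || echo different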
 
About the resume thing: it still produced a slightly visible glitch, but I think that's just VaM. If you resume from Timeline, say, the very first few frames are not yet updated to the movement, so it kind of snaps. Simply having the animation playing and hitting record at any point fixes that; for a loop animation, the exact starting point shouldn't matter. I tested an animation + camera timeline that lasts exactly 10 seconds, recorded 10 seconds from a random point, and it looped perfectly! So yes, your improvements have definitely made this possible now!
 
Awesome work again! It doesn't seem to like VR, however; I got an endless scrolling video when trying to stream and render an 8K 180 SBS VR video.

Attachments: video.mp4 (86.4 MB)
 
That looks like the resolution you provided to FFmpeg doesn't match the one that the plugin renders in. Are you using the correct value for the "-s" option to FFmpeg? I just rendered an 8K VR180 Stereo video that looked fine with "-s 7680x3840". When you start recording in VaM, the plugin should print a line to the message log containing the size and format you should use for FFmpeg ("Sending raw image data ...").
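For reference, that would be the guide's 4K command with just the size swapped, everything else unchanged:

Bash:
ffmpeg -y -f rawvideo -pix_fmt rgb24 -s 7680x3840 -r 60 -i "tcp://127.0.0.1:54341?listen" -vf vflip -c:v libx265 -preset medium -pix_fmt yuv420p -crf 20 vr180.mp4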
 
Perfect, that worked! Didn't realize I had the wrong resolution so thank you.
 
This is not the static look, mind you. It looks fine before the animation starts. When playing the scene at normal game speed, it looks almost perfect. It's only in video rendering that it gets this bad.
Did you ever find out how to completely fix it? I have a scene where the character walks forward; it's wild how awful it looks. It's all done with Timeline, too.
 