Video Renderer for 3D VR180, VR360 and Flat 2D & Audio + BVH Animation Recorder

When I tried to render a 9:16 vertical video with this plugin, only part of the preview screen was output to the image. After checking the source code (Eosin_VRRenderer.cs), I found that changing
C#:
Rect sourceRect = new Rect(0, 0, renderTex.width, renderTex.width);
to
C#:
Rect sourceRect = new Rect(0, 0, renderTex.width, renderTex.height);
fixed the issue and the rendering worked correctly: the source rect used the texture's width for its height, so non-square outputs were cropped.
 
Guys, I have these error logs:

!> Error during attempt to load assetbundle Eosin.VRRenderer.15:/Custom/Scripts/Eosin/VRRenderer.shaderbundle. Not valid
!> VRRenderer: Could not load LilyRender shader!
!> VRRenderer: Could not load LilyRenderAlpha shader!
!> VRRenderer: Could not load LilyRenderRotate shader!
!> VRRenderer: Could not load LilyRenderRotateTriangle shader!
!> VRRenderer: Could not load PixelSliceEquirect shader!
!> VRRenderer: Could not load AlphaFromDifference shader!

Someone earlier in this discussion said that it was because of a name conflict or something. Please help.
 
You have a community asset in your scene that is conflicting with the renderer.

When I get this error I systematically delete community assets until I find the offending one.

I have never gotten a scene to work with an asset that triggers this error, so hopefully you can make your scene work without it once you find it.
 
I had to delete like 3 CUAs and then it worked, but I really want to keep them in my scene. Is there no way at all? By the way, when you have the render, is it going to be a video where you can actually rotate the camera?
 
It's probably just one of those three, but no, I've never gotten this plugin to work without deleting the offending asset.

You can rotate in place in a VR180 video, but you're stuck in 3D space in the location of your camera. You can use a camride plugin and make videos with a moving perspective, but that's all predefined prior to filming.
 
Yes, what I want to do is to be able to rotate my camera while watching the rendered video I generated. Does this plugin allow that? And if so, which settings should I use?
 
If I understand your question, you're asking if you can rotate the perspective of your videoplayer, while in VR? These are videoplayer settings you'll have in Pigasus/Heresphere or whatever.

If you record a VR180 video this is possible.

Hopefully I understood your question.
 
So if I use one of these two applications, I could rotate while playing the rendered video?
 
When rendering, the hair was floating around randomly, but when playing normally it was stable.
Is there a pre-baking function?
 
For your problem ...

I do my editing in Davinci Resolve, not Avidemux, but a classic way to get a speed shift is not having the capture frame rate set in the editor (Avidemux). If you are capturing at 60 fps and Avidemux happens to be defaulting to 24 fps like standard movies/TV, it will come out as slow motion.
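If you assemble the frames with ffmpeg instead of an editor (like the commands further down in this thread), the equivalent fix is to state the input frame rate explicitly. Rough untested sketch; frames/frame_%06d.jpg is just a placeholder for wherever your capture images land:
Code:
# the frames were captured at 60 fps; say so, otherwise the image demuxer
# assumes its own default rate and the clip comes out as slow motion
ffmpeg -framerate 60 -i frames/frame_%06d.jpg -c:v libx264 -r 60 -pix_fmt yuv420p out_60fps.mp4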

But I also think that the new Timeline plugin can do more time tricks now. I don't remember previously seeing a setting for "realtime" versus "frametime" in the options for each animation sequence. When capturing you want "frametime"; it defaults to "realtime". The current scene I'm working on has triggered audio and fully dies when set to realtime.
The plugin can't "drop" frames, so it can never get out of sync when set to gametime. One frame rendered is one delta time of audio.

Cool trick: You can even do things like triggering appearance changes and other things that take time, and you don't see it in the output; the capture plugin stops and waits. This means you can capture far, far more complex scenes than you could ever run in real-time VAM. The ability to do appearance and clothing swaps and otherwise immersion-breaking triggered actions as part of the animation is a big bonus.

For scene and rendering recommendations ...
I can't give you advice on what "reasonable" looks like. In the pr0n world everyone seems to want to max out their Quest 2, and I can tell you how I do that.

In VAM...

While rendering scenes I have standard VAM set to 0.5 with no anti-aliasing. The plugin doesn't care; it runs its own render pipeline and has its own settings for that.
You want to set shader quality to max; that one matters. Smoothing to taste.

"Usually" in game engines you want the physics rate at the desired frame rate. But the latest thing I'm working on has so much soft body, hair, and clothing that VAM just stopped cold. I ended up flailing toward a solution by doubling the physics rate and setting the queue to 3. A game engine shouldn't need to compute physics more than once per frame or ever need to queue physics ticks, so *shrug*.

The only thing I use PostMagic for is a LUT, if the scene I'm rendering already uses a LUT. Otherwise I do the color grading in Davinci. I can't think of any other plugins I use that make things prettier or change how things are computed.

In the capture plugin settings ...

My 2080 Ti is able to handle full 8x anti-aliasing and 2048 per cubemap face (for 8K capture) without running out of VRAM. You should be able to do the same with the 2080.

You have to enable the post effects checkbox to catch anything done by PostMagic (the LUT). I capture 60 fps with no frame skipping.
I changed JPEG quality from 99 to the full max of 100, cuz why not? PNG would be even better for post editing, but it's way too slow even for me, especially if I don't do any color grading.

Assuming VAM can render out fast enough (for my latest scene it definitely can't), I'm getting about 180 frames (JPEG images) per minute written to disk. So a 3-minute video takes about an hour to render fully.
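Quick sanity check on that estimate, just shell arithmetic:
Code:
# 3 minutes of footage at 60 fps
echo $(( 3 * 60 * 60 ))   # = 10800 frames to write out
# at roughly 180 frames written to disk per minute
echo $(( 10800 / 180 ))   # = 60 minutes of rendering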

In my latest capture the least JPEG-friendly 8K frames are saving at 10 MB each on disk. A "pure black" 8K frame is 3 MB on disk.

For my editing and compression in Davinci Resolve ...

(I use the $300 paid version, but the free version is all most people need. I definitely recommend it.)

If you are rendering for a Quest 2, the max it can handle in its decoder is ~80 Mbit/s of h.265 "target" bitrate.
In my latest clip the actual bitrate after encode was 93 Mbit/s.

The resulting file size for 35 seconds of 8K video was ... wait for it ... 400 MB.

Think that's big? Fun fact ... YouTube can handle uploads of Avid files. The VC-3 compressed DNxHD version is 16 gigs ... just for 35 seconds of 8K video.

The "sweet spot" in my opinion is a 5K 30 Mbit/s h.265 encode from a 7K capture in VAM. Playable on a Quest 1 and still looks great on a Quest 2.
The actual bitrate after encode was 47 Mbit/s with a 200 MB file size.

What are the encode settings used?

This is the guide for the newest nvidia preset naming schemes. https://docs.nvidia.com/video-technologies/video-codec-sdk/nvenc-preset-migration-guide/

I believe the highest quality setting in Davinci for h.265 encodes is using settings similar to the "P5" preset level of nvenc.

Using 7K plugin output, tweaking the target bitrate down to 60 and lowering the preset to P4 (old name "HP" for High Performance) might be closer to what someone calls "reasonable".
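I do these encodes in Davinci, not on the command line, but if you wanted to script it, an ffmpeg equivalent would look roughly like this. Untested sketch on my end: hevc_nvenc exposes the same p1-p7 preset names from the guide above, the 80M target is the Quest 2 ceiling mentioned earlier, and master.mov stands in for whatever you export from your editor.
Code:
# h.265 via nvenc at the Quest 2 "target" bitrate; drop to -preset p4 -b:v 60M for the "reasonable" version
ffmpeg -i master.mov -c:v hevc_nvenc -preset p5 -rc vbr -b:v 80M -maxrate 100M -c:a copy quest2.mp4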

Hopefully this long-winded non-answer can give you some ideas for your own captures and encodes.
You are my new god! 😄 This gametime vs realtime setting in the Timeline plugin is a game changer for video rendering... I didn't understand why the physics were going crazy the first time I tried this renderer (for which I'm grateful)...
But now we're talking!
I can finally produce some really nice bouncing boobs videos :ROFLMAO: 😘

Thank you guys 🙏:love:

Edit 1: May I suggest, @Eosin, adding screen preview X/Y offset parameters for managing multiple cameras? Would be lovely! <3

Edit 2: Is 3.09 GB for a 13:49 min video at 4K (UHD) 60 fps a good or bad ratio? I used this setup for the encoding:
Code:
ffmpeg -framerate 60 -i CAM1/CAM1_%06d.jpg -i CAM1/CAM1_049754.wav -c:v libx265 -tune:v fastdecode -level:v 6.2 -crf 20 CAM1.mp4
Wondering how to reduce the size with minimum quality loss, of course... maybe increase the CRF value to 30 or higher? If anyone knows a good tip to achieve such a thing, feel free to share :)

Edit 3: After a few searches, tries and tests, I found a way to drastically reduce the file size with a low level of loss. I first render the images to mp4 at -crf 16, then I just re-encode the new mp4 to mkv with a second pass:
Code:
ffmpeg -i input.mp4 -vcodec libx265 -crf 26 output.mkv
I got a 924 MB mp4 for a 2min26 video and a final mkv at 74.5 MB. I guess values between 24-26 CRF are good for the second pass, and 16-18 for the first pass, depending on the complexity of your scenes.
This is not the best workflow, but it can help in some cases (a one-pass alternative is sketched below).
Hope this can help anyone.
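An alternative worth trying (values are just guesses, not tested on this scene): since every re-encode of an already lossy mp4 adds its own artifacts, a single x265 encode straight from the JPEG frames at a higher CRF and a slower preset usually lands in the same size range with one less generation of loss. Using the same paths as the Edit 2 command:
Code:
# one pass from the source frames; try CRF values around 22-26 and compare size/quality
ffmpeg -framerate 60 -i CAM1/CAM1_%06d.jpg -i CAM1/CAM1_049754.wav -c:v libx265 -preset slow -crf 24 -c:a aac CAM1_small.mkv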
 