Video Renderer for 3D VR180, VR360 and Flat 2D & Audio + BVH Animation Recorder

How do we use the shadersource files which are included in the latest (v10c) release?
 
How do we use the shadersource files which are included in the latest (v10c) release?
I put them in just so people can edit the plugin, or in case I abandon it, since they cannot be found inside the var file. So you do not need them at all to use the plugin.
 
Is there some issue with it? Does it not render properly? I haven't tried it. If you are getting visual seams between the different sides of the cube, the overlap option should help a lot, but I'm not sure why a skin shader would show up differently or not at all.

I hope in the newest versions this is no longer an issue?

It doesn't show up in the preview or in the final rendered images, but I don't know if that's an issue with the SSS plugin in general.
 
Eosin updated Video and Image Renderer for 3D VR180 and VR360 and Flat 2D with a new update entry:

v11: Panoramic stereo, full transparency, reduced VRAM, camera target

- Added panoramic stereo mode for VR render: Each pixel column is rendered from the correct position, creating seamless, accurate stereo in all view directions except directly up and down.
> This option is around 15-30 times slower than the normal method. However, the very low VRAM cost also allows maximum rendering quality alongside accurate stereo. VaM will be unresponsive while rendering out a frame.
- Full support for transparency with semi-transparent clothing and materials in all cases...

Read the rest of this update entry...
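For anyone curious how the per-column mode described in the v11 entry above can work in principle, here is a minimal sketch of the geometry in Python, assuming an equirectangular output and a fixed IPD. The names, resolution and IPD value are illustrative assumptions, not the plugin's actual code or defaults.

```python
import numpy as np

WIDTH = 4096   # assumed equirect output width in pixels
IPD = 0.064    # assumed interpupillary distance in metres

def column_eye_positions(column, width=WIDTH, ipd=IPD):
    """Return (left, right) eye offsets for one pixel column of the panorama."""
    yaw = (column + 0.5) / width * 2.0 * np.pi - np.pi    # -pi..pi around the rig
    forward = np.array([np.sin(yaw), 0.0, np.cos(yaw)])   # view direction of this column
    right = np.array([np.cos(yaw), 0.0, -np.sin(yaw)])    # perpendicular, in the horizontal plane
    return -0.5 * ipd * right, 0.5 * ipd * right

# A renderer using this scheme points a narrow camera along `forward` from each
# eye offset and copies that single rendered column into the output image, which
# is why the mode is far slower than rendering six cube faces per eye.
left_eye, right_eye = column_eye_positions(0)
```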
 
lmao, amazing timing, I was JUST about to ask if we could have the camera not be frozen so we can use Timeline animations on the camera
 
Ah, may I request a "stop recording" button/action that we can use with Timeline? Currently, I have to set the seconds to record manually for every camera Timeline animation. It'd be much easier to have something like the "start recording" action so we can record as long as needed and trigger "stop recording" manually.

Unless there is one and I've missed it lmao

Also, huge thank you again for your time and effort on this. You've legit pushed the envelope further with this amazing plugin
 
Ah, may I request a "stop recording" button/action that we can use with Timeline? Currently, I have to set the seconds to record manually for every camera Timeline animation. It'd be much easier to have something like the "start recording" action so we can record as long as needed and trigger "stop recording" manually.
Unless there is one and I've missed it lmao
Also, huge thank you again for your time and effort on this. You've legit pushed the envelope further with this amazing plugin
No worries, I added it; it's just one line of code.
Also, you should have been able to animate the camera by just animating the Empty/Atom that holds the plugin pretty much since the first version. It seems to work on my end.
 
Eosin updated Video and Image Renderer for 3D VR180 and VR360 and Flat 2D with a new update entry:

v12: Support command buffer and post-processing effects

- Added support for command buffer effects such as Hunting-Succubus' Subsurface Scattering Skin
- Added support for post-processing effects such as MacGruber's PostMagic
> Some post-processing effects don't work correctly in a VR render. This may be partially ameliorated by using the "smooth stitching" functionality. If...

Read the rest of this update entry...
 
The last version that worked for me was v10. v11 & v12 fail to load shaderbundles and spit out constant object reference errors when loaded on anything other than an empty atom. It works on a blank scene with no character. I'll have to test later when I get back home to find out what's not compatible.

Edit: Put it in a subscene and brought it over to another scene and it works. Using MacGruber's pose camera causes the preview to stop (plugin on an Empty bound to the WindowsCamera, not on the WindowsCamera itself). I'm not sure it captures all the movement from the pose camera, so I will run a test render.

Edit #2: Rendering pauses when switching to the pose camera (M key; it uses the WindowsCamera), but resumes when I switch back. The pose camera can still switch angles while inactive, but no other movement besides that. I can't load the plugin on the WindowsCamera at all, not even in an empty scene. Seems like I can no longer use it with the WindowsCamera.
 
The last version that worked for me was v10. v11 & v12 fail to load shaderbundles and spit out constant object reference errors when loaded on anything other than an empty atom. It works on a blank scene with no character. I'll have to test later when I get back home to find out what's not compatible.
Thank you, the issue with reference errors when using it on something other than an Empty should be fixed now. The problem with the shader appears to be an incompatibility with the SuperShot plugin: I also use MacGruber's shader, and when it is already loaded it cannot be loaded again. I clarified the error message; in that case supersampling is unavailable.
 
I could try putting it on other things, like Person atoms. I know that used to work as well. PostMagic's motion blur doesn't work; it just completely blurs every image regardless of settings in desktop mode. Not really a problem for 60 fps and higher, but it is somewhat useful when recording at 30.
 
I could try putting it on other things, like Person atoms. I know that used to work as well. PostMagic's motion blur doesn't work; it just completely blurs every image regardless of settings in desktop mode. Not really a problem for 60 fps and higher, but it is somewhat useful when recording at 30.
Yep, I noticed that too; some PostMagic effects are also relatively stronger in the game view than in the plugin, and I haven't figured out why. However, I believe all of the effects except Depth of Field (which seems to work OK) can also be applied 1:1 in post-processing in a video editor, since they are purely image-based and use no information from the game except the rendered image. It is a little less convenient, but it can be done when creating the video from the image sequence.
 
Thank you for this awesome plugin! Would it be possible to have a pause and resume recording function?
 
Hey there @Eosin

Got 1 "why not" item, 1 feature request and a feature/fix suggestion.

First - Why not
You added the slow way to get the best depth; have you considered a fast way for "fake 3D", aka 2.5D?
If you can figure out the shader, the zBuffer depth map and a single cube camera are all you need for a stereoscopic effect.
Are the zBuffers in Unity normalized? I know they aren't in specific units, but is the 0-to-1 range normalized to the near and far clip planes? If so, the zBuffer images can be cubemapped into an equirect as well.

Supposedly you can use this reshade filter with VAM to make VAM flatscreen stereoscopic but I never figured it out. This is a link to the parallax function to see the algorithm used.
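For what it's worth, Unity's depth buffer is stored as a normalized 0..1 value, but the mapping to the near and far clip planes is non-linear. Below is a minimal sketch of the conventional conversion back to eye-space distance, assuming a standard (non-reversed) perspective depth; on platforms where depth is stored reversed, d would first need to be flipped to 1 - d. The function name is purely illustrative.

```python
def linear_eye_depth(d, near, far):
    """Convert a normalized 0..1 depth-buffer value to eye-space distance."""
    return (near * far) / (far - d * (far - near))

# The raw value is heavily biased towards the near plane, so a depth image is
# not directly usable as a disparity map without linearizing it first.
print(linear_eye_depth(0.5, near=0.1, far=100.0))   # ~0.2, not 50
```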

Second - Feature request:
Could you add a debug option for taking a picture that spits out all of the individual cube face images? If possible, both their color images and their depth map images.
It could be useful for testing the next thing...

Third - Feature/fix suggestion:

The issue at the intersections when doing 90-degree rotations is "retinal rivalry" (it took a while to find the name for it). The IPD shifts the frustum volumes of the cameras so they no longer all meet up at 90 degrees. There are places in space that are seen by only one eye.

The mitigation is to increase the FOV (and resolution) so that all of the frustums meet up and overlap, then fade (smoothstep?) across the overlaps. In the example I studied, the author increased the FOV from 90 to 108 degrees for the front-facing cameras; the side cameras are kept at 90 degrees.

Why 108 degrees? I think there's some math/rendering convenience involved, but I'm not 100% sure.
At 108 degrees and a 1-unit circle radius, a 0.056-unit IPD is accounted for exactly.
Anything at that 1-unit radius or farther away is covered by the overlap. A larger IPD pushes the full-coverage radius back if the 108-degree FOV is kept.

I was wondering if there is a further improvement to be had by comparing the zBuffers in the overlap.
The pixel color that wins between the two overlapping render images should always be the closest one (I think)?
[Attached image: Compensating for Retinal Rivalry with Increased FOV Overlap]
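To make the fade and depth-compare ideas above concrete, here is a rough sketch in Python under assumed array shapes. The smoothstep fade and the "nearest depth wins" alternative are shown side by side; this is only an illustration, not how the plugin actually stitches faces.

```python
import numpy as np

def smoothstep(edge0, edge1, x):
    """Hermite interpolation clamped to [0, 1]."""
    t = np.clip((x - edge0) / (edge1 - edge0), 0.0, 1.0)
    return t * t * (3.0 - 2.0 * t)

def blend_overlap(color_a, depth_a, color_b, depth_b, overlap_pos):
    """Blend two overlapping face renders across a seam strip.

    color_*: (H, W, 3) images, depth_*: (H, W) eye-space depths,
    overlap_pos: (H, W) 0..1 position of each pixel across the overlap.
    """
    w = smoothstep(0.0, 1.0, overlap_pos)[..., None]       # fade from face A to face B
    faded = (1.0 - w) * color_a + w * color_b
    # Alternative raised above: let the nearer surface win outright.
    nearest = np.where((depth_b < depth_a)[..., None], color_b, color_a)
    return faded, nearest
```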
 
hi @Eosin ✋
Are you considering an update that slows down the sound during recording to make Realtime Lipsync, rhythm audio, etc. work?🥺
 
Eosin updated Video and Image Renderer for 3D VR180 and VR360 and Flat 2D with a new update entry:

v13: Audio recording, pause and resume recording

- Added audio recording when recording video. Audio is recorded at the position of the rendering camera (the control holding the plugin), not the main game view. Audio is saved as a WAV file to the video folder.
> The audio sample range should usually remain at 32768, but if you unexpectedly get popping noises in the recording, for safety you can try reducing it by a notch or two.
- Video and audio recording can now be paused and resumed (within the same game session). Escape pauses recording...

Read the rest of this update entry...
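A small sketch of why the sample-range note in the v13 entry above can matter, under the assumption (not confirmed in the entry) that the recorder scales floating-point samples in [-1, 1] to 16-bit PCM: int16 tops out at 32767, so a full-scale peak multiplied by 32768 wraps or clips unless it is clamped, which is one common source of pops.

```python
import numpy as np
import wave

def float_to_int16(samples, scale=32768):
    """Scale float samples in [-1, 1] to 16-bit PCM, clamping to avoid wrap-around pops."""
    return np.clip(samples * scale, -32768, 32767).astype(np.int16)

def write_wav(path, samples, rate=44100, scale=32768):
    """Write mono float samples to a 16-bit WAV file (illustrative, not the plugin's code)."""
    with wave.open(path, "wb") as f:
        f.setnchannels(1)
        f.setsampwidth(2)        # 2 bytes = 16-bit
        f.setframerate(rate)
        f.writeframes(float_to_int16(samples, scale).tobytes())
```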
 
How do you install this? If I just put the var file in my folder, I only get error messages. Where do I put the other folder after I have extracted the shadersource?
 
hi @Eosin ✋
Are you considering an update that slows down the sound during recording to make Realtime Lipsync, rhythm audio, etc. work?🥺
To be honest, I wasn't even aware of the issue, since I have been testing mostly with music-based and silent scenes. However, now you can record audio at the same time, and this also slows down audio playback internally to the capture rate, so it should also synchronize with audio-based behaviors and plugins. Thanks
Thank you for this awesome plugin! Would it be possible to have a pause and resume recording function?
Yes, I added this, thanks for the suggestion. However, you cannot pause/resume across different game sessions (as when recording a very long scene at very high resolution); in that case I think you could get some control over multiple sessions by using the Timeline plugin to end/start the render at specific points in the scene.
Hey there @Eosin

Got 1 "why not" item, 1 feature request and a feature/fix suggestion.

First - Why not
You added the slow way to get the best depth; have you considered a fast way for "fake 3D", aka 2.5D?
If you can figure out the shader, the zBuffer depth map and a single cube camera are all you need for a stereoscopic effect.
Are the zBuffers in Unity normalized? I know they aren't in specific units, but is the 0-to-1 range normalized to the near and far clip planes? If so, the zBuffer images can be cubemapped into an equirect as well.

Supposedly you can use this reshade filter with VAM to make VAM flatscreen stereoscopic but I never figured it out. This is a link to the parallax function to see the algorithm used.

Second - Feature request:
Could you add a debug option for taking a picture that spits out all of the individual cube face images? If possible, both their color images and their depth map images.
It could be useful for testing the next thing...

Third - Feature/fix suggestion:

The issue at the intersections when doing 90-degree rotations is "retinal rivalry" (it took a while to find the name for it). The IPD shifts the frustum volumes of the cameras so they no longer all meet up at 90 degrees. There are places in space that are seen by only one eye.

The mitigation is to increase the FOV (and resolution) so that all of the frustums meet up and overlap, then fade (smoothstep?) across the overlaps. In the example I studied, the author increased the FOV from 90 to 108 degrees for the front-facing cameras; the side cameras are kept at 90 degrees.

Why 108 degrees? I think there's some math/rendering convenience involved, but I'm not 100% sure.
At 108 degrees and a 1-unit circle radius, a 0.056-unit IPD is accounted for exactly.
Anything at that 1-unit radius or farther away is covered by the overlap. A larger IPD pushes the full-coverage radius back if the 108-degree FOV is kept.

I was wondering if there is a further improvement to be had by comparing the zBuffers in the overlap.
The pixel color that wins between the two overlapping render images should always be the closest one (I think)?
Hello, sorry, I could add the debug/separate face and depth images next time; indeed, depth can be used in post-processing in some fun ways. However, it is a little involved, since the user would need to specify camera planes etc. in the UI, which most users have probably never heard of.
Thanks for your suggestions, but at the moment the overlapped-field-of-view approach seems too advanced to me (I thought about it a bit when I exported some images to a stitcher); that's why I just added some bricks you can place on the seams to hide them. I think it's kind of cool, it's like you're in a cage. Also, I don't quite see it: if you just increase all the FOVs above 90, they all overlap and you alpha-blend them (unless you prioritize the front view, for example), it's just a different way to try to hide the seams, no? At the end of the day you can't get a "correct" view of the intersection area unless you actually point the camera at it from the correct position. Even the official VR recorder for the Unreal engine, to my knowledge, just moves and rotates the camera around thousands of times each frame to get good stereo.
However, I don't think depth-based stereoscopy is necessary: since almost no time is spent rendering the cameras, there would not be much advantage for the loss of accuracy.
 
Hello, sorry, I could add the debug/separate face and depth images next time; indeed, depth can be used in post-processing in some fun ways. However, it is a little involved, since the user would need to specify camera planes etc. in the UI, which most users have probably never heard of.
Thanks for your suggestions, but at the moment the overlapped-field-of-view approach seems too advanced to me (I thought about it a bit when I exported some images to a stitcher); that's why I just added some bricks you can place on the seams to hide them. I think it's kind of cool, it's like you're in a cage. Also, I don't quite see it: if you just increase all the FOVs above 90, they all overlap and you alpha-blend them (unless you prioritize the front view, for example), it's just a different way to try to hide the seams, no? At the end of the day you can't get a "correct" view of the intersection area unless you actually point the camera at it from the correct position. Even the official VR recorder for the Unreal engine, to my knowledge, just moves and rotates the camera around thousands of times each frame to get good stereo.
However, I don't think depth-based stereoscopy is necessary: since almost no time is spent rendering the cameras, there would not be much advantage for the loss of accuracy.


Overlapping does give you a depth effect, and it's the correct one you would see if you were standing there and glanced left, but not if you turned your head left.

So indeed, the main purpose is to make it seamless. The scenes I record are always black-backgrounded and forward-focused, so it's generally moot for me.

Eventually, in VaM 3.0, it will be ray-traced on the fly and you will get perfect VR images without trickery.

You are already adding so many features you'll need to create a whole new UI just for this.

Thanks for the continued awesome
 
I think there's a bug: once I record something, I can't record again. It says:

VRRenderer: Reached end of recording time span, lengthen time or restart.

and nothing I do will allow me to even use it; I can't even take a single screenshot without that message popping up
 
The question again: where does the shadersource file go? If I just drop in the var file and add it to someone, all I get is constant error messages. Once I have extracted the shadersource, where do I put it? Help really seems to be in short supply here.
 
Could you please tell me what I'm doing wrong? I want to record just a simple movie: I set the plugin on an atom, and in the preview I see what is needed; I set the plugin to record a flat file. All I get is a bunch of screenshots of the scene. Is that OK? Should it always create pictures that I then have to join in some outside application? Does it not create mp4 or avi files?

For VR I use SteamVR (Pimax 8K). Which application is best for watching a movie created with this plugin?
 
I think there's a bug: once I record something, I can't record again. It says:

VRRenderer: Reached end of recording time span, lengthen time or restart.

and nothing I do will allow me to even use it; I can't even take a single screenshot without that message popping up
Thanks for the report, this should be fixed. Also, please note that "seconds to record" currently refers to the length of the whole video, not to a section of it when using pause/resume. So if you record the full 10 seconds (for example), the message will come up and you then have the option to increase "seconds to record" to 20 if you want to append 10 more seconds, or to start with 20 and stop after 10 seconds.
Could you please tell me what I'm doing wrong? I want to record just a simple movie: I set the plugin on an atom, and in the preview I see what is needed; I set the plugin to record a flat file. All I get is a bunch of screenshots of the scene. Is that OK? Should it always create pictures that I then have to join in some outside application? Does it not create mp4 or avi files?

For VR I use SteamVR (Pimax 8K). Which application is best for watching a movie created with this plugin?
Please read the main page of the plugin, which explains how to make a video from the frames using Avidemux. Also, flat mode records normal 2D video, not VR.
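The plugin page recommends Avidemux for that step; purely as an illustration, the same frame-to-video join can also be scripted from Python with ffmpeg, assuming a hypothetical frame-name pattern and frame rate (adjust both to whatever the renderer actually wrote out):

```python
import subprocess

# Assumption: frames named frame_00001.png, frame_00002.png, ... captured at 60 fps.
subprocess.run([
    "ffmpeg",
    "-framerate", "60",          # must match the plugin's capture frame rate
    "-i", "frame_%05d.png",      # assumed output naming pattern
    "-c:v", "libx264",
    "-pix_fmt", "yuv420p",       # widely compatible pixel format
    "output.mp4",
], check=True)
```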
The question again: where does the shadersource file go? If I just drop in the var file and add it to someone, all I get is constant error messages. Once I have extracted the shadersource, where do I put it? Help really seems to be in short supply here.
Please copy any error messages here. You do not need the shadersource files; they are for programmers.
 
I read the first page and I'm still not sure, so could you answer my questions directly instead of pointing to the first page :) (I've tried to record a 2D flat movie):

Could you please tell me what I'm doing wrong? I want to record just a simple movie: I set the plugin on an atom, and in the preview I see what is needed; I set the plugin to record a flat file. All I get is a bunch of screenshots of the scene. Is that OK? Should it always create pictures that I then have to join in some outside application? Does it not create mp4 or avi files?


2) For VR recording:
For VR I use SteamVR (Pimax 8K). Which application is best for watching a movie created with this plugin?
 