Video Renderer for 3D VR180, VR360 and Flat 2D & Audio + BVH Animation Recorder

Eosin updated Video and Image Renderer for 3D VR180 and VR360 and Flat 2D with a new update entry:

v7: Transparent VR and flat renders and preview enhancements

- Added transparency support for VR and flat renders. This allows compositing multiple VR videos together and compositing Virt-a-Mate VR footage into real-life VR footage.
>>> Note: Loading any scene with PostMagic disables transparency everywhere until you restart VaM
- Preview window can stay open permanently for positioning scene elements relative to camera
- Improved crosshair and added borders to preview window so it's clearly visible in empty...

Read the rest of this update entry...
 
!> Error during attempt to load assetbundle Eosin.VRRenderer.6:/Custom/Scripts/Eosin/MacGruber_Convolution.shaderbundle. Not valid
!> VRRenderer: Could not load convolution shader!

I get this error with the plugin loaded.
 
Suggestion for folks:

You can add the plugin to the Window Camera. Thankfully they point in the same direction.

The window camera can be set to a full 90 degrees FOV to help frame the shot. Before I noticed there was a preview, I used the UI monitor that already comes with the window camera.

If nothing else, the icon is already a camera.

For Eosin:

Naught but a love letter because transparency!
Be still my beating heart.
I never imagined you could spit out an alpha channel from a VR camera.
Brilliant.
 
Eosin updated Video and Image Renderer for 3D VR180 and VR360 and Flat 2D with a new update entry:

v8: Real time VR preview and background image for instant compositing

- You can now preview the VR render in real time at the quality level of the final render. This should make it much easier to position things accurately and estimate the impact of cubemap resolution, smooth stitching and MSAA settings.
- You can now load a background image into the preview window. This makes it easy to composite a render into existing VR or flat footage by matching lighting and positioning, or FOV for flat content.
- Correction to v6 note...

Read the rest of this update entry...
 
OK, so first: this is amazing. A couple of issues.
I have a Vuze VR camera that takes 360 stereo images and videos.
I took a photo of my room and loaded it into the preview; however, the preview is a "mono" single image and the photo is a stereo image. So I get the person atom straddling two different views of my room, her top half in one and her bottom half in the other. This isn't a good preview.
Perhaps it should assume that the image loaded is in the same format you are set to render, and crop out either the top or the bottom of the image so it matches? That would stop it from being a manual step, which sucks.
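
In case anyone wants to do that crop by hand in the meantime, a minimal Python/Pillow sketch (the file names are made up, and it assumes an over/under layout):

[CODE]
# Rough sketch of the crop: keeping only the top half of an over/under stereo
# photo gives a single-eye image that matches a mono preview.
from PIL import Image

img = Image.open("room_OU_360.png")        # hypothetical over/under stereo photo
w, h = img.size
one_eye = img.crop((0, 0, w, h // 2))      # crop box is (left, upper, right, lower) in pixels
one_eye.save("room_mono_360.png")
[/CODE]
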
Also, resolution. I picked 4K, which my camera is. However, it claims this is 3840x1920. My camera makes images at 3840x2160 for "half" res, while the full high-quality 4K videos are 3840x3840. Is it possible to add these resolutions, as I think they are fairly standard for VR camera use, and it would make compositing a lot simpler...
Lastly, is it possible to get the "preview" backdrop image automatically stitched into the frames as they are taken? This would slow processing down slightly but remove a whole annoying manual step later on. It would then be simple to load a VR 360 stereo image and render the video onto it without any other software; a lot of users might appreciate this.

Amazing work btw, lots of updates, loving it.
 
Since it is still not documented what file tag a file needs to be compatible with most VR players:

For Pigasus VR Player in 180 Stereo you use

Filename_180_3dh.jpg

for Pictures

or

Filename_180_3dh.mp4


for Videos

This enables automatic format detection and the most comfortable playback, meaning all pictures and videos load correctly from scratch.

More questions and beginner recommendations...

For the Developer:

Please explain how one adds an empty atom and how to add the plugin to it correctly.
And most of all, how to see / find the empty atom in EDIT mode.
It has been a mess for me to even find the empty atom ;)

To anyone using it currently:

Empty ATOM Workflow:


- How do I make a model look with her eyes accurately at the empty atom position, like looking into the camera?

Window Camera Workflow:

- @ragingsimian
Since you recommended that one can also add the plugin to a Window Camera:

- How can I add the VR Renderer plugin to a camera instead of an empty atom so that the camera then works?

I see no picture if I add the plugin to a camera, even though it is activated.
I only see a black box and no preview.

Empty ATOM Workflow - squeezed and stretched looking atom camera and renders:

- At what size, distance and angle does the empty atom have to be placed so that renders don't look like viewing the model through a glass, i.e. with the floor distorted, squeezed and stretched?
It even looks that weird in the Preview Window.

In several trials I managed, at one point, to create a picture that looks identical to what it looks like in VR using VaM.
But the other trials always fail, and I have no idea what I am doing wrong in my workflow, since I also placed the atom the same way, at least I think I did.
Most pics look fisheye-distorted, as if the camera were placed at an odd angle far above the person, but it's not.



My normal workflow at the moment with the atom:

I load the VAR at exactly this position:

2021-11-16 14_03_41-VaM.jpg



I add an empty atom of exactly that size and place it here, near the mouth and directly in front.

2021-11-16 13_50_12-VaM.png


This resulted in a working render which looks accurate to me.

Can someone perhaps check whether it's OK like that, and not dimensionally distorted, i.e. squeezed or stretched on the ground?

I added a

filename_180_3dh.png

tag so it works from scratch in Pigasus on Quest 2.

Here is the original PNG upload, around 10 MB:

20211116-135714_180_3dh.png



Side note regarding stereo render plugins, and the fact that such a plugin is missing for DAZ Studio:

I deeply hope someone jumps on a decent paid DAZ Studio plugin for 3D stereo side-by-side (much sharper 3D picture results) and 3D VR180 camera rendering of images and videos.

That would be a goldmine for anyone who can program it as accurately, and with as much passion, as it is done here.
It's unbelievable that DAZ has no such plugin, even in 2021.
Semi-professional and professional artists alike would appreciate being able to render in stereo 3D, especially nowadays, when VR is going mainstream and 3D stereo side-by-side is having its deserved revival in 3D picture art and 3D videos / movies.
 
It is also still not documented what file tag a file needs to be compatible with most VR players, and the current plugin still does not add one.
I do not want to manually configure a VR player for this and that picture every time.


Is it

180 surround spherical fisheye video: “_180F” and “_180x180F”

like filename_180x180F.jpg

?
The videos are equirectangular projection, not fisheye projection.
Fisheye videos look like a circle when played in a standard video player.

Some of those naming rules Pigasus is showing are arcane and less used. These are easier to understand (IMHO), and I've never found a player that doesn't understand them for the most common formats: https://forum.skybox.xyz/d/157-filename-rules-for-vr-format

These are commonly used and understood by people and players.
For 180 monoscopic --> videoname_180.mp4
For 360 monoscopic --> videoname_360.mp4
For 180 stereoscopic --> videoname_SBS_180.mp4 or videoname_LR_180.mp4
For 360 stereoscopic --> videoname_OU_360.mp4 or videoname_TB_360.mp4
Note that you *should* select a 2:1 aspect ratio for the output of any SBS/LR video and 1:1 for OU/TB.
For VR, full width (not half width) is the common standard.
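
As a small illustration of those rules and aspect ratios (this isn't part of any plugin or player, just a sketch; the function name is made up):

[CODE]
# Sketch only: derive the packed output size and the common filename suffix
# from the format (180 or 360), the stereo layout (None, "SBS" or "OU")
# and the per-eye width in pixels.
def vr_output(fov_deg, stereo_layout, eye_width):
    # per-eye equirect aspect: 1:1 for a 180 half-sphere, 2:1 for a full 360 sphere
    eye_height = eye_width if fov_deg == 180 else eye_width // 2
    if stereo_layout == "SBS":        # left/right packing
        return eye_width * 2, eye_height, "_SBS_%d" % fov_deg
    if stereo_layout == "OU":         # over/under packing
        return eye_width, eye_height * 2, "_OU_%d" % fov_deg
    return eye_width, eye_height, "_%d" % fov_deg

print(vr_output(360, "OU", 3840))   # (3840, 3840, '_OU_360') -> 1:1 total, the Vuze 4K 3D format
print(vr_output(180, "SBS", 1920))  # (3840, 1920, '_SBS_180') -> 2:1 total
[/CODE]
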
 
Thanks.

For 180 stereoscopic --> videoname_SBS_180.mp4 or videoname_LR_180.mp4

works great too, at least in Pigasus.

So my current experiences are:

For Pigasus VR Player ( Oculus Quest Platform )

Pictures ( JPG or PNG ) :


filename_SBS_180.png

filename_lr_180.png

filename_180_3dh.png


work great and are auto-detected.

The same applies to videos, just with the ".mp4" extension.


For Whirligig VR Player ( PC VR Platform )

The default settings must be on, or at least the BARREL setting has to be activated.

Only Filenames

filename_lr_180.png

filename_180_3dh.png


work.

Do not set the app to "auto-detect" dimensions.
It won't work anymore then.
This app is full of bugs even after so many years, which is sad to see, since the devs do not listen at all when it comes to fixing their issues.
I installed the latest beta, so this is how it currently behaves.

I am unsure whether Skybox VR (Quest and Rift S / PC platform) now supports 3D pictures, since they didn't even answer complaints in a thread.

If anyone knows PC VR or Quest players which also auto-detect the format once these tags are added, please recommend them, of course.
 
OK, so first: this is amazing. A couple of issues.
I have a Vuze VR camera that takes 360 stereo images and videos.
I took a photo of my room and loaded it into the preview; however, the preview is a "mono" single image and the photo is a stereo image. So I get the person atom straddling two different views of my room, her top half in one and her bottom half in the other. This isn't a good preview.
Perhaps it should assume that the image loaded is in the same format you are set to render, and crop out either the top or the bottom of the image so it matches? That would stop it from being a manual step, which sucks.
Also, resolution. I picked 4K, which my camera is. However, it claims this is 3840x1920. My camera makes images at 3840x2160 for "half" res, while the full high-quality 4K videos are 3840x3840. Is it possible to add these resolutions, as I think they are fairly standard for VR camera use, and it would make compositing a lot simpler...
Lastly, is it possible to get the "preview" backdrop image automatically stitched into the frames as they are taken? This would slow processing down slightly but remove a whole annoying manual step later on. It would then be simple to load a VR 360 stereo image and render the video onto it without any other software; a lot of users might appreciate this.

Amazing work btw, lots of updates, loving it.

To do what you want to do properly takes a bit of work, and I recommend using DaVinci Resolve. If you are going to do this a lot, the pro version is $300 IF you get the USB key version that comes with Fusion for free. Otherwise you spend $300 for Resolve and $300 for Fusion. You need Fusion to do editing natively in VR.
Or, if you are a masochist, you can do what you want using tricks in Blender. Also free.
If you aren't going to do this a lot, sure, you can eyeball the perspective and try to get the feet in the right spots and the size scales right.

I explained it a bit up at the top, but to do a 360 projection in stereo, the aspect ratio per eye MUST be 2:1, and you end up back at 1:1 when you combine both eyes. That's why the Vuze camera has 360 3D at 3840 x 3840.

If you want "easy mode". Do 3D 360 for both the camera and the VAM capture. Then they will overlay perfectly without resizing aspect ratios.
Chop off the front or back part you don't want and there you go.

You want to save yourself that hassle so you can spend all day fiddling with where to locate stuff.
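
If all you need is that straight overlay, a minimal Python/Pillow sketch will do it (file names are made up; it assumes the plugin render keeps its alpha channel and both images share the same projection and resolution):

[CODE]
from PIL import Image

# Transparent VaM render over the camera photo; both are assumed to be
# the same projection (e.g. OU 360) and the same pixel size.
bg = Image.open("vuze_room_OU_360.png").convert("RGBA")
fg = Image.open("vam_render_OU_360.png").convert("RGBA")   # must keep its alpha channel
assert bg.size == fg.size, "background and render need the same resolution and layout"

Image.alpha_composite(bg, fg).convert("RGB").save("composited_OU_360.png")
[/CODE]
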
 
To anyone using it currently:

Empty ATOM Workflow:


- How do I make a model look with her eyes accurately at the empty atom position, like looking into the camera?

Window Camera Workflow:

- @ragingsimian
Since you recommended that one can also add the plugin to a Window Camera:

- How can I add the VR Renderer plugin to a camera instead of an empty atom so that the camera then works?
I'll edit this post after work today with screenshots.

Empty ATOM Workflow - squeezed and stretched looking atom camera and renders:

- At what size, distance and angle does the empty atom have to be placed so that renders don't look like viewing the model through a glass, i.e. with the floor distorted, squeezed and stretched?
It even looks that weird in the Preview Window.

VaM has multiple cameras, and in VR you can see the locations of the others.
The desktop view is a 2D camera. When you hop into that camera, it becomes "person", and anything tagged to follow person will start looking towards that camera.

The VR camera and the plugin's camera are, in essence, globe-shaped camera sensors. You can't represent the output of a globe-shaped image on a flat surface without distorting it to fit. The type of distortion used is the image projection type.

In several trials I managed, at one point, to create a picture that looks identical to what it looks like in VR using VaM.
But the other trials always fail, and I have no idea what I am doing wrong in my workflow, since I also placed the atom the same way, at least I think I did.
Most pics look fisheye-distorted, as if the camera were placed at an odd angle far above the person, but it's not.

It's easier than you think. Just a few things to wrap your head around about VR projections and you're golden. It'll be easier to see what I mean when you have the 2D window camera and the plugin camera riding together.
 
I explained it a bit up at the top, but to do a 360 projection in stereo, the aspect ratio per eye MUST be 2:1, and you end up back at 1:1 when you combine both eyes. That's why the Vuze camera has 360 3D at 3840 x 3840.

It took me a bit to work out what you were trying to say, but I finally realised I need the FINAL image to be 1:1; when I selected that, I got the correct 4K 3840x3840 resolution option! Thank you.
I guess it's no fault of the plugin that this stuff is confusing, lol.
Anyway, I did a still shot and am playing with overlaying it onto a photo from my Vuze camera now. The trick, I think, is to replicate the height of the camera exactly, and the rotation for the "center" of view. Everything else should then line up. It will be fun to play with.

I would still like the ability to have the preview image pre-rendered into the stills produced by the plugin. I do not have money for video editing software at this time sadly.
 
It took me a bit to work out what you were trying to say, but I finally realised I need the FINAL image to be 1:1; when I selected that, I got the correct 4K 3840x3840 resolution option! Thank you.
I guess it's no fault of the plugin that this stuff is confusing, lol.
Anyway, I did a still shot and am playing with overlaying it onto a photo from my Vuze camera now. The trick, I think, is to replicate the height of the camera exactly, and the rotation for the "center" of view. Everything else should then line up. It will be fun to play with.

I would still like the ability to have the preview image pre-rendered into the stills produced by the plugin. I do not have money for video editing software at this time sadly.

You will be happy to know that the company that makes DaVinci Resolve is very happy to give away a free version that has 90% of what's needed for making near-professional-grade video productions.

And my man Hugh Hou has you covered, showing you step by step how to get started, with a useful example for your needs.

As for rendering the preview image into the stills - I'm not sure what you meant. But if you piggyback the Window Camera with the VR plugin camera, you can screenshot the view seen by the Window Camera.

(example coming in a bit)
 
Hopefully the quick tutorial below will be helpful to someone.

These 2 video examples were taken from the same camera position.
The 2D flat version was set to output 16:9 at 1080p
https://www.redgifs.com/watch/trimcleanhart

The 3D VR version was set to output 2:1 at 3840x1920
https://www.redgifs.com/watch/memorablehardaddax

How it was done ...

Click each image thumbnail to make it full size.

Note in the first image the icon for the window camera
You can select it directly here or select it in the selection menu
screen 1 window camera.PNG
To find it in the selection menu
Open the atom selection menu
Unhide items if needed
Select the "Window Camera" and it's control
screen 2 open item.PNG
Enabling the window camera will bring up its video screen in the scene.
Enabling HUD will create a window down by the UI.
Change the angle to 90 degrees.
Screen 3 enable cam with 90 degrees.PNG
Go to the plugins tab for the Window Camera,
add new entries and load the plugin.
Open the Custom UI menu and enjoy making videos and screenshots.
screen 6 select plugin.PNG
 

Attachments

  • FLAT_90.jpg (653 KB)
  • FLAT_16-9.jpg (724.4 KB)
Eosin updated Video and Image Renderer for 3D VR180 and VR360 and Flat 2D with a new update entry:

v9: Direct compositing to render, video background support, rotate eyes functionality for VR render

- You can now render the background directly to the output and don't have to composite in an external editor if you don't need advanced compositing features
- You can now load a video file as a background source and directly composite it into the output (Note: No support for x265 codec which applies to some high-resolution VR videos)
- Note: The plugin may rarely get into a bugged state where you can't composite with video even after reloading the plugin. Restart VaM and it should work.
-...

Read the rest of this update entry...
 
Amazing work! This is a really awesome plugin. I can't wait to try this latest update.
Thanks!
(btw, " As for rendering the preview image into the stills - not sure what you meant. ", This update is exactly what I meant, " - You can now render the background directly to the output and don't have to composite in an external editor if you don't need advanced compositing features " )
 

Oh my stars and garters!

I was so excited I never realized you were doing the stereo that way! I thought LilyRender handled the stereo problem, not monoscopic cubemapping. I had no idea Unity sucked at that.

So, stereo cubemap problems: I know them well. Those seams and I have gone to war to bring 3D VR to Blender Eevee via the same cubemap trick you are doing in Unity.

I fell out of my chair when I first saw that @CouchOimoSp (on Twitter) had figured out how to win the war.

This simple mask.
Mask_2160(core1800).png
These are the left eyes. The first one I didn't use the transparency mask and the second one I did.
No-Blend-Right_Camera_Equirect.png Blended_LeftEquirect.png
Full VR 180 image with the mask applied. There used to be a decently hidden but still noticeable seam at the bookshelves.
Mr_Elephant_VR180_SBS_LR.png


If you speak node tree at all from any rendering system you should be able to follow this ...

Blending Stereoscopic Cubemap Seams.PNG


The big top box is an image of the front cube face.
The UV map gives it a place to sit on the cube face.
That image is made emissive - ergo, a film projector.
That image is sent to a mix shader - bookmark that for a moment.

In parallel, the same image is sent to a mix shader that is combining color with transparency (opacity?).
The amount the image is made transparent is controlled by the alpha channel of the mask. (That means you don't want to lose the alpha channel of the image.)

And now what that node tree actually does ...
overlapping faces.PNG
overlapping faces2.PNG

The side of the cube the camera is pointing at is pushed in towards the camera. A pushed-in rectangle like this still becomes part of a sphere, so it translates appropriately in the projection from a globe to equirect.

As you get closer to the edges of the front face, it starts projecting less light while at the same time allowing more light to come in from behind. The light coming in from behind is the same "eye" of the other cameras. One cube-face camera's left eye combines light with another's left eye while always maintaining the same total light intensity reaching the panoramic camera inside the modified cube.

If you look at the image of the projection boxes showing the projected images, you can see that where the left face overlaps with the front is where the seam was at the bookshelves. That's now one left eye's perspective on the bookshelf (literal use of the word) melding with the other's.

The vertical edges were the most noticeable, but you also get sliding shifts at the ceiling and floor if you want looking up and down to be stereo. A carpet with prominent circles like that one made the shift stick out (from a different angle). This mask trick applies to all 4 edges of the front face.
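
In plain math, that blend is just an alpha-weighted average whose weights always sum to 1, so the total light reaching the panoramic camera never changes. A minimal NumPy sketch with made-up array names:

[CODE]
import numpy as np

# front  : the pushed-in front face's contribution to the equirect (H x W x 3 floats)
# behind : the light arriving from the neighbouring face's camera, same eye (H x W x 3)
# mask_a : the mask's alpha channel (H x W), 1.0 in the core, fading to 0.0 at the edges
def blend_faces(front, behind, mask_a):
    a = mask_a[..., None]                  # broadcast alpha over the RGB channels
    return a * front + (1.0 - a) * behind  # weights sum to 1 -> constant total intensity
[/CODE]
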

(Question for the class - where are there still minor seams that can't be fixed?)

How do you translate this magic into Unity? I'll be buggered if I know, but you should be able to understand what the light is doing, and that's the main thing.

Hope it helps!

Even with the wrong stereoscopy this is still perfect for my own selfish needs.

I have never put anything of interest far enough at the sides to notice loss of stereo vision.

You're already my hero and making it more true every update.

===
Credits not mine ...
The "Mr. Elephant" blend file is one of the Eevee demo files created by By Glenn Melenhorst .
It was never intended to be used for VR - which made it perfect to show both what VR can do and what freaking Glenn can do. He took no shortcuts.
So yeah ... VAM is not the first place I've shamelessly repurposed other people's non-VR work.
 
Maybe it's just me messing up some settings. The preview window doesn't seem to capture the clothing well.

VaM 11_19_2021 4_31_34 PM.png
 
Eosin updated Video and Image Renderer for 3D VR180 and VR360 and Flat 2D with a new update entry:

v10: Triangle map for 360° stereo, chroma key option

- Added triangular mapping for stereo 360° VR with pivoting eyes. Scene will be built from 3x3 120°+ FOV cameras instead of cube, reducing seams to 3 (1 directly behind viewer, 2 on sides)
View attachment 76883
- Custom seam texture and color can be chosen, seam hiding now has parallax which can make it less distracting
- Issue: some materials and items are invisible when outputting transparency and there is nothing behind them, these items don't seem to write to alpha or z-buffer
>>>...

Read the rest of this update entry...
 
Maybe it's just me messing up some settings. The preview window doesn't seem to capture the clothing well.
Thank you, it seems this issue is because of the transparency option. Unfortunately, I think some things cannot be rendered with transparency the normal way, because these items are themselves rendered as transparent and don't write to the transparency channel, so it just stays at zero. So either you can turn off transparency, or use the new version, where I added an option for a background color, so you can remove the background in external software.
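
If someone wants to key that solid color back out without a full editor, here is a very rough NumPy sketch of a hard key (the file name and color are made up; semi-transparent edges such as hair will keep fringes, so it's only a stopgap):

[CODE]
import numpy as np
from PIL import Image

# Load a render made with the plugin's solid background color option (hypothetical file name).
img = np.asarray(Image.open("render_green_bg.png").convert("RGB"), np.float32) / 255.0
bg  = np.array([0.0, 1.0, 0.0])   # the background color that was chosen, e.g. pure green
tol = 0.05                        # how far a pixel may drift from bg and still count as background

alpha = (np.abs(img - bg).max(axis=-1) > tol).astype(np.float32)  # hard key: 0 = background, 1 = subject
rgba  = np.dstack([img, alpha])   # result keeps hard edges; fringes remain on semi-transparent parts
[/CODE]
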
Oh my stars and garters!

I was so excited I never realized you were doing the stereo that way! I thought LilyRender handled the stereo problem, not monoscopic cubemapping. I had no idea Unity sucked at that.

So, stereo cubemap problems: I know them well. Those seams and I have gone to war to bring 3D VR to Blender Eevee via the same cubemap trick you are doing in Unity.

[...]

Full VR 180 image with the mask applied. There used to be a decently hidden but still noticeable seam at the bookshelves.
View attachment 76441

[...]

How do you translate this magic into Unity? I'll be buggered if I know, but you should be able to understand what the light is doing, and that's the main thing.
Oh I think we are talking about different things. The type of seam visible in your image is indeed fixed in LilyRender with the option of overlapping. I think it occurs most strongly with certain post-processing effects, and fog and such, but you can even see it in a basic skybox. I can't say I understand the mathematics of it though.

The seam that occurs in the "split" render that can be produced with the rotate-eyes option is more like a seam in spatial continuity, since one set of images is produced from a different position in space than another (like when turning around and the eyes switch sides). So it's like 3 slightly different VR renders packed into one, but each one gives correct stereopsis. It's a miniature version of what a path tracer does for spherical stereo: just render each pixel from a different position and get pixel-perfect 360 stereo all around. Unfortunately, that can't be done in real time; rendering each pixel column separately is at least 50-100 times slower, and at that point one might as well get a path tracer and 100 cups of coffee.
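
For the curious, that "one camera position per pixel column" idea (omni-directional stereo) boils down to sliding the eye around a small circle as the yaw sweeps across the image. A rough sketch, not the plugin's actual code, with made-up example values:

[CODE]
import math

W   = 3840    # output width in pixels (one column = one yaw angle)
ipd = 0.064   # eye separation in metres

def column_camera(x, eye):
    """eye = -1 for the left eye, +1 for the right eye."""
    yaw = (x + 0.5) / W * 2.0 * math.pi - math.pi   # viewing direction for this column
    view = (math.sin(yaw), math.cos(yaw))           # forward vector in the ground plane
    # The eye sits half an IPD to the side, perpendicular to the view direction,
    # so every column is rendered from a slightly different position.
    pos = (eye * ipd / 2.0 * math.cos(yaw), -eye * ipd / 2.0 * math.sin(yaw))
    return pos, view

print(column_camera(0, -1))   # left-eye position and view direction for the first column
[/CODE]
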

Is there a way to have this work with PostMagic and this https://hub.virtamate.com/resources/subsurface-scattering-skin-beta.9677/ ?
Is there some issue with it - does it not render properly? I haven't tried it. If you are getting visual seams between the different sides of the cube, you can use the overlap option, which should help a lot, but I'm not sure why a skin shader would show up differently or not at all.
Can anyone help me? I can't seem to record at all! I click on all the buttons and nothing happens; the menu just goes away and it doesn't even record. @Eosin please help.
!> Error during attempt to load assetbundle Eosin.VRRenderer.6:/Custom/Scripts/Eosin/MacGruber_Convolution.shaderbundle. Not valid
!> VRRenderer: Could not load convolution shader!

I get this error with the plugin loaded.
I hope in the newest versions this is no longer an issue?
 
Thank you, it seems this issue is because of the transparency option. Unfortunately, I think some things cannot be rendered with transparency the normal way, because these items are themselves rendered as transparent and don't write to the transparency channel, so it just stays at zero. So either you can turn off transparency, or use the new version, where I added an option for a background color, so you can remove the background in external software.

Actually, I don't think it really matters for preview purposes. The preview window does its job well for composition.

Edit: You mentioned the background, so I just tried turning on the scene skybox. Yes, the clothes are all fine now. Thank you.
 
renderchanges.png
@Eosin I was working on a transparent video plugin that I was using myself and planning to clean up and release, but you beat me to the punch in a serious way, lol.

I had found a workaround for capturing the clothing transparency by using the difference between rendering on a white background and a black background as the alpha. I've attached a version of the script where I implemented that in your renderer.

You're welcome to incorporate this change with attribution since it also falls under the same CC BY-SA license. (Though be warned at the moment it only works for the flat renderer and hasn't been tested super thoroughly)
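
For reference, the white/black trick works out to a couple of lines of math; here is a sketch with made-up array names (float RGB images in 0..1, rendered from exactly the same frame):

[CODE]
import numpy as np

# over_black = alpha * color              (the render is premultiplied by its matte)
# over_white = alpha * color + (1 - alpha)
def recover_rgba(over_black, over_white):
    alpha = 1.0 - (over_white - over_black).mean(axis=-1)   # per-pixel matte
    a = np.maximum(alpha, 1e-6)[..., None]
    color = over_black / a                                   # un-premultiply the color
    return np.dstack([np.clip(color, 0.0, 1.0), alpha])      # RGBA ready to composite
[/CODE]
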
 

Attachments

  • Eosin_VRRenderer.cs (118.7 KB)
(I'm using translation software. Sorry if the words are wrong.)

Mr. Eosin
Thanks for the great plugin.
Especially the latest version works very well.

I have a question.

With this plugin, after I shoot a SS or MOVIE, all the physics of the scene stops.
Therefore, I am forced to reload the scene.

Is this something I am doing wrong?
Or is this a specification?

Thanks a lot. :coffee:
 
@Eosin I was working on a transparent video plugin that I was using myself and planning to clean up and release, but you beat me to the punch in a serious way, lol.

I had found a workaround for capturing the clothing transparency by using the difference between rendering on a white background and a black background as the alpha. I've attached a version of the script where I implemented that in your renderer.

You're welcome to incorporate this change with attribution since it also falls under the same CC BY-SA license. (Though be warned at the moment it only works for the flat renderer and hasn't been tested super thoroughly)
Wow that's a great solution, I didn't even think about that. I will definitely put this in the next version, thank you! I'm even more impressed that you found your way around my messy awful code though lol.

(I'm using translation software. Sorry if the words are wrong.)

Mr. Eosin
Thanks for the great plugin.
Especially the latest version works very well.

I have a question.

With this plugin, after I shoot a SS or MOVIE, all the physics of the scene stops.
Therefore, I am forced to reload the scene.

Is this something I am doing wrong?
Or is this a specification?

Thanks a lot. :coffee:
Thank you for this report, this was a serious bug (when rendering to flat image or video). It should be fixed now in the new version.
 