How to Customize ALVR-Client for Quest 2 Passthrough on Windows with Any Mask Color

really good find. I was getting annoyed by the loss in precision and trying to rewrite with a totally different approach, but this is much better. That repo also has some other useful stuff I'll dig into later, but for now here's an updated build with your mask shader. Blend mode sucks on this build though, sorry
 

Attachments

  • alxr-client-quest.7z.var
    3.3 MB · Views: 0

Appreciate all the work on this. Looks great now; I almost can't see the outline anymore. Is this just for 18.5, or does it work with newer versions?
 
btw for anyone interested in contributing, directly or otherwise, this might be a helpful read for understanding color space stuff on a conceptual level and how the Quest handles it specifically
 
uh which one do you have? The latest one on my post? Don't know what would've caused it, but I have noticed it too in some builds. Might be too sensitive with the chroma key
 
Master branch got an update 2 days ago to version 0.13 with some notable changes to the Vulkan plugin, including a new vertex buffer layout. I've been trying to actually learn Vulkan the past few days and missed the update while making my own attempt at modifying the layout. My changes to the Vulkan side sucked, so I scrapped them and only kept the colorfunctions library and some of my shaders (honestly I should say "our" because everyone here contributed), which still look better to me.


But the new layout does look much better to work with and could make it a little easier to deal with the outline. One method would be to use the UV coordinates in the frag shader to sample slightly outside the current fragment, check those neighboring colors as well, and fade the alpha. This should work okay, but I feel like it may be costly to client performance, which I'm trying to be considerate of.
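The neighbor-sampling idea could be sketched roughly like this (Python for clarity rather than GLSL; `sample`, `delta_e`, the 5-tap pattern, and the threshold are all placeholders I made up, not anything from the actual codebase):

```python
def feathered_alpha(sample, delta_e, key, u, v, texel, threshold):
    """Check the fragment and its 4 neighbors against the key color and
    fade alpha by the fraction of taps that match (0.0 = fully erased)."""
    offsets = [(0.0, 0.0), (texel, 0.0), (-texel, 0.0),
               (0.0, texel), (0.0, -texel)]
    matches = sum(1 for du, dv in offsets
                  if delta_e(sample(u + du, v + dv), key) < threshold)
    # More matching neighbors -> more transparent, which softens the edge.
    return 1.0 - matches / len(offsets)
```

The cost concern is real: each extra tap is another texture fetch plus another color-distance evaluation per fragment.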

My headset craps out and gets unplayable decoder lag (125 ms total when I'm usually at 25) above native res (~130-150-ish HEVC or ~240 H264), even with strong foveation and good settings otherwise. I've dialed things in decently well, but I do occasionally get hard decoder stutter causing a bunch of big blocks of color to pop up that don't get faded out. I'm not sure if this means my new shaders are more costly, or they just can't detect the color, or they won't run on some fragments when there's already decoder lag. I've set up the GPU profiler in Meta Quest Developer Hub so I can make sure I'm not inducing headset-side lag. It's highly likely it could just be my router or PC messing up.

The most effective option to remove the outline would also be the most costly, I believe, but this one comes straight from the Oculus documentation and is worth trying. So currently we have the video stream, and we have one passthrough layer below it that we reveal by erasing some of our texture, sampling each pixel and checking the delta E against a predefined color. This method kind of sucks, but I get why we do it. The alternative that Meta suggests is to have another layer between your texture and the passthrough layer. This layer would be set to grey, and the alpha would be set to 1 minus the key color's chroma. Their example used RGB and chose a fully green mask, but I would use a different format. Each additional layer has the potential to mess up GPU performance, but I have a feeling this could be more computationally efficient. Or it could be exactly the same, but either way I'd expect it to do much better at blending out any outline.
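As I read that description, the per-pixel math for the middle layer might look something like this (a sketch only; I'm assuming the HSV-style max-minus-min definition of chroma, which may not be the exact measure Meta's docs use):

```python
def chroma(r, g, b):
    # HSV-style chroma: distance of the pixel from grey (max - min channel).
    return max(r, g, b) - min(r, g, b)

def grey_layer_alpha(r, g, b):
    # The intermediate layer is grey, with alpha = 1 - chroma. A fully
    # saturated pixel (e.g. pure green key) makes the layer transparent
    # there, while neutral pixels keep the grey layer opaque.
    return 1.0 - chroma(r, g, b)
```

The appeal is that chroma falls off gradually near the key edges, so the compositor does the blending instead of a hard pass/fail test in the shader.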

But I actually think the core issue, and the most realistic fix, is just gonna come from picking the right color spaces to work with at every step of the pipeline, from video stream to sampler to frag shader. The issue may not even be pixel coverage but rather that there's still lost data; we're only working with 8 bits of color, after all.


Speaking of which, disable the reduced color banding NVENC option in your server. That tries to give 10 bits for more accurate color, but we're tossing out 2 of them anyway. Please correct me if I'm wrong (that's just how I understand it), but after tweaking that plus additional settings my video definitely looks better.


Also, another general tip for using this:
I swear this has nothing to do with my name, but get the anime/cel shader plugin on here. You don't need to make them look like anime characters; you want this because it lets you directly apply lighting to the skin, adjust the color and softness of shadows, and add your own outline, which can in some cases help with the mask outline. Give it a shot and see if it helps.

I will upload a build soon with the current changes and the 0.13 update. I'm not happy with the mask, but the blend mode is massively improved by using delta E to check luminance as well, plus an experimental but surprisingly effective test against a generic skin tone that prevents blending. I'm not 100% sure how much it's doing, but it does seem to help, and I'm considering including a LUT with an array of skin tones to check against. Regardless, I actually prefer blend mode in its current state, and if you use even slightly colored lighting you get a fully opaque character that can still have shadows. It's easily the best version of this so far when you get it set up right.

Speaking of setup, anyone know how to change the default scene/character? Tired of looking at that ugly ass bitch T-posing when I start the game
 
Sorry I haven't been responsive here, just either busy or dealing with headaches & lethargy. About the recent changes: there isn't a new vertex buffer, there isn't one at all now :) Basically, before, I was creating vertex & index buffers for a (screen-space) quad; the new version removes them and replaces them with a single over-sized triangle, with the positions & UVs as constants in the vertex shader. I also removed some redundant matrix transforms, plus some other minor tweaks. So the changes are mainly removing buffers, removing pipeline setup, and a modified/simplified vertex shader.

Regarding the delta-E formulas you guys have been looking at and/or testing: delta-E 2000, while the most accurate way to perceptually compare colours, is also the most costly. I'm surprised it was even usable, but I was considering it as a non-default option. I don't think you need it to deal with the outline issues; from what I've briefly read here it comes off like chroma spill, and if that is the case you should be able to handle it separately while using a cheaper, simpler version of delta-E. There are like 3 or 4 standards, each more accurate but also more computationally costly.
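For reference, the cheapest of those standards, CIE76, is just Euclidean distance in Lab space (a generic sketch, not a function from the colorfunctions repo):

```python
import math

def delta_e_76(lab1, lab2):
    """CIE76 delta-E: plain Euclidean distance between two Lab colors.
    The later standards (CIE94, CIEDE2000) add correction terms for
    perceptual non-uniformity, gaining accuracy at growing cost."""
    return math.dist(lab1, lab2)
```

In a shader this is one subtract and a dot product, which is why it's a reasonable default before reaching for CIEDE2000.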

Just search for chroma key spill; that is what I will be looking at soon.
 

yeah no worries, I had also planned to message you and never did; just been trying to learn Vulkan from zero so I can ask more useful questions




As for the computational cost of delta E, I wouldn't be too concerned:
[attached screenshot: GPU performance stats]
GPU usage is high, of course, but below 90% is fine for <25 ms latency. I'm still a bit confused as to which color space we should be using and whether we should keep using the conversion during sampling. It seems like an odd step since we then swizzle it back to RGB, but I did also try just taking it as XYZ, and it might've worked, though I might've had some extra steps. In any case, I'm not too concerned about optimization at the moment; I just want to improve the accuracy. Honestly, if you could just clearly list the various color spaces/pixel formats we go through, in order, that would be super helpful. I did check out Meta's detailed writeup about color spaces and their recommendations, I'm just not sure how that plays out with the sampling conversion and all.

I did look into chroma key spill and mostly saw people suggesting to feather the pixels on the outside, which I guess we could do by taking the coordinates in the frag shader, but I'm not sure how costly that could be. I've also seen suggestions about having another layer that matches alpha to color, but that could be costly too.

Anyway I have some stuff I want to try but I'd love to hear your ideas. Let me know if you're feeling well enough to chat on discord or whatever
 
I'm no expert in this area, but the typical colour spaces used for chroma-keying are YCbCr formats, HSV, and LAB. Basically anything that separates out the luma and chroma is going to be way better than my initial version for supporting any colour key; you definitely don't want to use RGB.

I forgot to mention a few things. In OpenXR there are two ways a runtime can support passthrough (a runtime could support both): the core-spec way, which is environment blend modes, or explicit passthrough extensions. The code paths are different, and the latter typically has more/higher-level functionality.

With the core-spec way, there are two non-opaque env blend modes: additive blend, where pure black pixels are treated as transparent, and an alpha blend mode, where an alpha value blends the passthrough & render layers.
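In compositing terms, the two modes boil down to something like the following (a conceptual Python sketch of the per-pixel math, not the runtime's actual compositor code):

```python
def additive_blend(layer, passthrough):
    # Additive mode: the render layer is added on top of the camera feed,
    # so a pure black pixel (0,0,0) contributes nothing and effectively
    # reads as transparent for free.
    return tuple(min(1.0, l + p) for l, p in zip(layer, passthrough))

def alpha_blend(layer, alpha, passthrough):
    # Alpha mode: a per-pixel alpha value weights the render layer
    # against the passthrough layer underneath.
    return tuple(alpha * l + (1.0 - alpha) * p
                 for l, p in zip(layer, passthrough))
```

That's why additive mode gives you black-keying without any shader work, while alpha mode needs the mask shader to write meaningful alpha values.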

In alxr I've added support for both methods, but currently with the core-spec way I'm not using the passthrough shaders, and I pick additive blend if both non-opaque modes are available, as that already handles a kind of chroma-keying of black for free. To better support runtimes that do this method of passthrough, I'll be re-using these shaders, though I may slightly modify the masking shader to either convert matching pixels to pure black for additive mode or set an alpha value (as it does now).

With the non-core-spec way of passthrough extensions, there are typically settings to configure how composition is done in code that you won't see in the shader. You may want to consider reading this; it's for Oculus's vendor-specific passthrough extension, but HTC has a similar (though less feature-rich) version.
 
Oh, one last thing: if you're learning Vulkan, you can easily pick up OpenXR, as the API has the same architectural design. I don't mean the concepts, just the design of the API, how it's extensible, etc.
 
Yeah, I've been working on learning both. I've worked with OpenXR before, but only in Unity or Unreal, where you don't even need to write code to use extensions. I dabble in many languages including C++, but I find it much slower to parse when I'm looking at a new codebase compared to C# or JS. I've been reading through the OpenXR, Vulkan, and Meta references a lot, but I have to look elsewhere for Vulkan examples and tutorials because the usage sections aren't clear enough to me. It's no big deal though, I'm just learning as I go. I was actually planning to build the Oculus native samples we already have included in the repo so I have more to go off of.





but the typical colour spaces used for chroma-keying are YCbCr formats, HSV, and LAB.
Right, so at the moment I'm using a delta E function that takes two sRGB colors, converts to Lab, then does the comparison. That obviously sucks and is worse than a few days ago, when I was converting to Lab and comparing against a key color already in Lab, so I'll go back to that. If I just do .xyz, does that immediately give me YUV, or do I need to swizzle the values in the sampler?


I did read that Oculus page, and I knew we have both additive and alpha blend; I just didn't realize you weren't using the official Meta extension with additive. I'm not entirely clear on the implications (does that mean no styling?), but I'll go review those snippets again.
Edit: nvm, after rereading that Oculus page I get what you're saying

I also like the additive blend more, but your original shaders were too aggressive in removing shadows on people, turning them into ghosts when the scene was dim. I've altered it to use the same delta E library to test for luminance, and also to boost alpha when the pixel is close enough to skin tones. The overhead doesn't seem like an issue so far, and the results are much clearer. It actually looks like what you'd expect mask mode to look like, provided you have either a bit of direct lighting or any amount of colored ambient light. I'll do some more performance testing and then can show you the code if you wanna try it.
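The logic described could be sketched like this (the reference skin-tone Lab value and both thresholds here are made-up placeholders for illustration, not the actual numbers in any shader):

```python
def mask_alpha(pixel_lab, key_lab, delta_e, base_alpha=1.0,
               skin_lab=(65.0, 14.0, 17.0),
               key_thresh=10.0, skin_thresh=20.0):
    """Erase pixels near the key color, but force full opacity when the
    pixel sits close to a reference skin tone, so dim skin doesn't get
    blended away into a ghost."""
    if delta_e(pixel_lab, key_lab) < key_thresh:
        return 0.0            # matches the key: reveal passthrough
    if delta_e(pixel_lab, skin_lab) < skin_thresh:
        return 1.0            # near skin tone: keep fully opaque
    return base_alpha         # otherwise keep the shader's normal alpha
```

A LUT of several skin tones, as mentioned above, would just replace the single `skin_lab` check with a loop over the table.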


Last thing: could the new (I think?) constants for the key color be used for configuration? I was gonna go try to read from a file and see how it goes, but just thought I'd ask
 
Right, so at the moment I'm using a delta E function that takes two sRGB colors, converts to Lab, then does the comparison. That obviously sucks and is worse than a few days ago, when I was converting to Lab and comparing against a key color already in Lab, so I'll go back to that. If I just do .xyz, does that immediately give me YUV, or do I need to swizzle the values in the sampler?

I'm not familiar with the XYZ colour space, but I have seen RGB-to-Lab conversions using it as an intermediate step. Yes, there's no reason to convert the key for every pixel shader invocation; precompute that once. I've seen that repository, but I don't recall all the available functions. Note that the function that samples the video texture outputs a pixel in linear RGB space (check baseVideoFrag.glsl to see `sRGBToLinearRGB`); that needs to be done in the shaders because these textures typically don't have a Vulkan sRGB format type for automatic conversion to take place (out of my control, AHardwareBuffer imports).
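To make the chain concrete, here's roughly what sRGB → linear RGB → XYZ → Lab looks like end to end (Python for readability; I'm assuming the standard D65 sRGB constants rather than whatever the colorfunctions repo actually uses, and the GLSL version would be a direct transliteration):

```python
def srgb_to_linear(c):
    # sRGB EOTF: the same step baseVideoFrag.glsl's sRGBToLinearRGB performs.
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_rgb_to_xyz(r, g, b):
    # Standard sRGB-to-XYZ matrix (D65 white point).
    x = 0.4124564 * r + 0.3575761 * g + 0.1804375 * b
    y = 0.2126729 * r + 0.7151522 * g + 0.0721750 * b
    z = 0.0193339 * r + 0.1191920 * g + 0.9503041 * b
    return x, y, z

def xyz_to_lab(x, y, z):
    xn, yn, zn = 0.95047, 1.0, 1.08883   # D65 reference white
    def f(t):
        return t ** (1 / 3) if t > 216 / 24389 else (24389 / 27 * t + 16) / 116
    fx, fy, fz = f(x / xn), f(y / yn), f(z / zn)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

def srgb_to_lab(r, g, b):
    # Full pipeline: nonlinear sRGB in, Lab out.
    return xyz_to_lab(*linear_rgb_to_xyz(*map(srgb_to_linear, (r, g, b))))
```

Note the linearization step has to happen before the matrix multiply; feeding nonlinear sRGB values straight into the XYZ matrix is a common source of subtly wrong Lab values.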


I did read that Oculus page, and I knew we have both additive and alpha blend; I just didn't realize you weren't using the official Meta extension with additive. I'm not entirely clear on the implications (does that mean no styling?), but I'll go review those snippets again.
Edit: nvm, after rereading that Oculus page I get what you're saying

I also like the additive blend more, but your original shaders were too aggressive in removing shadows on people, turning them into ghosts when the scene was dim. I've altered it to use the same delta E library to test for luminance, and also to boost alpha when the pixel is close enough to skin tones. The overhead doesn't seem like an issue so far, and the results are much clearer. It actually looks like what you'd expect mask mode to look like, provided you have either a bit of direct lighting or any amount of colored ambient light. I'll do some more performance testing and then can show you the code if you wanna try it.

I'm using Meta's passthrough extensions (+ HTC and Pico). What I mean is that I support either way: some runtimes, like Meta's OpenXR runtimes, do not support passthrough using the core-spec blend modes, only via vendor extensions. All I meant is that, for the case of other runtimes that only support the core blend modes, I'll just be adding some missing logic in C++ to switch those modes (like with Quest) and re-using the mask shaders (with the improvements I/you/we make in the end). The only thing to note is that I might be tweaking the mask shader a little; it's not a big deal really.

Last thing: could the new (I think?) constants for the key color be used for configuration? I was gonna go try to read from a file and see how it goes, but just thought I'd ask

Not sure which constants you are referring to?
 
I'm not familiar with the XYZ colour space, but I have seen RGB-to-Lab conversions using it as an intermediate step. Yes, there's no reason to convert the key for every pixel shader invocation; precompute that once. I've seen that repository, but I don't recall all the available functions. Note that the function that samples the video texture outputs a pixel in linear RGB space (check baseVideoFrag.glsl to see `sRGBToLinearRGB`); that needs to be done in the shaders because these textures typically don't have a Vulkan sRGB format type for automatic conversion to take place (out of my control, AHardwareBuffer imports).

Sorry, I confused both of us. I was not using CIE XYZ, which is the intermediate step you're talking about and which shows up in the colorfunctions repo. I was using sYCC, which is a member of the YCbCr family, so I thought it made sense. The reason I said xyz is because you can do .xyz as generic identifiers for the values to get Y, Cb, and Cr if you change the sampler to YCBCR_IDENTITY. I'm not sure if the swizzling needed to be changed, but I think my end result made sense, and I think you can still swizzle to RGB and green just becomes Y. I did that for a few days and then realized sRGB would be better, as colorfunctions provides composite functions to get delta E and luminance directly from sRGB but not sYCC. Also, @fiestamart, who showed me the Lab stuff in this thread, was using XYZ_TO_LAB, but I think that was a mistake since we don't work with XYZ directly.


In any case, I guess I'll take another look at sYCC, but sRGB wasn't too bad. If I understand correctly, the sampler spits out sRGB that you would normally need to linearize to get linear RGB, but I can cut that step out and put the raw .rgb values straight into my SRGB_TO_LAB methods. Unless I have that backwards; but even if I'm right, I forgot to actually disable the linearize function, which would explain some recent accuracy issues. Sorry, if you're confused by what I'm saying, don't worry, because I'm even more confused, but I'll become a color expert soon enough.

I'm using Meta's passthrough extensions (+ HTC and Pico). What I mean is that I support either way: some runtimes, like Meta's OpenXR runtimes, do not support passthrough using the core-spec blend modes, only via vendor extensions. All I meant is that, for the case of other runtimes that only support the core blend modes, I'll just be adding some missing logic in C++ to switch those modes (like with Quest) and re-using the mask shaders (with the improvements I/you/we make in the end). The only thing to note is that I might be tweaking the mask shader a little; it's not a big deal really.

Understood

Not sure which constants you are referring to?

I haven't seen the old shaders in a while, but I'm pretty sure those constants weren't there before, and IIRC the Vulkan plugin just had a single variable holding the key color until the current version. Could you not just plug in right here to read from a file?
[attached screenshots of the relevant code]



btw, for whoever needs it, here's a great resource with some info about color spaces and our Lab conversions, plus a more reliable color converter than many others
 
Builds for all Android versions updated to 0.13.0 on my other page. If you missed the update on that page, definitely go check it out. I actually think mask mode is as good as it can get, at least for a game like VaM with configurable colors. You can even adjust the background within the entire green spectrum (i.e. changing only hue), which improves compatibility a bit.
 
@krch @animetiddyenthusiast

Hey, with v0.15 I've upgraded the NDK version to the newest LTS that came out a couple of days ago, v26.0.10792818. It's easy to switch back to the last version, but very soon this will be the minimum version because of some upcoming changes I have.

Later on tonight I'll be pushing a bug fix for the Lynx R-1; it will be v0.15.1. Once that is available, @krch, you can add this headset to the list of supported ones if you like, but until I've done the client-side UI (I've started to work on it), and since the Lynx doesn't have physical controllers, the only way to switch passthrough modes atm is with adb, e.g.:

adb shell setprop debug.alxr.passthrough_mode <-mode-> where <-mode-> is one of None, BlendLayer, MaskLayer

This system property can be used for any Android-based headset.

If you guys have been wondering why there are multiple apks: without going into much detail, the main reason is that some Android OpenXR runtimes for standalone headsets are not compatible with the cross-vendor OpenXR loader for Android and provide vendor-specific ones. Lynx's runtime is temporarily one of those, so it has its own apk, but eventually it will be compatible, so the generic apk (alxr-client.apk) will be the one to use. In the longer term there will only be a single apk, regardless of vendor-specific loaders, but that's a bit more tricky to deal with right now.
 
Excellent, and thanks! I'll update the tutorial this weekend.
 
Haven't been around, but I did see the NDK upgrade. Will update my version soon; appreciate the ping.
 