CPU Performance Patch (Up to 30% faster physics, up to 60% more FPS)

Would launching a bat file like this work (without having SteamVR open)?

"C:\Program Files\Virtual Desktop Streamer\VirtualDesktop.Streamer.exe" "X:\VAM\VaM.exe" -vrmode OpenXR
Probably not, since that Unity version doesn't support OpenXR. But there is also this: https://gitlab.com/znixian/OpenOVR
It forwards OpenVR calls to OpenXR, potentially bypassing the SteamVR or Oculus middleware.
 
Thanks for the suggestions. The new batch file didn't have an effect. I tried to get OpenComposite working, and I'm pretty sure I had it configured correctly for a per-game install, with the 64-bit DLL from their repo replacing the one in VaM's Plugin folder along with their ini file. It appears to be running according to opencomposit.log, but I got a strange warping camera effect; the best I can describe it is that the camera is on the end of a stick instead of on a gimbal. I had ChatGPT helping, and that's as far as we got. So I'm thinking there are two possibilities: either it's been using OpenXR the whole time, or it's not possible. Has anyone got OpenComposite working in VaM with the benchmark specifying OpenXR?

1720888752982.png

Maybe the 5950X and 4090 are just much better than a 5800 and 3090? I might play around with the performance patch config for VR at some point.
 
Should you use the 32-bit DLL version instead?
 
Tried it just in case, and it didn't work; IIRC the game just launched in desktop mode. I figured out a PowerShell script to check whether VaM is 64-bit, and it appears to be.
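For reference, that 64-bit check can also be done without PowerShell by reading the executable's PE header directly. This is a generic sketch, not something from the patch or this thread; the function name is mine:

```python
import struct

def is_64bit_exe(path):
    """Return True if a Windows executable targets x64.

    Reads the DOS header's e_lfanew field (offset 0x3C) to locate the
    PE header, then checks the Machine field of the COFF header."""
    with open(path, "rb") as f:
        if f.read(2) != b"MZ":
            raise ValueError("not a Windows executable")
        f.seek(0x3C)
        pe_offset = struct.unpack("<I", f.read(4))[0]
        f.seek(pe_offset)
        if f.read(4) != b"PE\x00\x00":
            raise ValueError("PE signature not found")
        machine = struct.unpack("<H", f.read(2))[0]
        return machine == 0x8664  # IMAGE_FILE_MACHINE_AMD64 (x86 is 0x014C)
```

Calling `is_64bit_exe(r"X:\VAM\VaM.exe")` on a 64-bit build should return True, which would match needing the 64-bit OpenComposite DLL.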
 
Oh, you are video streaming to your Oculus via Wi-Fi? Yeah, that's probably why the benchmark is so bad: every frame has to be video encoded, and that's eating up the CPU. Can you connect an Oculus to your computer via DisplayPort?
 
With the Rift CV1 that was possible, but the direct GPU connection was removed at some point. The options now are the Link cable via USB-C, or Wi-Fi. I'll try the Link cable to see if it makes a difference, but I'm doubtful. If not, I'm happy enough as it is.
 
Oh shit... It's impossible. Is there any way to make it less bad without compromising quality too much? Like reducing the bitrate or some other gimmick?
 
If you have a recent Nvidia card (from the 2060 up, I think), the video encoding is done by the GPU on a separate dedicated encoder chip, and the performance loss is near zero. I don't know about AMD cards, but it should be the same. This at least if you use Virtual Desktop / Steam Link for the streaming...
 
 
Any fix for this error?
error.png


Edit: OK, some plugin (idk which one) in my BepInEx folder caused this error.
 
My English is very bad; it took me 3 hours to read all the pages of this topic. I can't wait to get back from my vacation to try it! turtlebackgoofy, you are a passionate and relentless genius. I love it! MeshedVR, the same: without you the world would not be the same. You are demigods.

I'm a beginner and haven't been able to try yet, but can you confirm that I understand correctly:

1) Do you have to disable HT in the motherboard BIOS?

2) You must set Vsync to Fast in the Nvidia control panel to remove the frame limit.

3) You need these lines in the config file:

gfx-enable-gfx-jobs=1
gfx-enable-native-gfx-jobs=1
wait-for-native-debugger=0

4) Editing the settings for my hardware: I found Shioro's settings (thanks!), but I don't think they're right for my Intel i7-12700H (mobile), which has 14 cores (6 Performance and 8 Efficient). I have an RTX 3070 Ti Mobile (150 W) in my laptop, a Lenovo Legion 5i Pro Gen 7.

[threads]
computeColliders=8
skinmeshPart=8
applyMorphs=8
skinmeshPartMaxPerChar=8
applyMorphMaxPerChar=8
affinity=1,3,5,7,9,11,13,15

[threadsVR]
computeColliders=8
skinmeshPart=8
applyMorphs=8
skinmeshPartMaxPerChar=8
applyMorphMaxPerChar=8
affinity=1,3,5,7,9,11,13,15

[profiler]
enabled=0

Should I replace 8 with 6 and drop 13,15?

Why is there an additional "engine affinity" setting in certain configurations?

Thanks for your help. I have the feeling that our demigods are preparing a supercharged version.
 
1. If you want to turn HT on or off, yes, that must be done in the BIOS.

2. Vsync settings are irrelevant in themselves; what you ask has nothing to do with performance per se, it sounds like you are trying to reduce screen tearing instead.
For full uninterrupted performance, however, Vsync should always be fully disabled and a frame limiter should be used instead.

3. I don't understand the question here.
The data you listed goes into the boot.config file, as explained on the main page.
If, however, you notice that these 3 lines reduce your performance because your GPU is unsupported for these settings, simply revert to what it was before.

4. The question you ask about engine affinity is answered on the main page of this product, along with instructions on which cores to use and in what manner.

Make sure to check out the blue spoiler buttons on the main page, as well as the button under the "FAQ" section. Most of these questions are answered there, and additional info on certain settings is explained.
 
Thanks, but I have seen the spoiler buttons. However, the 12700K is not a 12700H: 8P+4E vs 6P+8E.

In the examples, for the 12700K it's written:
computeColliders=6
skinmeshPart=1
affinity=1,3,5,7,9,11,13,15

But why 6 in the first line, when 8 cores are listed in affinity? The 12700K has 8 P-cores. I'm going to try to read the FAQ.

I have a 12700H (6P+8E), so if I understand correctly I must type:

computeColliders=6
skinmeshPart=1
affinity=1,3,5,7,9,11

You have to be sure which of the 14 cores are the Efficient ones. I have read that Intel's Extreme Tuning Utility can show the P-cores.

In the example for the 12400/12500/12600K, which have 6 P-cores, it's written 8, with 8 entries listed in affinity. I don't understand.

I'm exclusively in VR, and I get far fewer FPS in VR than on desktop: half, maybe even less.
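On which of the 14 cores are Efficient: Windows normally enumerates Alder Lake hybrid CPUs with the P-cores first (two logical CPUs each, since they have HT) and the E-cores after (one logical CPU each). That enumeration is an assumption worth confirming with a tool like Intel XTU, but under it the layout for a 12700H works out like this sketch (names are mine):

```python
def hybrid_logical_cpus(p_cores, e_cores):
    """Assumed hybrid-CPU enumeration on Windows: P-cores first with
    2 logical CPUs each (HT siblings), then E-cores with 1 each."""
    p_logical = list(range(2 * p_cores))
    e_logical = list(range(2 * p_cores, 2 * p_cores + e_cores))
    return p_logical, e_logical

# 12700H: 6P + 8E -> 20 logical CPUs; 0-11 are P-core threads,
# 12-19 are E-cores, so an affinity of 1,3,5,7,9,11 stays off the E-cores.
p, e = hybrid_logical_cpus(6, 8)
```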
 
What settings do I need in SkinMeshPartDLL.ini for an 11900F, please?

I set it to these, but I'm not sure they're correct...

[threads]
computeColliders=6
skinmeshPart=1
affinity=1,3,5,7,9,11,13,15

[threadsVR]
computeColliders=6
skinmeshPart=1
affinity=1,3,5,7,9,11,13,15

[profiler]
enabled=0
 
The 11900 has 8 cores, so maybe 8 colliders? I don't know. And somebody wrote that he got the best performance by disabling HT in the BIOS, so the affinity becomes 1,2,3,...
Like you, I want to know the best settings for my 12700H.
 
Do you guys think it'd be worth returning my 4070 Super and getting a 4080 Super for VR in VaM? Or should I just keep the 4070 Super? I have a Ryzen 5700X3D.
 
I just tried with my Intel 12700H and 3070 Ti Mobile, and I see no difference. I think I set it wrong. I left 1,3,5,... because I didn't disable HT. I also left engine affinity. Can someone guide me?

Original:
Benchmark-20240825-151529.png


Patched V13beta1
Benchmark-20240825-154129.png


Settings:
[threads]
computeColliders=4
skinmeshPart=4
skinmeshPartMaxPerChar=4
applyMorphs=4
applyMorphMaxPerChar=4
#affinity=1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16
affinity=1,3,5,7,9,11
#engineAffinity=1,3

[threadsVR]
computeColliders=4
skinmeshPart=4
skinmeshPartMaxPerChar=4
applyMorphs=4
applyMorphMaxPerChar=4
affinity=1,3,5,7,9,11
#engineAffinity=1,3
[profiler]
enabled=0

And the boot.config file:
gfx-enable-gfx-jobs=1
gfx-enable-native-gfx-jobs=1
gfx-disable-mt-rendering=0
wait-for-native-debugger=0
gc-max-time-slice=3
job-worker-count=12
single-instance=1

The command to verify the DLL is loaded:
Get-Process VaM -ea 0 | select -expand Modules | where {$_.ModuleName -like 'SkinMeshPartDLL.dll'}
Result: 1136 SkinMeshPartDLL.dll
 