I noticed something weird: when I enable fancy hair, the relation between TotalTime and FPS disconnects. Without hair I get 9 ms total time and the headset caps me at 90 FPS. With hair I still get 9 ms total time, but my FPS tanks to 70. I believe hair is rendered separately from the frame itself, and in VR it's especially expensive (my guess: because it's done twice?). It could also be that the hair is rendered after the frame finishes, which then waits for HMD vsync, explaining why I never quite hit 90 FPS in VR even though I have a 4090.
A proper profiling of the rendering pipeline might enable even better fixes; that hair physics is from the 2013-ish Tomb Raider, it should run perfectly at 300 FPS on a 4090.
I didn't quite get it: did my patch make something worse than vanilla? Maybe it's just a fluke:
From 240 FPS to 220 FPS, the difference is a 0.38 ms longer render time.
The same 0.38 ms of extra render time turns 190 FPS into 177 FPS.
It looks like enabling those collisions drains more because they "steal" more FPS (20 FPS vs. the earlier 13 FPS), but that's just how FPS works. At 400 FPS, the same 0.38 ms cost would turn it into 347 FPS.
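To make the arithmetic explicit (my numbers, matching the examples above): FPS is just the reciprocal of frame time, so a fixed millisecond cost shaves off more FPS the higher the starting framerate is.

```python
# FPS is the reciprocal of frame time, so a fixed render-time cost
# "steals" more FPS the higher your starting FPS is.
def fps_after(fps_before, extra_ms):
    frame_ms = 1000.0 / fps_before          # current frame time in ms
    return 1000.0 / (frame_ms + extra_ms)   # FPS with the extra cost added

for fps in (240, 190, 400):
    print(fps, "->", round(fps_after(fps, 0.38), 1))
# 240 -> 219.9, 190 -> 177.2, 400 -> 347.2
```

The same 0.38 ms costs 20 FPS at 240, only 13 FPS at 190, and a whopping 53 FPS at 400, which is exactly the pattern described above.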
Your patch only brought improvements as far as I can see; this may simply be a VAM problem with collision trigger atoms (or the collision sphere), and it makes a massive difference with soft-body physics at sub-100 framerates too. I just went in-game in VR (Meta Quest 2, Virtual Desktop, bypassed SteamVR; EDIT: 120 Hz mode not enabled) to measure a more extreme example: I opened a save from before I had optimized the scene and "jiggled" the female character up and down with soft-body physics on. Framerate dropped to 40-50 FPS while I was doing that, and climbed back to the vertical sync maximum of 72 when I stopped. Then I scaled those six oversized collision trigger atoms down to 0.010 and jiggled the character again. Steady 72 FPS. Scaled the atoms back up: 40-50 FPS. Scaled them down again: steady 72 FPS. This was at a physics rate of 72 Hz and update cap 1. Other update caps did well at 72 Hz too. At 90 Hz, update cap 1 held its own; 2 and 3 started showing some instability, as expected.
I hadn't felt VR performance in that scene worsen at all with your patch, much to the contrary. It just couldn't quite break the 40-50 fps barrier with soft-body physics on and moving, and then I scaled down those six collision trigger atoms, and bam, problem fixed. Disabling their collision doesn't have the same effect in a reliable way, only scaling them down does. Multiple atom types have that blue collision sphere, but I only had time to test the collision trigger atoms. In any case, IF it's not some bizarre problem unique to my install/setup, it looks like a considerable optimization issue in VAM, because collision trigger atoms scaled to 10 aren't actually that big; just enough to cover a large piece of furniture, which is one of their intended functions. Easy way to mark, say, a couch as a zone where different behaviors apply.
Anyway, I'm only mentioning this in case it piques your interest and in case it's within the scope of what you're doing. You've already done - and are still doing - an astounding job, way more than anyone expected to be possible with VAM 1. Thank you so much for that.
I'll give it a try. The scene in which this happens is absolutely not simple, but I had managed to reproduce the issue in the default scene at one point. I'll try simplifying the complex scene, since the default scene may just be too simple for this problem to show reliably.
Yeah, it's something with the benchmark. Did you try loading another scene and running the performance monitor with both vanilla and the patched VaM? Maybe you'll see a difference in FPS there.
Did a fresh install of VaM and ran the benchmark with this patch applied, using the default 5900X .ini, and there wasn't really much of a change. A little bit in some areas, but unfortunately the same results as my existing VaM directory.
EDIT: The patch is working well. I'm just not seeing a jump in FPS for whatever reason. However, physics time is being reduced substantially compared to without the patch.
Hello, I'm also using the 5600. What should I configure with HT disabled? (V12)
My SkinMeshPartDLL.ini settings are:
[threads]
first core = 1
computeColliders=6
skinmeshPart=6
applyMorphs=6
skinmeshPartMaxPerChar=6
applyMorphMaxPerChar=6
affinity=1,3,5,7,9,11
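Side note, a sketch of my own (and an assumption: I'm not certain whether the patch treats these `affinity` values as 1-indexed logical CPUs; the patch docs would need to confirm that): if it does, the equivalent Windows-style affinity bitmask for that list works out as follows.

```python
# Assumption: the ini's `affinity` values are 1-indexed logical CPU numbers.
# If so, each CPU c corresponds to bit (c - 1) in a Windows affinity mask.
cpus = [1, 3, 5, 7, 9, 11]
mask = sum(1 << (c - 1) for c in cpus)
print(hex(mask))  # 0x555
```

A mask like that could then be used, for example, with cmd's `start /affinity 555 VaM.exe` (the `/affinity` switch takes a hexadecimal mask), if you want to pin the whole process the same way.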
Got it. The scene is attached below; anyone's welcome to test it. I stripped it down to nothing but stock VAM assets and morphs; the only dependency should be the most recent version of Timeline (some atoms are parented via Timeline, though the problematic atoms are not among them). Here are the reproduction steps, taken in Desktop mode with CPU performance patch 12, default .ini settings for the i9-13900K, confirmed to be running via PowerShell.
- Ideally the scene should be loaded with no session plugins, to prevent any lingering effects from deleting them after scene load.
- Keep the camera angle and ensure soft body physics and high quality physics are set to on in User Preferences. I tested with no Desktop Vsync. This problem should show at multiple physics rates and update caps, so take your pick on that.
- Open performance monitor, press "U" to close the UI to get around its performance impact (the white bottom bar should still be visible), then click Reset Averages on performance monitor to see what you get while the scene is still.
- Click Reset Averages again and jiggle the female character up and down by some central hip bone. Check the average while you keep doing that to see how bad the FPS loss from physics is. The motion may even look a little jerky due to framerate instability.
- Open the atom selection screen and check "Show Hidden". Find these six atoms: SENSOR.BottomView, SENSOR.FrontView, SENSOR.RearView, SENSOR.LeftView, SENSOR.RightView and SENSOR.TopView. On their collision trigger tabs, set their scale sliders all the way down.
- Close UI with "U" again, keep camera angle, repeat the same tests with performance monitor. The improvement in the jiggle test should be dramatic, more so than the still test.
- Scale all those collision triggers back to 10, repeat the tests. Performance should worsen to the same degree as before.
If you want to test something additional that I happened upon while preparing the scene: in the atom selection screen, open loads of atoms at random and browse their tabs without changing anything. Just load many menus, then close the UI with "U" and repeat the tests. When I did that, all the averages became permanently worse, as if the UI were adding clutter. So there's that. Maybe a fluke. If none of this happens to you or anyone else, then something in my VAM install or my setup is REALLY fucked up. The full setup, by the way: i9-13900K, RTX 4090, Windows 10 Pro, 64 GB of RAM. No thermal throttling or bottlenecks observed.
EDIT: Edited attachment to make sure it was the latest save of the test scene, which it very likely already was.
Yeah, what's so surprising about it? If you add a collision trigger and set it to such a huge radius, it gets checked against all the other objects in the scene, which costs a lot of CPU. The trigger even touches every single bone of the male next to it, so of course it costs a lot of CPU. As you can see, the "internal physics time" (what VaM does) didn't increase, only the PhysX physics time. Nothing I can do about it; just don't use such big collision triggers.
Edit: now I understand. It has the AtomFilter Male01 on it, so it should only check against him, hmm... Yeah, that might be fixable.
@turtlebackgoofy - I've been following this since you uploaded it; great, great work. I think it would benefit from a Discord server, OR @meshedvr could consider adding a channel on the VAM Discord.
Clearly there's a lot of interest in discussing it. Again, great work.
Despite the Baseline 3 benchmarks (in my particular case) showing no real improvement in FPS, I just wanted to point out how crazy responsive many tasks have become since adding this patch: simple things like opening menus, loading scenes, even saving a scene. Transitions between windows are now instant, whereas before I was always waiting 1-5 seconds for a response in my heavier scenes. Clothing simulation is also way, way better with this. I have a scene with someone in a bed under a blanket, and every single time I loaded that scene, the blanket fell in slow motion and seemed to take forever to settle. With this patch, the blanket falls almost in real time and settles in about 3 seconds instead of about 45. Not to mention, many clothes I imported myself in the past with higher vertex counts than recommended (25k or less is the recommendation) now perform as if they had 1k vertices. It's insane. Another scene of mine has so much clothing sim that I abandoned it, but with this patch I'm sitting at a comfortable 98 FPS at 4K res. This patch is a real game changer.
Ya, the benchmark might not be the best way to properly show everything this is improving! Basically every single existing scene I've opened runs way, way, WAY better. It's baffling. Some scenes of mine have gone from 45 FPS to 90+. What?!
This really needs to be a default in VaM.
There is a way to optimize it with https://docs.unity3d.com/ScriptReference/Physics.IgnoreCollision.html - it would be even better to put each atom on a separate layer and then ignore whole layers using https://docs.unity3d.com/ScriptReference/Physics.IgnoreLayerCollision.html
By getting a list of all the colliders/rigidbodies in a scene and making them ignore each other, except for the one collision you actually want to check, there would be 100 times fewer events fired per physics frame. Events are especially expensive in Unity.
But this requires quite a lot of refactoring in the VaM code to redo the IgnoreCollision calls every time a new rigidbody/collider gets loaded or the triggering atom changes.
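A toy model of the idea (plain Python, not actual Unity/VaM code; the collider counts and IDs are made up for illustration): a big trigger overlapping ~100 colliders fires an event per overlapped collider each physics step, but if every pair except the filtered atom's colliders is pre-registered as ignored, the physics engine skips those pairs entirely.

```python
# Toy model of pre-registering IgnoreCollision for unwanted pairs.
# Hypothetical IDs: one big trigger and 100 scene colliders, of which
# only "col0" belongs to the atom the trigger is filtered to.
trigger = "bigTrigger"
scene_colliders = [f"col{i}" for i in range(100)]
filtered = {"col0"}  # the one collision we actually want checked

# Without ignore rules: every overlapped collider produces an event.
overlaps = [(trigger, c) for c in scene_colliders]

# With IgnoreCollision registered for every non-filtered pair,
# only the wanted pair survives to fire an event.
ignored = {(trigger, c) for c in scene_colliders if c not in filtered}
events = [p for p in overlaps if p not in ignored]

print(len(overlaps), "->", len(events))  # 100 -> 1
```

That is where the rough "100 times fewer events" estimate comes from: the broad-phase candidate pairs are culled down to just the pair you care about.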
Just to clarify, I am mostly a layman; I couldn't tell whether the performance loss was within the expected range for this kind of feature in VAM and Unity, and the inconsistencies in the triggers just looked wrong to me. In fact, changing the filtering to "None" on all the triggers does not significantly change the performance loss for me, which was one of the reasons this caught my eye. I couldn't find a consistent pattern confirming it was just a matter of collision checks against too many bones. Incidentally, did you happen to check the "UI clutter" issue I mentioned at the end of the post? Did it happen for you?
When you set the AtomFilter to "None" you still get 100 events fired from Unity every frame, so there is no difference in performance. You should check whether there is a plugin that does this kind of collision checking without Unity's system, using only a simple distance calculation.
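A minimal sketch of that distance-based alternative (the atom names and positions are hypothetical; a real plugin would read transforms from VaM each frame): one distance test per tracked atom replaces all the per-collider trigger events.

```python
import math

# Plugin-style alternative to a PhysX trigger: instead of the engine
# firing an event for every overlapped collider, test one point per
# atom against the trigger sphere each frame.
def atoms_in_sphere(center, radius, atom_positions):
    return [name for name, pos in atom_positions.items()
            if math.dist(center, pos) <= radius]

# Hypothetical atom positions (meters):
atoms = {"Male01": (0.2, 0.0, 0.1), "Couch": (4.0, 0.0, 2.5)}
print(atoms_in_sphere((0.0, 0.0, 0.0), 1.0, atoms))  # ['Male01']
```

The cost is one square root per atom per frame, which is trivial next to hundreds of physics-engine trigger events, though it only tells you whether an atom's reference point is inside, not which of its colliders touch the sphere.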
The UI eats FPS because it also has to be refreshed and rendered every frame; nothing you can do about it, except maybe skipping its re-render for 1 second. But that wouldn't work in VR and would require some fancy double-buffering system (render the UI once per second, save it as a kind of snapshot, and display that instead), which would also make the UI very unresponsive.
Yes, the problem is that there was performance degradation in the tests with the UI closed. While it's open, I know FPS loss is inevitable. What I found strange is that the more of the UI I had loaded, the more performance dropped afterwards, while it was closed. I figure it's kept in memory, yes, but I don't know whether performance loss to that degree is expected with the UI closed, which is why it caught my eye. Might have been a fluke.
It happens regardless, even if I trigger memory optimization, but it's not necessarily permanent; perhaps it gets cleaned up over longer editing sessions. Anyway, those were the only "lesser-known" optimization problems I've noticed in VAM that felt worth mentioning. Thank you for verifying them. I couldn't spot any problems caused directly by patch 12, only improvements, including the morph performance improvements, which were quite evident. Amazing work.
RIG:
CPU: i9-13900k (HT Enabled for most of the testing)
GPU: MSI Liquid Suprim RTX 4090
MOBO: ASUS PRIME Z790-A
RAM: 32 GB (2x16), (DDR5 6000MHz, 30/40/40/96)
PSU: MSI MEG AI1300p (1300W)
COOLER: ARCTIC Liquid Freezer II 420mm
SSD: 2 SAMSUNG SSD 980PRO 1TB w/Heatsink (1 VAM-dedicated)
CASE: CORSAIR 7000x
MONITOR:
ASUS VG28U 3840x2160p 144hz
Sooooooo. I have been agonizing about this for a while now, trying to figure out why my 13900K/4090 rig (2160p 144 Hz screen) didn't seem to be getting the numbers I'd been seeing from similar users. Obviously it's tough because we're all using slightly different setups, but in general I'm picking up the thread that this magic patch isn't quite as magical for us Intel users... or maybe not for silly tech laymen like myself, at least.
I read the instructions through multiple times, read every response in the discussion back and forth, made sure the proper DLL was running in PowerShell, and even took off the 7000X's glass panels because they kinda fuck airflow/thermals (which might actually have made a difference). Finally, though, I think I found the one thing that helped me the most.
Try to use Process Lasso with the following affinity rules: View attachment 333467 View attachment 333468
On my 13900K it basically means: allow VaM to use P-cores only and force everything else (except system processes) onto the E-cores.
I have no idea if it actually makes a difference; I haven't tested it yet, but this is what I've been using since the beginning.
Also, in Options -> Power -> Performance mode I added VaM.exe.
hijiku, ya' blessed. Before this, I'd noticed in HWMonitor (after finally downloading it) that my main CPU utilization did not seem to match the affinities specified in the patch. Process Lasso seemed to confirm that, so I forced the proper affinities as instructed and then saw better usage of cores 1,3,5,7,9,13,15 as expected.
Below are a shitton of benchmarks, comparing an almost-fully-clean install (it has a couple of look VARs and a few plugin VARs; I was considering switching to it permanently) against my bloated-ass main install, with both official and custom-seshplug benchmarks. You'll see no meaningful changes even when patched/lassoed, except in WaitTimes and in Baseline 3. I've also captured HWMonitor/Process Lasso shots during Baseline 3 in the later testing, but I hadn't yet enabled the ThreadProfile JSON capture, so oops. >_>;
A final note before the benches: even though I haven't had the patience to add new VR benches to the mix, like hiijiku I kinda' just FEEL like I can do more in VR right now. I'm running a notable custom plugin suite (moyashi "Post-Processing Plugin," hazmhox "VaMTweaks" at 55/4/Stable, Stopper "LightOptions" at 13), along with Naturalis v64 (all TittyMagic settings enabled, BootyMagic on with no SoftPhysics), and near-max base VaM settings (8x AA, 6 pixel lights, 4 SmoothPass, 120 Hz physics cap 3, HQ Motion), while running the MMD2timeline plugin with 1 model on VaMTastic's "The Beach" with SallyFX CUA shader tweaks.
...and getting over 100 FPS a lot of the time (Valve Index at 120Hz)!
___________________________________________
...okay I feel extra dumb now. How do you want me to send it to you? I can't put a zip here and I can't seem to attach files in conversations either.
Though, at least, someone else noted in the Benchmarks thread that there's a big difference in resolution between what I'm running and what trety ran: they were only at 1080p, while I'm at 2160p.