"VAMvarmanager" app to the rescue!
> "VAMvarmanager" app to the rescue!

Definitely helped with cleaning up my wild-ass AddonPackages folder!
> I received this result from a VR benchmark, which seems good to me, but I still see a lot of stuttering when using VAM and can never get a clear view above ~50 FPS with ASW off. What can I do to increase my performance?

I'm an ex-CV1-and-Rift-S user, with a 3070 too (upgraded everything, actually). With an Oculus Rift you can get rid of SteamVR, and you should definitely have 32 GB of RAM. Seriously consider killing every piece of Steam-related crapware if you're on a good DisplayPort Oculus headset (or was the CV1 HDMI?... it's been a long time since I last used one). You don't need Steam to play VAM with Oculus.
> Actually I'm using a Quest 2 with Link, not sure why it says Rift on it. You think 32 GB of RAM would be a good upgrade? I've been meaning to buy more RAM recently, and better VAM performance would just sweeten the deal.

Indeed (just be careful to get a well-matched 16+16 GB kit; populating more than two slots with different types of RAM can immediately cause serious issues). And if you don't believe it, just look at how much RAM Windows calls to duty when you play VAM. 32 GB and the best CPU (Intel!) you can afford: you will get a very good gaming life and solid VAM sessions without crashes.
> Actually I'm using a Quest 2 with Link, not sure why it says Rift on it. You think 32 GB of RAM would be a good upgrade? I've been meaning to buy more RAM recently, and better VAM performance would just sweeten the deal.

I used a Quest 2 wirelessly with Virtual Desktop, with the Space Warp setting included in VD set to "always on". Everything runs very smoothly. VAM stutters like crazy during head movement without ASW. The ASW from Oculus sucks and makes everything wiggly.
> If anyone wonders... I just switched from a 12900K to a 13900K. Also added 32 GB more RAM, but that's irrelevant.
>
> 12900K:
> View attachment 177938
>
> 13900K:
> View attachment 177939
>
> I got a better min 1%, so I guess it might be more stable, but overall it's within the margin of error.
>
> There is one upside, though. The highest temperature I noticed during a test was 62 °C, while the 12900K went up to 74 °C, both using the same 360 mm AIO. Though I did change the thermal paste from the stock NZXT paste to Thermal Grizzly.
>
> In terms of VaM, was it worth it? I don't think so, lol.

May I ask the CPU power consumption during the test? Baseline 3 and the peaks interest me particularly.
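For anyone unfamiliar with the "min 1%" figure quoted above: it summarizes stutter by looking only at the slowest frames, which an average FPS number hides. Here's a quick sketch of one common definition (benchmark tools differ in the exact formula, so treat this as an illustration, not the benchmark's actual math):

```python
# Sketch of one common "1% low" definition: the average FPS of the
# slowest 1% of frames. Tools differ in the exact formula - this is
# only meant to show why the metric reacts to stutter.

def one_percent_low(frame_times_ms):
    """frame_times_ms: per-frame render times in milliseconds."""
    worst = sorted(frame_times_ms, reverse=True)   # slowest frames first
    n = max(1, len(worst) // 100)                  # the slowest 1%
    avg_worst_ms = sum(worst[:n]) / n
    return 1000.0 / avg_worst_ms                   # ms/frame -> FPS

# Example: a steady 100 FPS run (10 ms frames) with two 50 ms stutters.
frames = [10.0] * 198 + [50.0, 50.0]
print(one_percent_low(frames))  # 20.0 - the stutters dominate the metric
```

The average FPS of that run is still nearly 100, which is why two cards can post similar averages while one feels much smoother.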
> I'm waiting for someone to post a 4080 VR result, preferably with a 59x0 CPU, so I can see if there is much difference to the 4090 scores. Wondering if a 4080 alone would be "good enough" for now for me.

Get anything but the 4080. It's the worst. I have a 4090 now and used a 3080 with a Reverb G2 before. No big difference in VAM because of the CPU bottleneck anyway. So either get a used 30-series, wait for AMD's 7900, or get a 4090 if you're insane like the rest of us here.
> I wish everyone could share their benchmarks at 1080p resolution as a baseline. It makes comparison much easier.

Unfortunately the problem with that is that hardware has gotten so fast, it runs into the engine's FPS limit(*) at low resolutions.
> Increasing resolution causes more work for the GPU, which will give false results for the CPU, since it will be waiting for the graphics card. All of these tech benchmarkers on YouTube, Linus etc., do CPU tests at 1080p, preferably at low settings.

Yes, but testing at 1080p to isolate CPU performance assumes there is no artificial limit.
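The reasoning in the quote can be sketched with a toy model: each frame is roughly limited by the slower of the CPU and the GPU, and lowering resolution only shrinks the GPU's share. The millisecond costs below are made up, and real engines pipeline CPU and GPU work, so this is just the intuition:

```python
# Toy bottleneck model: a frame is roughly bounded by the slower of the
# CPU and GPU. Lowering resolution shrinks the GPU cost, so low-res
# results reflect the CPU; at higher resolutions the CPU waits on the GPU.
# (Hypothetical per-frame costs - real engines overlap this work.)

def fps(cpu_ms, gpu_ms):
    return 1000.0 / max(cpu_ms, gpu_ms)

CPU_MS = 5.0  # assumed fixed per-frame CPU cost (physics, scripts)

for res, gpu_ms in [("4K", 12.0), ("1440p", 7.0), ("1080p", 4.0)]:
    bound = "GPU" if gpu_ms > CPU_MS else "CPU"
    print(f"{res}: {fps(CPU_MS, gpu_ms):.0f} FPS ({bound}-bound)")
# Only at 1080p does the 5 ms CPU cost cap the result, so only there
# does the benchmark actually measure the CPU.
```

Which is exactly why an artificial engine-side FPS cap ruins the method: at 1080p the measured number stops being the CPU's real limit.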
> As you can see in my screenshot, I reached MAX FPS only in the cloth sim test, which is on the GPU.

The 12900K and 13900K ran into the limit on Baseline 1, ClothSim, Baseline 2 and HairSim.
> May I ask the CPU power consumption during the test? Baseline 3 and the peaks interest me particularly.

Here they are at 1080p; starting from CPU power, these are max values during the test:

And at 1440p:

> Yes, but testing at 1080p to isolate CPU performance assumes there is no artificial limit.
Unfortunately the problem with that is that hardware has gotten so fast, it runs into the engine's FPS limit(*) at low resolutions. Your and @trety's results, for example, are affected.

(*) To verify the limit: load VAM at a low resolution, disable Vsync, and load an empty scene. No matter which FPS counter you check (they have different precision), they all max out at around 300. If a test shows max FPS >= 300 (it usually maxes out at 309 FPS due to poor precision), the result is bad / not comparable.

What happens is that the CPU (or, to be more precise, a thread of the Unity engine) went to sleep at some point during the benchmark until it was allowed to continue with the next frame. Somewhere in the Unity engine there is a check that puts this thread to sleep if it finishes too fast. That is usually a good thing, to avoid wasting energy, but not for a benchmark. It's like having a fake Vsync on, except at 300 Hz. (Vertical synchronization locks the framerate to the monitor's refresh rate, so it cannot go higher than the monitor's frequency.)

It may not be much, but it DID affect the results negatively; that is a fact. The actual performance could have been higher.
Better: either test at a higher resolution (easiest, more reliable, and probably more precise) or disable the FPS limit (which may have unknown side effects, and the benchmark will complain about that plugin being loaded) to get correct and comparable results.
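To illustrate the sleep-based cap described above, here is a minimal sketch of how such a frame limiter works in principle. This is assumed logic for illustration only, not Unity's actual implementation:

```python
import time

# Minimal sketch of a sleep-based frame limiter. If a frame finishes
# under the ~3.33 ms budget, the loop sleeps off the remainder, so the
# counter can never report much more than 300 FPS no matter how fast
# the hardware is. (Illustrative logic - not Unity's actual code.)

TARGET_FPS = 300
FRAME_BUDGET = 1.0 / TARGET_FPS  # seconds per frame, ~3.33 ms

def run_frames(n):
    start = time.perf_counter()
    for _ in range(n):
        frame_start = time.perf_counter()
        # ... render/physics work would happen here; assume it is instant ...
        elapsed = time.perf_counter() - frame_start
        if elapsed < FRAME_BUDGET:
            time.sleep(FRAME_BUDGET - elapsed)  # the thread goes to sleep
    return n / (time.perf_counter() - start)    # measured FPS

# Even doing no work at all, the measured FPS stays at or below ~300.
print(f"{run_frames(60):.0f} FPS")
```

Note that `time.sleep` tends to oversleep slightly, which is also why real counters hover just under the cap rather than hitting it exactly.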
Loss of precision:
In general, the higher the FPS, the lower the measuring precision, because timer resolution is limited: the PC can measure milliseconds more precisely than nanoseconds. At 300 frames per second there are only 3.333 milliseconds to render one frame, and the engine does not just measure the total frame time, it also measures its parts (script, physics, rendering and wait time), so microsecond precision is already needed.
On top of that, there can be rounding errors from using less precise data types (float instead of double) in programming, and there are various ways to implement an FPS counter, with varying precision. For example, some FPS counters show an average over multiple past frames to make the number more stable and human-readable, while others just compute 1 / frame time = FPS.
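Both counter styles, and the float rounding issue, can be sketched like this (illustrative code, not what VAM or Unity actually ships):

```python
import struct
from collections import deque

# Style 1: instantaneous counter - just 1 / frame time, jumpy per frame.
def instantaneous_fps(frame_time_s):
    return 1.0 / frame_time_s

# Style 2: averaged counter - smooths over the last N frames for a
# stable, human-readable number.
class AveragedFps:
    def __init__(self, window=60):
        self.times = deque(maxlen=window)
    def update(self, frame_time_s):
        self.times.append(frame_time_s)
        return len(self.times) / sum(self.times)

# Rounding: at 300 FPS a frame takes ~3.333 ms. Squeezing that frame
# time through a 32-bit float (as a C# `float` would) loses precision,
# which shifts the FPS the counter reports.
frame_time = 1.0 / 300.0                                       # double
as_float32 = struct.unpack('f', struct.pack('f', frame_time))[0]
print(abs(1.0 / as_float32 - 300.0))  # small but nonzero float32 error
```

At 60 FPS the same rounding error would be several times smaller in FPS terms, which is the point above: the faster the frames, the less trustworthy the last digits of the counter.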
> Sorry for the double post, but I re-ran it. Somehow I got slightly worse results now.
>
> Here they are at 1080p; starting from CPU power, these are max values during the test:
> View attachment 179801
>
> And at 1440p:
> View attachment 179804
>
> Comparing CPU power usage and temps between these two tests, we can clearly say the CPU was actually lazy, waiting for the GPU, and not showing its full potential.

Many thanks! I find it quite insane that most RL reviews mention gaming power draw somewhere around 70-100 W, and here VAM can take nearly 200 W. I'm thinking I may change my platform, and the most reasonable ITX DDR4 route is a mobo with around a 150 W power limit. I thought that could be enough for VAM, but seeing your results I'm...