Benchmark Result Discussion

I received this result from a VR benchmark, which seems good to me, but I still see a lot of stuttering when using VaM and can never get a clear view above ~50 fps with ASW off. What can I do to increase my performance?
I'm an ex-CV1-and-Rift-S user... with a 3070 too (upgraded everything, actually). With an Oculus Rift you can get rid of SteamVR, and you should definitely have 32GB of RAM. Seriously consider killing every piece of Steam-related crapware if you're on a good DisplayPort Oculus headset (or was the CV1 HDMI?... it was such a long time ago :) ). You don't need Steam to play VaM with Oculus.
 
Just wanted to report somewhere my results of switching to a Gen 4 NVMe SSD. I copied my install to the new drive and ran a quick test.

VAM startup times:

Old drive - Adata MX500 1TB (SATA SSD) = 2:44

New drive - Corsair MP600 Pro 2TB (Gen 4 NVMe) = 1:26
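For anyone who wants to compare their own drives, a minimal hand-timed sketch is enough (Python; the install path here is a made-up example, point it at your own VaM.exe):

```python
import subprocess
import time

# Hypothetical install path; adjust to your own drive/folder.
VAM_EXE = r"D:\VaM\VaM.exe"

start = time.perf_counter()
subprocess.Popen([VAM_EXE])
# VaM's startup has no programmatic "done" signal, so stop the clock by hand.
input("Press Enter the moment the VaM main UI appears... ")
print(f"Startup time: {time.perf_counter() - start:.1f} s")
```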
 
I'm an ex-CV1-and-Rift-S user... with a 3070 too (upgraded everything, actually). With an Oculus Rift you can get rid of SteamVR, and you should definitely have 32GB of RAM. Seriously consider killing every piece of Steam-related crapware if you're on a good DisplayPort Oculus headset (or was the CV1 HDMI?... it was such a long time ago :) ). You don't need Steam to play VaM with Oculus.

Actually I'm using a Quest 2 with Link; not sure why it says Rift on it. Do you think 32GB of RAM would be a good upgrade? I've been meaning to buy more RAM recently, and better VaM performance would just sweeten the deal.
 
Actually I'm using a Quest 2 with Link; not sure why it says Rift on it. Do you think 32GB of RAM would be a good upgrade? I've been meaning to buy more RAM recently, and better VaM performance would just sweeten the deal.
Indeed (just be careful to get a well-matched 16+16 kit; populating more than two slots with different types of RAM can immediately cause serious issues). And you don't have to take my word for it: just look at how much RAM Windows commits when you play VaM. 32GB and eventually the best CPU (Intel!) you can afford: you will get a very good gaming life and solid VaM sessions without crashes.
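If you want to check that before buying, here is a minimal sketch that samples VaM's memory use while you play (Python with psutil; the process name "VaM.exe" is an assumption, adjust to your install):

```python
import time

import psutil

# Poll VaM's working set and total system RAM use every 5 seconds.
# Assumes the process is named "VaM.exe"; adjust if your install differs.
while True:
    vam = [p for p in psutil.process_iter(["name", "memory_info"])
           if p.info["name"] == "VaM.exe"]
    if vam:
        rss_gb = vam[0].info["memory_info"].rss / 2**30
        print(f"VaM working set: {rss_gb:.1f} GB", end="  |  ")
    mem = psutil.virtual_memory()
    print(f"system: {mem.used / 2**30:.1f} / {mem.total / 2**30:.1f} GB "
          f"({mem.percent}%)")
    time.sleep(5)
```

If the system total regularly pushes past your installed 16GB while playing, the 32GB upgrade will do more than sweeten the deal.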
 
Actually I'm using a Quest 2 with Link; not sure why it says Rift on it. Do you think 32GB of RAM would be a good upgrade? I've been meaning to buy more RAM recently, and better VaM performance would just sweeten the deal.
I used a Quest 2 wirelessly with Virtual Desktop, with VD's included Space Warp setting set to Always On. Everything runs very smoothly. VaM stutters like crazy during head movement without ASW, and the ASW from Oculus sucks and makes everything wiggly.
 
I expected better, even with a CPU from a few generations back. The CPU is slightly overclocked using the built-in utilities on my motherboard, but the GPU is bog standard with no performance tuning at all. Replacing the CPU would require an entirely new bundle, so this will have to do until everything else starts suffering enough to warrant dropping that kind of money.

Benchmark-20221112-194742.png
 
3600 to 5600:
Big improvement in Baseline 3 and min 1%,
but HairSim and ClothingSim show no change;
the bottleneck there is the GPU.

5600 @ 4.6 GHz
3600 @ 4.2 GHz
DDR4-3200 C16, 2×8GB
50GB virtual memory
3060 Ti @ 240W

3600.png
5600.png
 
5950X, Gigabyte Aorus 3070, Crosshair VIII Hero X570, EK AIO Elite 360, Rift S headset (no idea why it always says Rift CV1 on the report), no overclocking. Just wanted to get updated version 3 benchmarks to compare to the crazy numbers from the lucky 4090 owners here!

Rift S
Benchmark-20221118-211242.png


Desktop
Benchmark-20221118-214039.png
 
Can I still gain FPS? Everything is stock, no OC or anything, just an underclock on my 3090.
 
If anyone wonders... I just switched from a 12900K to a 13900K. Also added 32GB more RAM, but that's irrelevant here.

12900k
Benchmark-20221119-105622.png

13900k
Benchmark-20221119-200516.png

I got a better min 1%, so I guess it might be more stable, but overall it's... within the margin of error.

There is one upside though: the highest temp I noticed during a test was 62 degrees Celsius, while the 12900K went up to 74. Both used the same 360 AIO,
though I changed the thermal paste from the stock NZXT to Thermal Grizzly.

In terms of VaM, was it worth it? Don't think so lol
 
I'm waiting for someone to post a 4080 VR result, preferably with a 59x0 CPU, so I can see if there is much difference from the 4090 scores. Wondering if a 4080 alone would be "good enough" for me for now.
 
If anyone wonders... I just switched from a 12900K to a 13900K. Also added 32GB more RAM, but that's irrelevant here.

12900k: Benchmark-20221119-105622.png
13900k: Benchmark-20221119-200516.png
I got a better min 1%, so I guess it might be more stable, but overall it's... within the margin of error.

There is one upside though: the highest temp I noticed during a test was 62 degrees Celsius, while the 12900K went up to 74. Both used the same 360 AIO,
though I changed the thermal paste from the stock NZXT to Thermal Grizzly.

In terms of VaM, was it worth it? Don't think so lol
May I ask what the CPU power consumption was during the test? Baseline 3 and the peaks interest me particularly.
 
Intel i7-12700K
NVIDIA RTX 4090
64GB DDR4 RAM (4×16GB @ 4000MHz)
Windows 10

450GB VaM folder

Benchmark-20221124-005714.png
 
I wish everyone would share their benchmarks at 1080p as a baseline resolution. It would make comparisons much easier.
 
I'm waiting for someone to post a 4080 VR result, preferably with 59x0 CPU, so I can see if there is much difference to the 4090 scores. Wondering if 4080 alone would be "good enough" for now for me.
Get anything but the 4080, it's the worst value. I have a 4090 now and used a 3080 with a Reverb G2 before. No big difference in VaM because of the CPU bottleneck anyway. So either get a used 30xx, or wait for AMD's 7900, or get a 4090 if you are insane like the rest here.
 
I wish everyone could share their benchmarks with 1080p resolution as a baseline. It makes comparison much easier.
Unfortunately the problem with that is that hardware has become so fast it runs into the engine's FPS limit(*) at low resolutions.
Your and @trety's results, for example, are affected.

* = To verify the limit: load VaM at a low resolution, disable Vsync, and load an empty scene. No matter which FPS counter you check (they have different precision), they all max out at around 300.

If a test shows max FPS >= 300 (it usually maxes out at 309 FPS due to poor precision), the result is bad / not comparable.
The CPU (or, to be more precise, a thread of the Unity engine) went to sleep at some point during the benchmark until it was allowed to continue with the next frame. Somewhere in the Unity engine there is a check that puts this thread to sleep if it finishes too fast. That is usually a good thing, to not waste energy, but not for a benchmark. It's like having a fake Vsync on, except at 300 Hz. (Vertical synchronization locks frames to the monitor's refresh rate, so the framerate cannot be higher than the monitor's frequency.)
It may not be much, but it DID affect the result negatively. That is a fact. The actual performance could have been higher.
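To picture what's happening, here is a tiny illustrative sketch of such a frame limiter (Python; the cap value and function are made up for the demo, this is not Unity's actual code):

```python
import time

TARGET_FPS = 300
FRAME_BUDGET = 1.0 / TARGET_FPS  # ~3.333 ms per frame


def render_frame():
    """Stand-in for the real script + physics + rendering work."""
    pass


t0 = time.perf_counter()
frames = 1000
for _ in range(frames):
    start = time.perf_counter()
    render_frame()
    elapsed = time.perf_counter() - start
    if elapsed < FRAME_BUDGET:
        # The engine's "nap": sleep away the leftover budget, so measured
        # FPS can never exceed TARGET_FPS, like a fake Vsync at 300 Hz.
        # (Real engines use higher-resolution waits; time.sleep's coarse
        # granularity may cap this demo well below 300 on some systems.)
        time.sleep(FRAME_BUDGET - elapsed)
print(f"measured: {frames / (time.perf_counter() - t0):.1f} FPS")
```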

Better:
Either test at a higher resolution (easiest, more reliable, and probably more precise) or disable the FPS limit (which may have unknown side effects, and the benchmark will complain about that plugin being loaded) to get correct and comparable results.

Loss of precision:
In general, the higher the FPS, the lower the measuring precision, because timer resolution is limited:
a PC can measure milliseconds much more reliably than nanoseconds.
At 300 frames per second there are only 1 s / 300 ≈ 3.333 milliseconds to render one frame.
And the engine does not just measure the total frame time; it breaks it down into script, physics, rendering, and wait time, so microsecond precision is already needed.
There can also be rounding errors from using less precise data types (float instead of double) in programming.
On top of that, there are various ways to implement an FPS counter, with varying precision.
For example, some FPS counters show an average over multiple past frames to make the number more stable/human-readable, while others just compute 1 / frame time = FPS.
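A minimal sketch of those two counter styles, plus the float rounding issue (Python; purely illustrative, not VaM's or Unity's implementation):

```python
import struct
from collections import deque


class InstantFps:
    """Naive counter: 1 / last frame time. Immediate but jumpy."""

    def update(self, frame_time):
        return 1.0 / frame_time


class AveragedFps:
    """Smoothed counter: FPS over the last N frames, more human-readable."""

    def __init__(self, window=60):
        self.times = deque(maxlen=window)

    def update(self, frame_time):
        self.times.append(frame_time)
        return len(self.times) / sum(self.times)


# Rounding demo: squeeze a ~3.333 ms frame time through 32-bit float
# (as a less precise engine datatype would) and compare the reported FPS.
dt64 = 1.0 / 300.0
dt32 = struct.unpack("f", struct.pack("f", dt64))[0]
print(1.0 / dt64, 1.0 / dt32)  # the float32 round-trip shifts the reported FPS
```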
 
I did my test at 1080p since I was testing the CPU, not the GPU.
As you can see in my screenshot, I hit MAX fps only in the cloth and hair sim tests, which run on the GPU. All the other results are way below that.

Increasing the resolution creates more work for the GPU, which gives false results for the CPU, since it ends up waiting for the graphics card.
All the tech benchmarkers on YouTube, Linus etc., do CPU tests at 1080p, preferably at low settings.
 
Increasing the resolution creates more work for the GPU, which gives false results for the CPU, since it ends up waiting for the graphics card.
All the tech benchmarkers on YouTube, Linus etc., do CPU tests at 1080p, preferably at low settings.
Yes, but testing at 1080p to isolate CPU performance assumes there is no artificial limit.
As you can see in my screenshot, I hit MAX fps only in the cloth sim test, which runs on the GPU.
The 12900K and 13900K ran into the limit on Baseline 1, ClothSim, Baseline 2 and HairSim:
all 309.3 @ 1% max.

Yes, I am nitpicky about the results. It's a flaw that I want to point out, because everybody seems to either ignore it or not know about it.
I found out about it because it was suspicious that my VaM FPS could not go above 309 while I was scripting a plugin with an FPS counter. So I investigated.

On a different note: with a RAM upgrade from 2×16GB to 4×16GB, with all four RAM slots populated many motherboards limit the speed compared to running only two modules.
Not sure if that is the case here. It could affect a CPU comparison too. Anyway, nice hardware; I will shut up now.
 
Yeah, I actually should re-run the test; after adding the other two modules my BIOS turned off the XMP profile and limited them.
I didn't measure power usage either, which someone asked about earlier, though I can't do that on the 12900K since it's already in its box lol
 
Sorry for the double post, but I re-ran it. Somehow I got slightly worse results now.
May ask the cpu power consumption during test? Baseline 3 and peaks interests me particularly.
Here it is at 1080p; starting from CPU power, these are the max values during a test:
13900k_2nd run.png

Yes, but testing on 1080p to isolate the CPU performance assumes there is no artificial limit.
And at 1440p:
13900k_1440.png

Comparing CPU power usage and temps between these two tests, we can clearly say the CPU was actually lazy, waiting for the GPU, not showing its full potential.
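One way to confirm that kind of GPU bottleneck from outside the game is to poll the GPU while the benchmark runs. A minimal sketch using NVIDIA's NVML bindings (Python, pip package nvidia-ml-py; sustained ~100% utilization while CPU power stays low points at the GPU):

```python
import time

import pynvml  # from the nvidia-ml-py package

pynvml.nvmlInit()
gpu = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU in the system

# Poll utilization and board power once a second while a test runs.
try:
    while True:
        util = pynvml.nvmlDeviceGetUtilizationRates(gpu).gpu  # percent
        watts = pynvml.nvmlDeviceGetPowerUsage(gpu) / 1000.0  # mW -> W
        print(f"GPU util: {util:3d}%  power: {watts:6.1f} W")
        time.sleep(1)
finally:
    pynvml.nvmlShutdown()
```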
 
Unfortunately the problem with that is that hardware has become so fast it runs into the engine's FPS limit(*) at low resolutions.
Your and @trety's results, for example, are affected.

* = To verify the limit: load VaM at a low resolution, disable Vsync, and load an empty scene. No matter which FPS counter you check (they have different precision), they all max out at around 300.

If a test shows max FPS >= 300 (it usually maxes out at 309 FPS due to poor precision), the result is bad / not comparable.
The CPU (or, to be more precise, a thread of the Unity engine) went to sleep at some point during the benchmark until it was allowed to continue with the next frame. Somewhere in the Unity engine there is a check that puts this thread to sleep if it finishes too fast. That is usually a good thing, to not waste energy, but not for a benchmark. It's like having a fake Vsync on, except at 300 Hz. (Vertical synchronization locks frames to the monitor's refresh rate, so the framerate cannot be higher than the monitor's frequency.)
It may not be much, but it DID affect the result negatively. That is a fact. The actual performance could have been higher.

Better:
Either test at a higher resolution (easiest, more reliable, and probably more precise) or disable the FPS limit (which may have unknown side effects, and the benchmark will complain about that plugin being loaded) to get correct and comparable results.

Loss of precision:
In general, the higher the FPS, the lower the measuring precision, because timer resolution is limited:
a PC can measure milliseconds much more reliably than nanoseconds.
At 300 frames per second there are only 1 s / 300 ≈ 3.333 milliseconds to render one frame.
And the engine does not just measure the total frame time; it breaks it down into script, physics, rendering, and wait time, so microsecond precision is already needed.
There can also be rounding errors from using less precise data types (float instead of double) in programming.
On top of that, there are various ways to implement an FPS counter, with varying precision.
For example, some FPS counters show an average over multiple past frames to make the number more stable/human-readable, while others just compute 1 / frame time = FPS.

Yeah, I know that the more powerful rigs hit the 300 FPS limit at 1080p, but the average FPS is still pretty telling of performance, no?

Anyway, my point is that a baseline resolution that everybody uses makes evaluating the performance of different HW setups easier. 1080p is something everyone is able to run in 2022, but I guess 1440p is getting pretty common too.
 
Upgraded from a 3600X to a 5800X3D
- 32GB RAM at 3600MHz
- Clean VaM install
- I realised I did my original benchmark windowed (oops), so this benchmark set is *almost* 1080p. The relative Baseline 3 numbers are the most relevant here, in case anyone is contemplating this upgrade (as I was) rather than a full system rebuild.

3600X
Benchmark-20221125-142522.png


5800X3D
Benchmark-20221126-064132.png
 
Sorry for the double post, but I re-ran it. Somehow I got slightly worse results now.

Here it is at 1080p; starting from CPU power, these are the max values during a test:
13900k_2nd run.png

And at 1440p:
13900k_1440.png
Comparing CPU power usage and temps between these two tests, we can clearly say the CPU was actually lazy, waiting for the GPU, not showing its full potential.
Many thanks! I find it quite insane that most real-world reviews report gaming power somewhere around 70-100W, and here VaM can pull nearly 200W. I am thinking I may change my platform, and the most reasonable ITX DDR4 route is a motherboard with roughly a 150W power limit. I thought that could be enough for VaM, but seeing your results I'm not so sure even a 13600K wouldn't throttle.
 