Benchmark Result Discussion

Another test, this time with a version I modified.

Benchmark-20231102-184102.png


It will only run the Baseline 3 test.

This is very useful for quickly sharing results from the most critical test of the entire benchmark.

If anyone is interested, I'll drop my modification in case you want to try it.
 

Attachments

  • MacGruber.BenchmarkBL3.1.var (27 MB)
Full test with a simple PBO adjustment.

Before:

Benchmark-20231031-220302.png


After:

Benchmark-20231104-155006.png


The difference is minimal, and the Baseline 3 test is where the improvement stands out the most.
 
Life has really got me down, so I've taken to testing out various old hardware combinations to distract and amuse myself by seeing how badly they can run various modern things, culminating in a full day's drive to buy the in-store-only, not yet benchmarked for VaM (what am I doing?), Ryzen 5 5600X3D.

I was really aiming for the bottom with this start: a 4-core/8-thread 35 W APU with Vega 11 graphics. On paper, Vega 11 is better than the GTX 550 Ti, but with the handicap of shared memory this setup cratered. Even at a reduced 720p, the numbers would need to be in seconds per frame. It took 25 minutes to complete just the one scene; each second on the on-screen timer took 12-17 real-world seconds. The cyberpunk room demo gets 10-24 fps with the same settings.
Using Pepito123's Baseline 3-only mod.
2400ge apu.png
GTX 550 Ti, system pull 200 W. Wanted to try this with a Phenom II, but the Win 10 USB installer kept crashing. Frames have never been so consistent.
Benchmark-20231112-045619.png
GTX 750 Ti, system pull 130 W. The non-X3D gave slightly slower physics times at 50 ms with the same fps. Reducing to 720p increased frames by 3.
Benchmark-20231112-041003.png

The 5600X3D is GPU-bound here, with only a 2 fps increase and a 2 ms decrease in physics time; a simple four-character pose scene I made reduced physics time by about 5 ms.
Temperature in VAM is only slightly higher (2-3°) than on the non-X3D, while other applications increased by 10-15°. System pull is 305 W, 10 W more than the non-X3D.
56x3d.png
 
How much power does the GPU draw, if I may ask? My 6800 XT gets the exact same numbers; I am curious whether the generational uplift means something for efficiency at least. Mine takes somewhere around 230 W for this.
270W

It's amazing that you get the same results as I did in a previous generation.

What is your configuration? Can you share your benchmark?
 

It is here. Did some tweaking on CPU and RAM in the BIOS to get some more juice, but to be honest it isn't night and day even with the best possible tuning. Just get your RAM timings in order. GPU consumption tops out at 250 W.
1080p:
Benchmark-20220926-154516.png

1080p with tessellation set to 8x in Adrenalin:
Benchmark-20220912-104120.png

1440p:
Benchmark-20220912-102713.png

1080p with tessellation 8x and 5800X3D:
Benchmark-20230904-072837.png


I switched the 6-core 5600X to a 5800X3D a couple of months ago, and while I have seen Baseline 3 results below 3 ms, I could not achieve that even with vanilla VaM. Perhaps my aged Win10 install is the problem. Anyway, I think it was a handy upgrade thanks to the AM4 socket. The V-Cache is definitely helping VaM.
 
Needless to say, I need to upgrade some parts..

Would there be any sense in buying something like a RTX 3060, or will even that be bottlenecked by my CPU? If I can get the Simpler Physics scene up to an average of 30 fps then that would probably be fine for me.

My PC is 96% Office work and the last 4% occasional gaming. VAM is quite interesting to play around with, so it would be nice to run scenes without having everything at Ultra Low, and without having to spend much more than $300.

Benchmark-20231124-153058.png
 
A 3060 should do 30 fps at 1440p in the Simpler Physics scene. I'm getting 55 fps in it at 1440p/60 Hz.
When I had a Ryzen 2600 with a 3060 Ti, Baseline 3 was 36 fps with 20 ms physics at 1080p; at 300 ms, that is suffering from something.
 

After upgrading to a new RTX 3060, with all other components unchanged, Baseline 3 performance is 10x better.

I didn't run a full benchmark, but for other scenes frames more or less doubled, sometimes tripled. I'm satisfied.

Benchmark-20231129-125259.png
 
Thanks for the detailed explanation.
I've read articles saying that VRAM and memory bus width are important for VR gaming and 4K, but on the other hand some articles say Radeon is not well suited to VR gaming because of VR optimization issues?

My concern is performance, especially in VR mode; sorry for my insufficient wording.
A 4080 is better and a 4090 would be the best one, I know, but I have a 2K 144 Hz monitor and a 4070 Ti is ideal for that, so a 4080 is excessive for non-VR gaming, even in titles like Cyberpunk or Hogwarts Legacy. That's why I'm reluctant to buy a 4080.

Considering the balance between performance and cost, I think it's better to buy a 4070 Ti and, in one or two years, sell it and buy a 50xx series card.

Otherwise, should I wait for the 4070 Ti Super? It is just a rumor.
I know I am super, super late to this, but.

Do you need CUDA cores? I don't mean for RTX ray tracing; you can do that with ReShade if you really want to, but frankly it's kinda overblown right now. I mean, do you need CUDA cores for rendering, for machine learning, for anything that REALLY needs them?

If you don't, get an AMD GPU. If you do, you already know you don't really have a choice.
 
Having recently upgraded from a GTX 970 to an RTX 3060, I was initially happy with the result, but playing around in VR it did lack performance unless scenes were kept very simple. It blasts through everything on the Low quality setting, but what's the fun in that.

For the fun of it, here's a benchmark with the Quest 2 and my old 2700X processor. I'm getting a 5700X on Monday and will then redo the benchmark. While I don't expect much better performance, a 30% increase would help make simple scenes a bit more fluid, as they're usually hovering around 35 fps in VR.

Hopefully someone out there will appreciate these budget upgrade results.

Benchmark-20231208-150443.png
 
Christmas came early, the 5700X arrived a couple days earlier.

Result-wise, jumping from a 2700X to a 5700X gave me a 50% increase in 1440p.

But why has my FPS dropped in VR mode? That makes no sense. My 2700X ran at 20 fps in VR, but the 5700X manages just 18 fps???

Benchmark-20231129-125259.png


Benchmark-20231209-103315.png



VR Results:

Benchmark-20231208-150443.png


Benchmark-20231209-104509.png
 
Did some profiling, reverse engineering and patching with dnSpyEx on the VAM engine and got a FREE 13% performance increase in the physics engine (see Baseline 3). The more physics objects a scene has, the bigger the boost from my patch.

All I did was add batching and multithreading in code hotspots in GPUCollidersManager::ComputeColliders. No artifacts, bugs or other side effects are possible. Just simple, first-grader optimization of engine code.
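For anyone curious what that kind of change looks like, here is a minimal C# sketch of the batching + multithreading pattern. It is not the actual decompiled VAM code: the ColliderInput/ColliderResult types, the per-collider math and the batch size are all made-up placeholders for illustration.

C#:
// Hypothetical sketch only; none of these types or numbers come from VAM.
using System;
using System.Threading.Tasks;

struct ColliderInput  { public float X, Y, Z, Radius; }
struct ColliderResult { public bool Hit; public float Depth; }

static class ColliderBatcher
{
    const int BatchSize = 256; // assumed batch size, tune per workload

    // The kind of sequential per-collider hot loop a profiler would flag.
    public static void ComputeSequential(ColliderInput[] input, ColliderResult[] output)
    {
        for (int i = 0; i < input.Length; i++)
            output[i] = ComputeOne(input[i]);
    }

    // Batched + multithreaded version: each worker owns one contiguous batch,
    // so no two threads ever write to the same output slot.
    public static void ComputeBatchedParallel(ColliderInput[] input, ColliderResult[] output)
    {
        int batches = (input.Length + BatchSize - 1) / BatchSize;
        Parallel.For(0, batches, b =>
        {
            int start = b * BatchSize;
            int end = Math.Min(start + BatchSize, input.Length);
            for (int i = start; i < end; i++)
                output[i] = ComputeOne(input[i]);
        });
    }

    // Placeholder per-collider math (simple sphere-vs-origin test).
    static ColliderResult ComputeOne(ColliderInput c)
    {
        float d = (float)Math.Sqrt(c.X * c.X + c.Y * c.Y + c.Z * c.Z);
        return new ColliderResult { Hit = d < c.Radius, Depth = c.Radius - d };
    }
}

The important bit is that each worker owns a contiguous slice of the output array, so no locks are needed.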

The profiler also showed me that the physics simulation spends 1% waiting for the GPU to finish collision simulation, 9% on CPU calculations, and 90% as plain wasted CPU time in managed wrapper calls to the Mono runtime. This explains why Ryzens with X3D are so fast in VAM: their enormous cache eats all those useless wrapper calls.

With some further optimization, a 100% increase in physics speed is feasible. Offloading the physics calculations to a native library and calling UnityPlayer.dll directly could feasibly increase physics speed by 1000%.

Overall Unity and VAM code are a shitshow.

Original Assembly-CSharp.dll
orig.png

Patched Assembly-CSharp.dll
patch.png
 
So, I ran a bunch of desktop and VR benchmarks today on my R5 5600X, RTX 3060 Ti, 16 GB RAM at 3200 MHz, running from an NVMe SSD; apart from enabling XMP, nothing is overclocked and everything is stock.
The desktop resolutions came out a bit off in the results, which might have had something to do with windowed mode, I'm not sure, but I assume the difference is too small to have any visible impact.
For the headset I used a Quest 3, both via wired Oculus Link and wireless Virtual Desktop. When it says "Oculus Rift CV1", that's just the Oculus PC app reporting it as a CV1, because Meta is lazy I guess.

Desktop: 1920x1080
107.08 fps avg
Desktop - 1080p.png


Desktop: 2560x1440
82.32 fps avg
Desktop - 1440p.png


Desktop: 3840x2160 ("4k")
51.84 fps avg
Desktop - 4K.png


Oculus Link wired cable, oculus runtime: 3744x2016 (1X, 90hz)
38.51 fps avg
VR Wired Link - 1X 90hz.png


Oculus Link wired cable, oculus runtime: 4128x2240 (1.15X, 72hz). I threw this one in because I saw some people using 72hz mode
37.13 fps avg
VR Wired Link - 1.15X 72hz.png


Oculus Link wired cable, oculus runtime: 4512x2448 (1.2X, 90hz)
28.44 fps avg
VR Wired Link - 1.2X 90hz.png


Wireless VR over Virtual Desktop, Oculus Runtime, 160mbps, HEVC, SSW disabled: Low settings (3456x1824, 90hz)
48.09 fps avg
VR Wireless VD - Low 90hz.png


Wireless VR over Virtual Desktop, Oculus Runtime, 160mbps, HEVC, SSW disabled: Medium settings (4032x2112, 90hz)
40.76 fps avg
VR Wireless VD - Medium 90hz.png


Wireless VR over Virtual Desktop, Oculus Runtime, 160mbps, HEVC, SSW disabled: High settings (4992x2592, 90hz)
32.67 fps avg
VR Wireless VD - High 90hz.png


Another result I chose not to include: you need, on average, 2 barf buckets per VR benchmark.
 
5900X PBO & 3060 Ti 104% power limit +80 Core +1300 Mem
Benchmark-20240115-044439.png


With 1 CCD via BIOS. Pretty nice fps boost.
5900x 1 CCD 3060 Ti.png
 
5900X & 3060 Ti ...
Interesting, so I'm getting a slightly higher avg fps on my stock 3060 Ti than your stock 3060 Ti, in spite of you having a better CPU and double the amount of RAM. I mean, the fps difference is negligible, but still. I am really curious what your results would be at 4K and/or in VR. Either way, this probably indicates that at 1080p the 3060 Ti is being maxed out and is probably the bottlenecking factor.

Edit: So I ran it again in full-screen mode to compare, and I figured I would run HWMonitor and MSI Afterburner to see if there would be a difference. It looks to me that the 3060 Ti is the bottleneck in VAM even for a 5600X, even at 1080p, which might be handy to know since I was considering upgrading.
Quick question: you have a 5900X; isn't the CCD boost only for the X3D?

1080p stock settings
1080p fullscreen stock.png


1080p overclocked, 103% power limit, +100 core clock, +1000 memory clock
1080p fullscreen 103plimit 100core 1000mem.png


4k stock
4k fullscreen stock.png


4k overclocked, 103% power limit, +100 core clock, +1000 memory clock
4k fullscreen 103plimit 100core 1000mem.png
 

Hey. I'm actually running my 3060 Ti overclocked at 104% power limit +80 Core +1300 Mem. My 5900X also has PBO and is undervolted. Though I do notice VAM crashes sometimes so maybe I have to dial back my GPU OC.

I disabled a CCD because I wanted to make sure that VAM runs on the best cores. Plus, I wanted to reduce any chance of inter-CCD latency. That's what I heard anyway about old games. I'm sure newer games make better use of multi-core processing so there would be no reason to disable a CCD or set core affinity.
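If you ever want the single-CCD behaviour without touching the BIOS, roughly the same effect can be approximated by pinning the process to one CCD via core affinity. Below is a hypothetical little C# launcher showing the idea; the VaM.exe path is a placeholder, and the 0xFFF mask assumes CCD0 maps to logical processors 0-11 (6 cores / 12 threads on a 5900X), which is worth verifying on your own system.

C#:
// Hypothetical launcher: start VaM pinned to the first CCD instead of
// disabling a CCD in the BIOS. Path and affinity mask are assumptions.
using System;
using System.Diagnostics;

class PinToCcd0
{
    static void Main()
    {
        var vam = Process.Start(@"C:\VaM\VaM.exe");  // adjust to your install path
        vam.ProcessorAffinity = (IntPtr)0x0FFF;      // bits 0-11 = logical CPUs 0-11 (CCD0)
        Console.WriteLine($"VaM started (PID {vam.Id}) pinned to CCD0.");
    }
}

The same mask can also be applied after the fact from Task Manager's "Set affinity" dialog, which is the no-code route.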
 
I noticed myself that going over +1000 mem clock crashed VaM for me too; +1000 was the max that would make it through the benchmark without crashing.
But I didn't know that was possible on the 5900X; I thought that was reserved for the newer X3D models. In that case I should have a look, maybe it's possible on mine too.
The weird thing I found, though, was that for me it seemed like the CPU was not bottlenecking the GPU and the 3060 Ti was the limiting factor, but you clearly got a big uplift from CPU settings alone, right?
 
The 5600X has a single die, so there isn't another one to disable. The 5900X is a pretty bad CPU for VAM since the game relies more on single-core performance. Disabling a CCD effectively makes my CPU a higher-clocked 5600X.

I believe at 1080p the biggest bottleneck is the CPU, since I could squeeze more fps out of the 3060 Ti. I could see that my physics time improved a lot by optimizing my CPU.
 
@LsRp Very interesting results.

Directly comparing my 5700X and 3060 against your 5800X3D and 2060S, the 1080p results are nearly identical for all scenes. The 4070 makes a massive difference, wow! I do wonder if I shouldn't have spent a few hundred more on a better GPU than the 3060, but oh well, too late now.

Benchmark-20240120-150128.png
 