Benchmark Result Discussion

I don't understand the question: everything maxed out of course :p :D
(13900K)

All fun and games until Intel changes a single pin on the next-gen CPU (on purpose, for $$$ reasons) to force you to buy an entire new motherboard. That is the reason I don't buy Intel any more. They are absolute chancers.
 
Anyone got an RX 7900 XTX in their system and can share some results? I'd be really interested, as AMD usually doesn't perform as well as Nvidia in VaM. Looking to maybe upgrade to an RX 7800 XT, as I play a lot of other games too.
 
@geo_gan
And still: Intel has the highest single-core performance. So what should I do? I have NO choice?
But please, no Intel vs. AMD in this thread: no "this one is better at that" fanboyism, no dollar-per-performance debates, no "Lisa Su is a better human being" vs. "we are all just consumers", no heat problems vs. power consumption. Please don't do this. Buy whatever you want and be happy with it. M'kay?

@vnet
I've read that it doesn't perform that well, but I have no solid tests or anything. Some crashes here and there.
Check out this thread, it should be what you're looking for: https://hub.virtamate.com/threads/has-anyone-tried-vam-with-the-rx-7900-xtx.31366/
 
Kingston Fury 2x16 GB RAM, running at 4600;
Gigabyte Gaming OC 4090
7950X
Unsure if my results are due to the 7950X being that much behind the Intel chips, or if I need to do some tweaking.

[Attachment: Benchmark-20221219-215838.png]
 
@zilch16
Pretty sure it's because of the single-core difference; your physics time is remarkably slower than with Intel. That's what I meant above.
 
Man, IDK. My old 11900K is still faster at physics than your 7950X test. Maybe some tweaking is needed. Make sure you set the XMP overclock on the RAM, or whatever it's called for AMD?
[Attachment: Benchmark-20221022-175850.png]
 
I gathered some data from this thread and made a benchmark chart. The data should be considered inaccurate, as some entries had just one test sample (like the 2070 Super or 980 Ti), while others, like the 4090, should be quite accurate (6 test samples for 1080p). I tried to eliminate obvious stray data, but the inaccuracy shows in the results - like an RX 6800 XT outperforming a 6900 XT, or a 3090 Ti performing better at 1440p than at 1080p. So this should only serve as a rough orientation. I first tried to make a CPU chart as well, but as expected, all benchmark scenes appear to be heavily GPU-bound.


[Attachment: Bild_2022-12-20_192455740.png]

(The collected FPS data is not the average, but the "mirror" scene data, as this scene seems to be the most GPU-bound.)
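
If anyone wants to reproduce or extend the chart, here's roughly how the aggregation could look in Python - a minimal sketch, assuming you've transcribed the mirror-scene FPS samples per GPU by hand. The sample values below are made up for illustration, not the thread's actual numbers:

from statistics import median, quantiles

# Hypothetical mirror-scene FPS samples per GPU (made-up values).
samples = {
    "RTX 4090":   [205, 212, 198, 321, 209, 207],  # 321 is an obvious stray value
    "RX 6800 XT": [142, 150],
    "GTX 980 Ti": [61],                             # single sample: accuracy unknown
}

def robust_fps(values):
    # Median after dropping values outside 1.5*IQR; tiny samples pass through as-is.
    if len(values) < 4:
        return median(values)
    q1, _, q3 = quantiles(values, n=4)
    iqr = q3 - q1
    kept = [v for v in values if q1 - 1.5 * iqr <= v <= q3 + 1.5 * iqr]
    return median(kept)

for gpu in sorted(samples, key=lambda g: -robust_fps(samples[g])):
    fps = samples[gpu]
    print(f"{gpu:11s} {robust_fps(fps):6.1f} fps ({len(fps)} sample(s))")

With only one or two samples per card, though, no filter will save you - which is why the 980 Ti and 2070 Super entries should be taken with a grain of salt.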
 

Nice idea, but... benchmarking only the GPU is pointless in terms of VaM. It's still all CPU-bound too.
Even for the "GPU-specific" test [the Mirror one you used] I got 30 FPS higher results with a 13900K+4090 than Zilch above got with a Ryzen 7950X+4090, at the same resolution.
As Matt showed above with his current out-of-curiosity testing platform [13900K + 2080 Ti], a strong CPU with a weaker GPU won't benefit either.

I'm not sure which values you used for that chart. I got 210+ avg in the Mirror test at 1440p.

I was hoping we could use RenderTime in the mirror or hair sim tests to isolate GPU performance, but they seem to be affected by the CPU too.
For example, even in these tests, I had better results with a 13900K+3090 than some people with a 4090.

So, at the end of the day, we can only compare whole systems, or isolate GPU/CPU performance across very similar builds [like same CPU but different GPU, and vice versa].
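
To make the "which part is the bottleneck" point concrete, here's a minimal sketch of the sanity check I mean, assuming you've copied the per-scene physics and render times (in ms) off the benchmark screenshots. The numbers below are placeholders, not anyone's real results:

# Per-scene bottleneck heuristic: within the frame budget (1000/fps ms),
# compare physics time (CPU-side) against render time (GPU-side).
scenes = {
    # scene name: (avg_fps, physics_ms, render_ms) -- placeholder values
    "Baseline 3": (90.0, 8.5, 3.0),
    "Mirror":     (160.0, 1.5, 4.5),
}

for name, (fps, physics_ms, render_ms) in scenes.items():
    frame_ms = 1000.0 / fps
    bound = "CPU/physics" if physics_ms > render_ms else "GPU/render"
    print(f"{name}: {frame_ms:.1f} ms/frame, "
          f"physics {physics_ms} ms vs render {render_ms} ms -> leaning {bound}-bound")

Even this is only a lean, not proof - as noted above, RenderTime itself seems to move with the CPU.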
 
Thank you, yeah; I am planning on swapping the RAM out for something that is compatible with EXPO, to boost the speed a bit and see if that makes a difference. It seems like a much larger margin than in all the other benchmarks I have seen comparing the two.
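
For anyone wanting to verify the RAM is actually running at its XMP/EXPO speed without rebooting into the BIOS, a Windows-only sketch using the old wmic tool (deprecated, but still present on most systems): Speed is the module's rated value, ConfiguredClockSpeed is what it is currently running at.

import subprocess

# Windows-only: query physical memory speeds via WMI.
# If ConfiguredClockSpeed sits at a JEDEC default (e.g. 2133 for DDR4),
# the XMP/EXPO profile is not applied.
out = subprocess.run(
    ["wmic", "memorychip", "get", "Speed,ConfiguredClockSpeed"],
    capture_output=True, text=True,
)
print(out.stdout)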
 

So looking at this... AM4 users on CPUs prior to Zen 3 maybe shouldn't bother paying extra for a 5800X3D when the 5800X is so close? It's a shame there seem to be no 5700X ratings in these charts, because it would probably be close, and even cheaper...
 
You have to decide for yourself. I did switch from a 5600X to a 5800X3D and VaM runs smoother. I have to say that all my VR games run way better now. So yes, I think the 3D cache is black magic or something. Still, if you don't need it, you could wait for the 7800X3D.
 
I just upgraded my setup from an 8350K+1070 Ti to a 12400F+3070; both systems have 32 GB 3000 MHz (XMP) RAM, and I am super happy with the results!
 

[Attachments: 8350.png, 12400.png]
Anyone know if Resizable BAR makes much of a difference?
It works (officially) ONLY with some games that Nvidia optimized this option for. Back when I had my old hardware, I tried a mod to trick my Nvidia drivers... a crapware mod that someone was recommending. I don't remember if it was on Reddit or somewhere else; it was made to force compatibility with any game you wanted to apply it to. Crapware (that mod). Sorry.
 
Upgraded from a 3060 Ti to an RX 6900 XT because of my new ultrawide monitor. Pretty happy with the performance. The RX 6900 XT delivered much better results than expected, given AMD's usually weak performance in this game.

I am personally surprised how much the GPU upgrade improved the physics time in all scenes. As far as I have read in this thread, the physics are handled by the CPU.


[Attachments: Benchmark-20230103-123547.png, Benchmark-20230103-142257.png]
 
I noticed today that my GPU utilization drops to about 55% during Baseline 3. It's at about 96-99% in every other scene. Anything on my end that could be causing this?
[Attachment: Benchmark-20230107-171800.png]
 
It's normal. It's the most CPU- / physics-heavy scene.
There is one single thread on the CPU maxed out, so the GPU has to wait for the CPU more.

If you check Task Manager's CPU performance with the graph set to show "Logical processors" during Baseline 3, it is visible.
With the default settings you cannot see it; Task Manager only shows overall core utilization then.
There is always one graph high at a time.
It may switch / jump around a lot, as the scheduler instructs another CPU core to "do stuff" all the time.
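
If you'd rather log this than eyeball Task Manager, here's a small sketch using the third-party psutil package - run it while Baseline 3 is playing and watch for one logical processor pinned near 100% while the average stays low:

import psutil  # third-party: pip install psutil

# Sample per-logical-processor utilization once a second for ~30 s.
for _ in range(30):
    per_core = psutil.cpu_percent(interval=1.0, percpu=True)
    hottest = max(range(len(per_core)), key=lambda i: per_core[i])
    avg = sum(per_core) / len(per_core)
    print(f"avg {avg:5.1f}% | hottest logical processor #{hottest}: {per_core[hottest]:5.1f}%")

The hottest index jumping around while staying near 100% is exactly the scheduler behavior described above.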
 

Ahhh ok - that's where I missed it then. I had a feeling it was a CPU bottleneck, but my CPU TOTAL utilization was only around 60% or lower for that scene. That must be it. I take it only a hardware upgrade will get me around this, then?

Also thank you for the detailed response!
 
I wonder how VaM will run on the newest laptops (13th-gen Intel CPU and RTX 4000 GPU). At first glance they seem very tempting, but I wonder if they will be able to reach RTX 2000- or 3000-series desktop-level performance in VaM. I would like to retire my laptop with its GTX 1060 this year.
 

The old ASUS ROG Mothership with its 200 W TDP RTX 2080 GPU is faster than the 175 W TDP RTX 3080 laptops. The RTX 4080 laptop GPUs will be powered at a 150 W TDP, so in raw GPU performance the 4080 will be a step back. Maybe DLSS-supported games will profit from the new platform, or the better CPU will give some advantage. (The Apple M2 SoC, at a 20 W TDP, is faster in many real-world scenarios than the notebook 3090 Ti GPU. Nvidia's notebook GPU platforms have a brutal rival; they compete against SoCs, not desktop GPUs. Is it really enough to go from 200 down to 150 watts against 20 watts???)
On the other side, the ASUS Mothership has a liquid-metal CPU cooler rated for 45 W TDP, while the average 12900HK laptops come with 35 W TDP CPU coolers. The fastest 13th-gen notebook CPU has only a 45 W theoretical TDP maximum, divided between twice as many cores. I hardly believe it will be a game changer. The gap between desktops and notebooks will be bigger than ever. In high-end rigs the Nvidia desktop is unbeatable, Apple has just overtaken the mobile segment, and AMD plays the leading role in consoles. The only place where there is a real fight is the poor PC consumer class, where Intel Arc, AMD, and Nvidia fight for marketing gains, selling crap...
 
Hi folks, I'm trying to get my head around these results. I've heard people say you should expect around a 50% FPS drop between desktop and VR, but I've got a massive drop. In fact, it seemed to sit rock solid at 30 FPS for the first few tests, almost as if it were capped.
What am I missing?!
Thanks in advance.
[Attachments: Benchmark-20230110-163128.png, Benchmark-20230110-170040.png]
 