Benchmark Result Discussion

Hi,


Here are my benchmarks:
1080p
View attachment 111750

1440p
View attachment 111748

4k
View attachment 111751

My System:
R9 5900X (boosts up to 5.125 GHz)
32 GB DDR4-3200 CL16/18/18 T1
3090 from Zotac (110% power, +140 MHz core, +1000 MHz VRAM)
2 TB Gigabyte Aorus NVMe @ PCIe 4.0 x4 (for games only)

CPU and GPU are watercooled with a MoRa 420 Radiator and 4 Noctua 200mm Fans.

It looks "ok", but Baseline 3 sucks. I have no idea why my system is so slow compared to others. Is my 3200 MHz RAM holding me back?
I know for a fact that Ryzen 5xxx really benefits from faster RAM; 3600 CL16 is often recommended. Whether the upgrade would be financially viable given the smaller uplift over 3200, I don't know...

Looking at the perf figures, your times look very good and total time looks spot on for that level of processor. You're getting 15-20 fps over a 3080, which is probably about right too, so I'd say it's pretty much par for the course and not much to worry about.
 
Yes, something's causing the physics time to be a good deal higher than expected, but only on Baseline 3; the others look pretty much as expected. How "clean" is your VaM install? I have seen thousands of vars and lots of loaded morphs (var or local files) cause a significant loss of fps.
 
Nice one! At least that gives you a controlled result, and hopefully it's just file clutter causing the erroneous results.
 
The AMD Ryzen 7 5800X3D CPU is near!
If somebody has the 5800X3D, please post a 1080p Benchmark Version 3 result from a clean VaM install.

Compared to the 5800X the 5800X3D has:
  • no overclocking support! boo!
  • 96 MB L3 cache (instead of 32 MB)
  • 200 MHz less boost (4.5 GHz instead of 4.7 GHz) to stay within thermal limits
The cache is key to 'potentially' increasing its performance.
In Cinebench the 5800X3D scores lower than a 5800X due to the lower boost.
However!!!
... in Shadow of the Tomb Raider it smacked the absolute s*it out of a 12900KS: 200 FPS on the 12900KS vs. 228 FPS on the 5800X3D!
5800X3D.jpg
Damn! Competition is a nice thing! Both want the performance crown badly.
Performance is very dependent on the type of workload.

Now the big question: How does it perform in VAM? Potato or Killer?

@HiddenSign
Hey, nice hardware! Love the MoRa and the 5.125 GHz CPU OC! Can't reach that with my 5800X on air. ;)
I've started using Hydra by 1usmus with an undervolting profile on my CPU for power efficiency.
Indeed a weirdly bad Baseline 3 physics time for a 5900X. Hope you get better results with a clean VaM install.

Don't think it's the 3200 RAM; that score difference is too big. I've dropped my daily 24/7 settings to a 3200 CL16 OC too.
Had 4x16 GB running at a 3333 CL16 OC. It was stable, but felt too close to the red line.
My slowest modules are only rated for 3000. Did multiple runs of 3200 OC vs 3333 OC; the results were within margin of error.
I guess CL14 vs CL16 would have a bigger impact, but I cannot test it with my potato RAM; CL14 fails to boot, as expected.

Comparing my results with yours, the Zotac seems slower in render times too.
Does the Zotac 3090 run at full PCI-Express speed under load? GPU-Z can verify this.
Given your NVMe M.2 @ PCI-Express 4.0 you should have an X570 mainboard, so the GPU slot should be PCI-Express x16 Gen 4.0.
We both have watercooled 3090s, so they should perform similarly. The highest stable clock I get is 2.1 GHz on the GPU core and +1500 on the VRAM.
Unfortunately I do not remember the GPU settings I did run back then.
 
I am really looking forward to the 5800X3D, and the VR rig is waiting (not the PC above). It has a 3900X at the moment.

My 3090 is a turd (fast compared to anything else, but slow for a 3090). I had to flash a BIOS from a different Zotac card to raise my power limit by 10%. It took me ages to tune it to its current state. Out of the box, my 3080 Ti FE was faster than the stock 3090. But it was the only card available at the time, and I got it for MSRP.
When I benched 1080p I noticed the GPU was not at 100%, so I have a bottleneck elsewhere. The difference between 1080p and 1440p is very small, so maybe I was in a CPU limit of some sort.

Yes, it runs at 4.0 x16. But maybe it is my PCIe config. I use all available PCIe lanes: GPU x16, 2x NVMe 4.0 x4, a 5 GBit NIC and a sound card.

Other benchmarks:
TimeSpy: 19094
TimeSpy extreme: 10229
Port Royal: 14132
not bad, but could be better.

But first the test with a clean install.
 
I can't get excited about the AMD 5800X3D when it's an obvious experimental release to compete with 12th-gen Intel. The Zen 4 AM5 releases this fall intrigue me, though, matched with an Nvidia RTX 4090; that's when my fun money will be applied to a new build. Currently running an AMD 3900X and an Nvidia RTX 2080 Super, and it's fine for now. If MeshedVR ever releases 2.0 in a manner that lets creators create, then we'll focus on the optimal hardware for that platform and talk about converting the best looks/scenes to a better-optimized engine.
 
I bought all the HBM GPUs from AMD (I still have my R9 Fury and Vega 56; sold the VII because of the crazy amount of money it made on eBay). They were pretty much experimental, but I like that kind of stuff.

Less turdy 1080P
1080p clean.png

It seems to make a huge difference how much stuff you have in your install...
 
Good news after this day, the forum is alive... I got my 12700F and 12400F. Went the 12700 way, but it failed to start, giving a memory error from the mobo. Secretly I expected it, because I salvaged two different modules, but strangely it also failed to boot with only one module. I bought new memory from the ASUS recommended list, but it just can't fit because of the shitty mobo; the Intel cooler hides the first memory slot. Why the hell is it the officially recommended memory? After one day of debugging I figured out my HDMI cable was broken. Changed the cable, and finally started downloading VaM :)
 

No OC, everything at default, latest drivers
Cheapest H610 mobo
Two 2666 CL16 memory modules, not a matched pair
Samsung 970 Evo SSD

I'm more than pleased with the results.
 

Attachments: Benchmark-20220408-204035.png, Benchmark-20220408-211411.png
No OC, everything at default, latest drivers
Cheapest H610 mobo
Two 2666 CL16 memory modules, not a matched pair
Samsung 970 Evo SSD

I'm more than pleased with the results.

That's the stuff right there; it looks like it's right in line with what I'd expect a 3080 to do! 👌
 
No OC, everything at default, latest drivers
Cheapest H610 mobo
Two 2666 CL16 memory modules, not a matched pair
Samsung 970 Evo SSD

I'm more than pleased with the results.
Please get at least 3200 MHz RAM. It's the optimum; faster is a little better, but slower holds back performance. Other than that: enjoy the power of 12th-gen Intel. That's the power of single-core performance. :)
 
i7 10850K
32 GB DDR4 RAM 3000 MHz CL15
RTX 3070 FE

Seems a bit low. What do you guys think?
 

Attachment: New.png
i7 10850K
32 GB DDR4 RAM 3000 MHz CL15
RTX 3070 FE

Seems a bit low. What do you guys think?
Your results are roughly 30% lower than mine; your CPU is about 30% and your GPU about 25% slower. I think your results could improve with tweaking, but it won't give you a significant boost.
 
EDIT: Update moved >HERE <
I have updated the CPU comparison table.
The latest 5600X results are weird; they are all over the place. The CPU is clearly capable of doing Baseline 3 physics at around 5 ms avg., but the results by @ruinedv3 and @Jiraiya are around 10 ms, which is very bad. Bloated VaM installations?
The 12700K does great! Physics calculations seem to be unaffected by slow 2666 CL16 RAM, based on Tomb's results.

@Yankees1550 Overall looks fine to me for RTX 3070. I'd say it's all good and within margin of error.

Edit: @HolySchmidt Yeah, you're probably right. I thought about moving the chart too; wasn't sure. I'll leave it there for now and make a new topic if I do another update. I'm happy it helped me and others get a better overview. I have seen people on Discord use it too.
 
You should make a whole new topic for the chart, like "Hardware overview charts for VaM 1.x", because this is kind of important for every newbie, and the results of next-gen CPUs are important for long-time users as well. Thanks for the overview so far! :)

This thread should be reserved for benchmark results only; then we would have another thread just for discussion.
 
I have updated the CPU comparison table.
The latest 5600X results are weird; they are all over the place. The CPU is clearly capable of doing Baseline 3 physics at around 5 ms avg., but the results by @ruinedv3 and @Jiraiya are around 10 ms, which is very bad. Bloated VaM installations?
The 12700K does great! Physics calculations seem to be unaffected by slow 2666 CL16 RAM, based on Tomb's results.

@Yankees1550 Overall looks fine to me for an RTX 3070. I'd say it's all good and within margin of error.
It is a 12700F :) The AMD results are a little bit strange, because the Intel CPU replaced a Ryzen 5 2400G in my config, and my Ryzen performed better in the baselines than the Ryzen CPUs in the chart.
 
I have updated the CPU comparison table.
The latest 5600X results are weird; they are all over the place. The CPU is clearly capable of doing Baseline 3 physics at around 5 ms avg., but the results by @ruinedv3 and @Jiraiya are around 10 ms, which is very bad. Bloated VaM installations?
The 12700K does great! Physics calculations seem to be unaffected by slow 2666 CL16 RAM, based on Tomb's results.

@Yankees1550 Overall looks fine to me for an RTX 3070. I'd say it's all good and within margin of error.

My RAM is 3600 CL16, if you want to update that in the comparison ;)
 
Could someone please explain something that might be basic?
I tested my Ryzen 6900HS + 3700S configuration and see that render and script time are 1-2 ms, while physics time is about 10 ms, with a 25 ms exception for the Baseline 3 benchmark.
People say that physics time is CPU time, but that would mean there is no point in upgrading the GPU, since 80-90% of the time is CPU time. Is that correct?
At the same time I have seen a contradictory result in this thread, where a person replaced just the GPU, going from a 3080 to a 3090, and got a decrease in physics time.
I am confused, please help :)!
 
Could someone please explain something that might be basic?
I tested my Ryzen 6900HS + 3700S configuration and see that render and script time are 1-2 ms, while physics time is about 10 ms, with a 25 ms exception for the Baseline 3 benchmark.
People say that physics time is CPU time, but that would mean there is no point in upgrading the GPU, since 80-90% of the time is CPU time. Is that correct?
At the same time I have seen a contradictory result in this thread, where a person replaced just the GPU, going from a 3080 to a 3090, and got a decrease in physics time.
I am confused, please help :)!
Physics time is not pure CPU performance, but it is heavily affected by single-core performance. In the comparison chart you can see how the performance of the two Core i7-8750H 6C systems is affected by the GPU, or how the Ryzen 7 group's performance differs.
 
Physics time is not pure CPU performance, but it is heavily affected by single-core performance. In the comparison chart you can see how the performance of the two Core i7-8750H 6C systems is affected by the GPU, or how the Ryzen 7 group's performance differs.
Where can I find the comparison chart? Do we have a convenient place for that, or is it somewhere in this thread?

What does "heavily affected" mean? 10% GPU / 90% single-core CPU? 50/50? X/Y?

I want to understand how much FPS improvement I can get with a 14" or 15" gaming laptop in Oculus Quest VR.
Or is the answer already known, and it is impossible to get satisfactory FPS in VR with current laptops, since people are struggling even with desktop 3090 GPUs and overclocked 5900X CPUs?
 
Where can I find the comparison chart? Do we have a convenient place for that, or is it somewhere in this thread?

What does "heavily affected" mean? 10% GPU / 90% single-core CPU? 50/50? X/Y?

I want to understand how much FPS improvement I can get with a 14" or 15" gaming laptop in Oculus Quest VR.
Or is the answer already known, and it is impossible to get satisfactory FPS in VR with current laptops, since people are struggling even with desktop 3090 GPUs and overclocked 5900X CPUs?
We just know what the benchmark results show. It is community software, with the benefits and limitations... The chart is two or three pages back in this thread. I replaced my beloved Asus ROG Strix notebook with a PC. When I bought it two years ago it was cheaper than a PC config with the same performance, but now, with the RTX 30 series, Intel 12th gen, the upcoming new AMD CPUs, RTX 40 and so on, desktops are just so powerful that you can buy three faster PCs for the price of a notebook capable of VaM VR gaming. Buy a MacBook and a high-end PC for the price of a gaming laptop, as I have done, if mobility matters :D Browsing and working with the Mac, and fapping with the PC. One of my project managers accidentally started playing fapping material from his notebook in a meeting. There are reasons for not owning just one platform...
 
I think I have found the reason for some of the weird benchmark results ...
TLDR: It is probably a timing / synchronization issue.

Ignore the talk about "lower-end devices". VAM is a special case and not the typical Unity game where physics is simple.
The video shows that all physics calculations are, without a doubt, done on the CPU; it shows the thread on the CPU in the Unity Profiler.

The guy talks about a "variable update loop". That is most likely the call to all Update functions. Every Unity or VAM plugin developer knows this is "variable" because it is called every frame. Every "thing" in Unity implements this Update function, and all "things" must be updated once per frame from the main loop. The Unity engine itself has no limit on how fast this is done, making it "variable" so it can push out as many frames per second as possible.

Physics, on the other hand, is handled with a call to the FixedUpdate function at a fixed frequency. Assuming VAM's default "Auto" physics rate of 72 Hz, it will be called every 13.888 ms (1 s / 72 = 0.013888 s). The benchmark uses a 60 Hz / 16.666 ms physics rate.
This is the point where things get complicated. Unity only needs all Update calls to be done every frame; FixedUpdate calls are not required for every frame. But it does need the FixedUpdate calculations to be done within 13.88 ms to present a frame with correct physics. Yes, I can already see people screaming that this is not correct, grabbing pitchforks and torches, but for simplicity let's ignore overhead and the fact that other calculations must be added to the frame time. In reality the time the CPU has to do its job is much lower than 13.88 ms. With VAM scenes using simulated hair and clothes on multiple persons, it seems to be a common problem that the CPU is unable to keep up.
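The interval arithmetic above is easy to sanity-check. A toy calculation (plain Python, not VAM or Unity code; the rates are the ones mentioned in this thread, and `physics_interval_ms` is a made-up helper name):

```python
# Toy check of the fixed-timestep arithmetic above; not VAM/Unity code.

def physics_interval_ms(rate_hz: float) -> float:
    """Milliseconds between FixedUpdate calls at a given physics rate."""
    return 1000.0 / rate_hz

for rate in (60, 72, 90):
    print(f"{rate} Hz -> {physics_interval_ms(rate):.3f} ms per physics step")
# 60 Hz -> 16.667 ms, 72 Hz -> 13.889 ms
```

So at the benchmark's 60 Hz rate the CPU has a slightly larger budget per physics step than at VAM's default 72 Hz.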

Oversimplified and therefore probably incorrect theory:
If the CPU cannot keep up with the 72 Hz (or selected) physics rate, the GPU becomes "soft" limited by the CPU. The GPU can only push out frames at full speed between the fixed-frequency intervals. Once the interval time is reached, the GPU must wait to present a frame with correctly updated physics. That is the synchronization. Something like this:
Code:
SYSTEM A:
CPU limited, GPU blocked waiting for CPU too, frametime spikes
  fixed interval: |------->|------->|------->|------->|------->|------->|-
   very slow CPU: |--------->|--------->|--------->|--------->|--------->|
   very fast GPU: |->|->|->|->|->|->|->|->|->|->|->|->|->|->|->|->|->|->|-
real phys-update: |----------|----------|----------|----------|----------|
   present frame: F--F--F--wwSF--F--wwwwS-F--wwwwwwS--wwwwwwwwSF--F--F--wS

F = fast frame, physics calculation done or GPU only frame between interval
S = slow synchronized physics frame
w = waiting for CPU

SYSTEM B:
GPU limited, smooth frametimes
  fixed interval: |------->|------->|------->|------->|------->|------->|-
   very fast CPU: |->|->|->|->|->|->|->|->|->|->|->|->|->|->|->|->|->|->|-
      decent GPU: |--->|--->|--->|--->|--->|--->|--->|--->|--->|--->|--->|
real phys-update: |---------|---------|---------|---------|----|---------|
   present frame: F----F----F----F----F----F----F----F----F----F----F----F
Due to the synchronization, reaching a certain time threshold would be critical.
Attaching a profiler to VAM would be required to prove that, but then the profiler itself would probably skew the time-critical results.
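The stall behaviour in the diagram can be mimicked with a tiny toy model. This is a sketch under big assumptions (one serial CPU physics step per interval, instant frame presentation, zero overhead); the numbers and the `present_times` helper are made up for illustration and this is not how Unity actually schedules work:

```python
# Toy model of "GPU waits for physics": a frame that crosses a physics
# deadline cannot be presented until the CPU finished that physics step.

def present_times(phys_interval, cpu_step, gpu_frame, duration):
    """Return wall-clock times (ms) at which frames are presented."""
    frames = []
    t = 0.0
    deadline = phys_interval      # next fixed physics deadline
    phys_ready = cpu_step         # when the first physics result is done
    while t < duration:
        t += gpu_frame            # GPU renders one frame
        if t >= deadline:         # this frame needs the new physics state...
            t = max(t, phys_ready)        # ...so stall if the CPU isn't done
            deadline += phys_interval
            phys_ready = t + cpu_step     # CPU starts the next step now
        frames.append(t)
    return frames

def deltas(times):
    """Frame-to-frame presentation intervals."""
    return [b - a for a, b in zip(times, times[1:])]

# Fast CPU (5 ms step < 13.9 ms interval): smooth, constant frame times.
fast = deltas(present_times(13.889, 5.0, 3.0, 100.0))
# Slow CPU (20 ms step > interval): frame-time spikes, like SYSTEM A.
slow = deltas(present_times(13.889, 20.0, 3.0, 100.0))
print("fast max frame delta:", max(fast))
print("slow max frame delta:", max(slow))
```

In the slow case the spikes grow as physics falls further behind each deadline, which matches the "S" and "w" pattern in the diagram; in the fast case every frame delta stays at the GPU's render time.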

Another important thing to keep in mind: the benchmark runs with a physics cap of 20.
It's unclear to me how a cap above 1 affects performance if the CPU can (but does not have to) do multiple physics calculations per physics frame.
This could result in fewer or more synchronization issues.
CPUs are probably struggling to do even one pass in a physics-heavy 2- or 3-person scene. Again, profiling would be needed to prove this.
Enough speculation; too many unknown details.
 