Could 3D V-Cache improve Intel CPU performance?

One question that comes up repeatedly in HUB Q&A sessions is, "Why doesn't Intel create its own version of 3D V-Cache?" It's an interesting question, especially given how much 3D V-Cache has benefited AMD's Ryzen processors, even though the technology comes with some drawbacks.

Those drawbacks include lower core clock speeds, higher power consumption, and higher operating temperatures. For Ryzen CPUs the clock speed penalty isn't a big deal, as these parts aren't clocked particularly high to begin with, and the performance gained from the larger L3 cache more than offsets the lower clocks. Power consumption isn't a major concern either, and while thermals can be an issue on Ryzen, throttling is easy enough to avoid.

Intel's CPUs, particularly the current 13th and 14th-gen Core series, are a different story: their power budget is already pushed close to the limit. That has been a major talking point over the past few years and has recently come to a head.

It's well known that Intel leans heavily on clock frequency to extract maximum performance from its parts. So the question is whether more L3 cache could ease that pressure, allowing lower clock speeds and therefore lower power consumption, while also pushing gaming performance beyond current levels.

To gauge how much extra L3 cache might help, we revisited the kind of testing we previously ran on the 10th-gen Core series, this time using the newer 14th-gen parts (Raptor Lake). In essence, it's a cores vs. cache benchmark.

We took the Core i9-14900K, Core i7-14700K, and Core i5-14600K, disabled the E-cores entirely, locked the P-cores at 5 GHz with the ring bus at 3 GHz, and then tested three configurations: 8 cores on the 14900K and 14700K, 6 cores on all three processors, and 4 cores on all three processors.
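To make the test matrix explicit, here is a minimal sketch that simply enumerates the configurations described above. The part names and L3 capacities match the retail specs covered in the next paragraph; the script itself is purely illustrative and is not the tooling used for the actual benchmark runs.

```python
# Illustrative only: enumerate the cores-vs-cache configurations described above.
# All CPUs run with E-cores disabled, P-cores locked at 5 GHz, ring bus at 3 GHz.

cpus = {
    "Core i9-14900K": 36,  # L3 cache in MB
    "Core i7-14700K": 33,
    "Core i5-14600K": 24,
}

# 8-core runs: only the 14900K and 14700K (the 14600K has just 6 P-cores)
# 6-core and 4-core runs: all three processors
core_counts = {
    8: ["Core i9-14900K", "Core i7-14700K"],
    6: list(cpus),
    4: list(cpus),
}

for cores, models in core_counts.items():
    for model in models:
        print(f"{model}: {cores} P-cores @ 5 GHz, {cpus[model]} MB L3")
```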

Back then, this approach gave us valuable insight into how the 10th-gen parts compared and showed that the performance gains in games were driven primarily by L3 cache. However, the 10th-gen series had considerably less L3 cache: just 20 MB for the Core i9, 16 MB for the i7, and 12 MB for the i5. Today, the 14600K alone has more L3 cache than the 10900K at 24 MB, while the 14700K has 33 MB and the 14900K 36 MB.
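If you want to see how much L3 your own system exposes, Linux reports the full cache hierarchy through sysfs; the sketch below assumes a Linux machine (on Windows, tools such as CPU-Z or HWiNFO report the same information).

```python
# Illustrative sketch: read the host CPU's cache hierarchy from Linux sysfs.
# This only reports the machine it runs on; it is not part of the benchmark setup.
from pathlib import Path

cache_dir = Path("/sys/devices/system/cpu/cpu0/cache")
for index in sorted(cache_dir.glob("index*")):
    level = (index / "level").read_text().strip()
    ctype = (index / "type").read_text().strip()
    size = (index / "size").read_text().strip()
    shared = (index / "shared_cpu_list").read_text().strip()
    print(f"L{level} {ctype}: {size} (shared by CPUs {shared})")
```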

Benchmark Results

Starting with the Assassin's Creed Mirage results, this data looks very different from our earlier 10th-gen testing. The results are surprising: they are CPU-limited, yet increasing the core count from 6 to 8, or adding L3 cache capacity, doesn't improve performance. Instead, the 6 and 8-core configurations appear to be held back mainly by the 5 GHz clock frequency, suggesting that in this title the primary limitation of the 14th-gen architecture is frequency, not L3 cache capacity.

Dropping to just 4 cores does hurt performance, with cache capacity playing only a minor role. Still, it's interesting to see the 14900K lose less than 20% in this test with only half of its P-cores active.

Helldivers 2 loses some performance when dropping from 8 to 6 active cores, though only a 7% dip in the average frame rate along with a slight reduction in the 1% lows. Scaling down to just 4 cores causes a substantial drop, particularly in the 1% lows, which fall by just over 40%, resulting in a poor gaming experience.

Ratchet & Clank, much like Assassin's Creed Mirage, shows little difference between the 6 and 8-core configurations, again suggesting that core clock frequency is the primary bottleneck. Only when the active P-core count is cut to 4 does performance degrade, and the decline is essentially the same regardless of L3 cache capacity.

The Spider-Man Remastered results are interesting for a few reasons. With 8 active cores, the 14900K and 14700K deliver nearly identical results at 5 GHz, confirming that clock frequency is the main bottleneck here. When dropping to 6 cores, however, cache appears to matter a little more; the margins aren't large, but the 14700K trails the 14900K by 3%, for example.

Surprisingly, with just 4 cores the results look frequency limited once again, with performance dropping by only 22% compared to the 8-core configuration.

Enhanced Performance at 5.7 GHz

Before wrapping up this testing, we re-ran a selection of benchmarks with the P-cores clocked 14% higher at 5.7 GHz and the ring bus raised 33% to 4 GHz.
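For reference, those percentages fall directly out of the clock ratios; here's a quick sanity check (purely arithmetic, not part of the benchmark tooling):

```python
# Quick sanity check of the frequency uplifts quoted above.
p_core = (5.7 - 5.0) / 5.0 * 100   # 5.0 GHz -> 5.7 GHz
ring   = (4.0 - 3.0) / 3.0 * 100   # 3.0 GHz -> 4.0 GHz
print(f"P-core uplift: {p_core:.0f}%")    # ~14%
print(f"Ring bus uplift: {ring:.0f}%")    # ~33%
```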

We did this in anticipation of criticism that the testing is pointless because 14th-gen processors can clock well above 5 GHz, making that figure unrealistic. But the point stands: the bottleneck is clearly clock frequency and, in some cases, core count, rather than cache capacity. Raising the clock speeds of these parts will of course improve performance, but it doesn't reveal anything new beyond that.

Re-testing several games at these higher clocks produced the same trends, confirming that the 14th-gen limitation stems primarily from clock speed; extra cache doesn't appear to help.

Key Findings

These results were quite unexpected, differing greatly from the data we recorded with 10th-gen processors three years ago. Back then, we found that when matched at the same frequency, there wasn't much difference between the Core i5, i7, and i9 in most games, and any gaps that did appear could largely be attributed to L3 cache.

With the 14th-gen series, however, cache capacity appears to matter very little; in most cases, 24 MB was enough. Oddly, extra cache was often most beneficial when fewer cores were active, though not consistently.

What did noticeably influence performance was core count: going from 4 to 6 cores often made a substantial difference, although none of these parts actually ships with just 4 P-cores. In a few cases, going from 6 to 8 cores also produced a noticeable gain.

Based on this data, adding 3D V-Cache to these Intel 14th-gen Core CPUs would likely be counterproductive, potentially reducing gaming performance (at least in today's games), which is rather surprising.

We suspect that moving to 10 or even 12 P-cores would deliver only a marginal uplift over 8 cores in today's games. That leaves clock speed as the only real path forward for Intel with its existing architecture, which explains the trajectory we've seen so far.

Shopping Quick Links:
  • Intel Core i7-14700K at Amazon
  • Intel Core i9-14900K at Amazon
  • Intel Core i5-14600K at Amazon
  • AMD Ryzen 7 7800X3D at Amazon
  • AMD Ryzen 9 7950X3D at Amazon
  • AMD Ryzen 9 7900X at Amazon
