Xeon E7 v3 System and Memory Architecture

So, the Xeon E5 "Haswell EP" and the Xeon E7 "Haswell EX" are the same chip, but the latter has more features enabled and, as a result, it finds a home in a different system architecture.

Debuting alongside the Xeon E7 v3 is the new "Jordan Creek 2" buffer chip, which offers support for DDR4 LR-DIMMs or buffered RDIMMs. If necessary, however, it is still possible to use the original "Jordan Creek" buffer chips with DDR3, giving the Xeon E7 v3 the ability to be used with either DDR3 or DDR4. Meanwhile, just like its predecessor, the Jordan Creek 2 buffer can run either in lockstep (1:1) or in performance mode (2:1). If you want more details, read our review of the Xeon E7 v2 or Intel's own comparison.

To sum it up, in lockstep mode (1:1):

  1. The Scalable Memory Buffer (SMB) works at the same speed as the RAM, at most 1866 MT/s.
  2. You get higher availability, as the memory subsystem can recover from two sequential DRAM failures.
  3. You get lower bandwidth, as the SMB is limited to 1866 MT/s...
  4. ...but also lower energy consumption for the same reason (about 7 W instead of 9 W).

In performance mode (2:1): 

  1. You get higher bandwidth, as the SMB runs at 3200 MT/s (Xeon E7 v2: 2667 MT/s), twice the speed of the memory channels: the SMB combines two memory channels of DDR4-1600.
  2. Energy consumption is higher, as the SMB runs at full speed (9 W TDP, 2.5 W idle).
  3. The memory subsystem can recover from one device/chip failure, as the data can be reconstructed in the spare chip thanks to CRC.

This is a firmware option, so you choose once whether being able to lose two DRAM chips is worth the bandwidth hit.
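The bandwidth side of that trade-off can be sketched with some back-of-the-envelope arithmetic. This is only an illustration: the 8-byte channel width and ideal scaling are our assumptions, and real SMI2 link efficiency will differ.

```python
# Rough peak-bandwidth comparison of the two SMB modes.
# Assumes an 8-byte (64-bit) DDR channel and ideal scaling -- illustrative only.

BYTES_PER_TRANSFER = 8  # 64-bit DDR data path (assumption)

def lockstep_bw_gbs(mt_per_s=1866):
    # Lockstep (1:1): the SMB runs at DRAM speed; the two channels behind it
    # act as one logical channel, so the SMB link caps the bandwidth.
    return mt_per_s * BYTES_PER_TRANSFER / 1000

def performance_bw_gbs(dram_mt_per_s=1600):
    # Performance (2:1): the SMB runs at twice DRAM speed (3200 MT/s here),
    # multiplexing two independent DDR4-1600 channels.
    return 2 * dram_mt_per_s * BYTES_PER_TRANSFER / 1000

print(f"lockstep:    {lockstep_bw_gbs():.1f} GB/s per SMB")     # ~14.9 GB/s
print(f"performance: {performance_bw_gbs():.1f} GB/s per SMB")  # ~25.6 GB/s
```

Under these assumptions, performance mode offers roughly 70% more peak bandwidth per buffer, which is the gain you give up when opting for lockstep's stronger error recovery.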

Xeon E7 vs E5

The different platform/system architecture is the way the Xeon E7 differentiates itself from the Xeon E5, even though both chips use what is essentially the same die. Besides supporting 4- and 8-socket configurations, the E7 supports much more memory. Each socket connects via Scalable Memory Interconnect 2 (SMI2) to four "Jordan Creek 2" memory buffers.

Jordan Creek 2 memory buffers under the black heatsinks with 6 DIMM slots

Each of these memory buffers supports 6 DIMM slots. Multiply four sockets by four memory buffers and six DIMM slots and you get a total of 96 DIMM slots. With 64 GB LR-DIMMs (see our tests of Samsung/IDT based LRDIMMs here) in those 96 DIMM slots, you get an ultra-expensive server with no less than 6 TB of RAM. That is why these systems are natural hosts for in-memory databases such as SAP HANA and Microsoft's Hekaton.
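The capacity math is simple enough to verify directly; the figures below come straight from the text.

```python
# DIMM slot and capacity math for a 4-socket Xeon E7 v3 system,
# using the figures from the text.
sockets = 4
buffers_per_socket = 4   # Jordan Creek 2 SMBs per socket
slots_per_buffer = 6
dimm_gb = 64             # 64 GB LR-DIMMs

slots = sockets * buffers_per_socket * slots_per_buffer
total_tb = slots * dimm_gb / 1024

print(slots)     # 96 DIMM slots
print(total_tb)  # 6.0 TB
```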

There is more, of course. With 48 trillion memory cells, chances are uncomfortably high that one of them will go bad, so you want some excellent reliability features to counter that. Memory mirroring is nothing new, but the Xeon E7 v3 allows you to mirror only the critical part of your memory instead of simply dividing capacity by two. Also new is "multiple rank sparing", which provides dynamic failover of up to four ranks of memory per memory channel. In other words, not only can the system shrug off a single chip failure, but even a complete DIMM failure won't be enough to take the system down.
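The failover behavior of multiple rank sparing can be modeled with a toy sketch. This is purely illustrative: the class and its logic are our invention, not Intel's actual firmware algorithm.

```python
# Toy model of multiple rank sparing: up to four spare ranks per channel
# absorb failed ranks; the channel only degrades once the spares run out.
# Purely illustrative -- not Intel's actual firmware logic.

class Channel:
    def __init__(self, active_ranks=4, spare_ranks=4):
        self.active = active_ranks
        self.spares = spare_ranks

    def rank_failure(self):
        """Return True if the channel survives (a spare absorbed the failure)."""
        if self.spares > 0:
            self.spares -= 1   # dynamic failover to a spare rank
            return True
        return False           # no spares left: the failure is not absorbed

ch = Channel()
survived = [ch.rank_failure() for _ in range(5)]
print(survived)  # first four failures absorbed, the fifth is not
```

Since a DIMM typically holds at most four ranks, being able to spare four ranks per channel is what lets the platform ride out a whole-DIMM failure.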


  • Dmcq - Saturday, May 9, 2015 - link

    Well they'll sell where performance is an absolute must but they won't pose a problem to Intel as they won't take a large part of the market and they'd keep prices high. I see the main danger to Intel being in 64 bit ARMs eating the server market from below. I suppose one could have cheap and low power POWER machines to attack the main market but somehow it just seems unlikely with their background.
  • Guest8 - Saturday, May 9, 2015 - link

    Uh did you see Anandtech's reviews on the latest ARM server? The thing barely keeps up with an Avoton. Intel is well aware of ARM based servers and has preemptively disARMed the threat. If ARM could ever deliver Xeon class performance it would look like Power8.
  • melgross - Saturday, May 9, 2015 - link

    Chip TDP is mostly a concern for the chip itself. Other areas contribute far more waste heat than the CPU does.
  • PowerTrumps - Saturday, May 9, 2015 - link

    Power doesn't need to have a TDP of 1000W but 200W is nothing given the performance and efficiency advantage of the processors and Power hypervisor. When you can consolidate 2, 4 and 10 2 socket Intel servers into 1 x 2 socket Power8 server that is 10 x 2 x 135W = 2700 overall Watts vs 400W with the Power server. Power reduces the overall energy, cooling and rack space consumption.
  • KAlmquist - Saturday, May 9, 2015 - link

    $4115 E5-2699 (18C, 2.3 Ghz (3.6 Ghz turbo), max memory 768 GB)
    $5896 E7-8880 (18C, 2.3 Ghz (3.1 Ghz turbo), max memory 1536 GB)

    That's a big premium for the E7--enough that it probably doesn't make sense to buy an 8 socket system just to run a bunch of applications in parallel. The E7 makes sense only if you need more than 36 cores to have access to the same memory.
  • PowerTrumps - Saturday, May 9, 2015 - link

    I really enjoyed the article as well as the many data and comparison charts. It is unfortunate that most of your statements, assessments and comparisons about Power and with Intel to Power were either wrong, misleading, not fully explained or out of context. I invite the author to contact me and I will be happy to walk you through all of this so you can update this article as well as consider a future article that shows the true advantage Power8 and OpenPower truly has in the data center and the greater value available to customers.
  • KAlmquist - Saturday, May 9, 2015 - link

    I would be surprised if anybody working for Anandtech is going to contact an anonymous commentator. You can point out portions of the article that you think are wrong or misleading in this comment section.

    To do a really good article on Power8, Anandtech needs a vendor to give Anandtech access to a system to review.
  • PowerTrumps - Sunday, May 10, 2015 - link

    Admittedly I assumed when I registered for the PowerTrumps account some time ago I used a email address which they could look up. But, your point is taken. Brett Murphy with Software Information Systems (aka SIS) www.thinksis.com. Email at bmurphy@thinksis.com. If I pointed out all of the mistakes my comment would look like a blog which many don't appreciate. I have my own blog for that. I like well written articles and happy to accept criticism or shortcomings with IBM Power - just use accurate data and not misrepresent anything. Before Anandtech reviews a Power8 server, my assessment is they need to understand what makes Power tick and how it is different than Intel or SPARC for that matter. Hope they contact me.
  • thunng8 - Sunday, May 10, 2015 - link

    I too would like a more detailed review of the Power8.

    Some of the text in the article made me laugh on how wrong they are.

    For example, the great surprise that Intel is not on top.. Well, AnandTech has never tested any Power systems before..

    And it is laughable to make any conclusions based on running of 7zip. Just about any serious enterprise server benchmark shows a greater than 2x performance advantage per core in favor of Power compared to the best Xeons. So that 50% advantage is way less than expected.

    Btw Power7 for most of its life bested Xeon in performance by very large margins. It is just now that IBM has opened up Power to other vendors that makes it exciting.
  • JohanAnandtech - Monday, May 11, 2015 - link

    I welcome constructive criticism. And yes, we only had access to an IBM Power8 dev machine, so we only got a small part of the machine (1 core/2GB).

    "Some of the text in the article made me laugh on how wrong they are."
    That is pretty low. Without any pointer or argument, nobody can check your claims. Please state your concerns or mail me.
