Over the weekend, during the 2018 ACM/IEEE Supercomputing Conference, Intel unveiled the "Cascade Lake Advanced Performance" variant of Xeon processors, aimed at data center use. The new CPUs are intended as high-performance complements to the Cascade Lake-SP server processors, and a direct competitor to AMD's Epyc series of CPUs, which have threatened Intel's near-monopoly on the server CPU market.
The Cascade Lake-AP CPUs are still built on Intel's 14nm manufacturing process, as the company's plans for 10nm CPUs have faced repeated delays. (To date, the only 10nm CPU from Intel is the Cannon Lake Core i3-8121U, a 15W part found in budget notebooks.) The high-end model will feature 48 cores, achieved by using a multi-chip package. Intel confirmed to AnandTech that the two silicon dies are connected by UPI (Ultra Path Interconnect), which allows for 10.4 GT/s per link (the number of links is unconfirmed), rather than Intel's own highly touted EMIB (Embedded Multi-Die Interconnect Bridge), which would perform better.
SEE: Hardware purchasing task list (Tech Pro Research)
In effect, this design is an attempt to cram the theoretical performance of a 4P server into a 2P server, though achieving this feat will inevitably demand engineering compromises. While the power requirements and communication latency between two processor dies on the same package are likely lower than between two processors on the same server, the task of connecting four discrete processor dies across two packages without degrading performance is a significant undertaking.
While Intel has tipped the Cascade Lake-AP series as having 12 DDR4 channels, which it claims is the most of any available CPU, information about maximum memory capacity, frequencies, and available variants remains unknown. Likewise, details about TDP per processor and available PCIe lanes went undisclosed. Intel plans to make these processors available in early 2019, though it also did not disclose pricing. Intel did note in the press release that Cascade Lake-AP performed up to 3.4 times faster in Linpack and 1.3 times faster in Stream Triad than AMD's Epyc 7601, though AMD had last month publicly called out Intel for "questionable" configurations used in published benchmarks for the enthusiast desktop-grade Core i9-9900K.
The most significant advancement Cascade Lake-AP brings to high-performance computing is support for Optane DIMMs, as this is the first series of processors from Intel to support the technology. Optane, also known as 3D XPoint, is faster than NAND-based SSDs, though slower than conventional DRAM. It is, however, significantly denser than DRAM, allowing Intel to fit 512GB in a single module. Database applications stand to benefit the most from Optane DIMMs, as in-memory computing, or simply storing larger working sets in memory, would allow for significantly faster transaction speeds, as writes do not need to be immediately pushed through PCIe-attached solid-state storage, removing a sizable bottleneck.
Because Optane DIMMs retain the nonvolatile properties inherent to solid-state drives, they also improve performance on reboots. Intel noted in May, in an announcement about availability of Optane DIMMs, that "for planned restarts of a NoSQL in-memory database employing Aerospike Hybrid Memory Architecture, Intel Optane DC persistent memory delivers a minutes-to-seconds restart speedup compared to DRAM-only cold restart."
The big takeaways for tech leaders:
- Intel's Cascade Lake-AP series of server CPUs is designed to match the performance of a 4P server in a 2P configuration.
- These are the first CPUs to support Intel's Optane DIMMs, high-density NVRAM modules useful for in-memory computing tasks, particularly databases.