02:05 | NVIDIA Counters AMD Smart Access Memory
In a discussion with GamersNexus, NVIDIA informed us that the company would be adding its own PCIe resizable Base Address Register (BAR) feature. This would be a direct counter to AMD’s Smart Access Memory, or SAM, which allows the CPU to address the full GPU framebuffer on platforms combining Ryzen 5000 CPUs, RX 6000 GPUs, and 500-series chipsets. NVIDIA, however, says that this feature is a part of the PCIe specification, and will be implementing it on both Intel and AMD platforms.
NVIDIA provided the following public statement:
“The capability for resizable / larger BAR is part of the PCI Express specification. NVIDIA hardware supports this functionality and will enable it on Ampere GPUs through future software updates. We have it working internally and are seeing similar performance results. Stay tuned.”
In a follow-up question, GN asked about availability on Intel and AMD and was told the following:
“Yes, we’re working with Intel now on getting it enabled on Z490, and as long as AMD doesn’t lock us out, it should work on any of the 6 motherboards they listed as being compatible with Smart Access Memory so far.
The PCI Express Spec section 7.8.6 specifies it. This is from Revision 4.0 version 1.0, September 27, 2017.
“7.8.6 Resizable BAR Capability — The Resizable BAR Capability is an optional capability that allows hardware to communicate resource sizes, and system software, after determining the optimal size, to communicate this optimal size back to the hardware. Hardware communicates the resource sizes that are acceptable for operation via the Resizable BAR Capability and Control registers. Hardware must support at least one size in the range from 1 MB to 512 GB.”
Finally, GN was informed that this is not restricted to PCIe Gen4 and will work on PCIe Gen3 platforms.
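To illustrate the register layout the spec quote describes, here is a minimal sketch (our own illustration, not driver code) that decodes a Resizable BAR Capability register’s supported-size bitmask, assuming the spec’s encoding where bit 4 indicates 1 MB support and each higher bit doubles the size, up to 512 GB at bit 23:

```python
def supported_bar_sizes_mb(cap_register: int) -> list:
    """Decode the supported-size bitmask of a Resizable BAR Capability
    register. Bit n (4 <= n <= 23) set means a BAR size of 2^(n-4) MB
    is supported, covering the spec's 1 MB .. 512 GB range."""
    return [1 << (bit - 4) for bit in range(4, 24) if cap_register & (1 << bit)]

# Hypothetical example: a GPU advertising a 256 MB BAR and a
# full-framebuffer 8 GB BAR
mask = (1 << 12) | (1 << 17)          # bit 12 -> 256 MB, bit 17 -> 8192 MB
print(supported_bar_sizes_mb(mask))   # -> [256, 8192]
```

System software would then pick the largest size the platform can map and write it back via the Control register; in practice this negotiation happens in firmware and the OS, not in application code.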
05:50 | AMD Refutes Ryzen 5000 Paper Launch Allegations
As ever, allegations of a “paper launch” and false-scarcity conspiracy theories continue unabated regarding AMD’s recent Ryzen 5000 launch. AMD’s Ryzen 5000 launch has gone mostly the same as NVIDIA’s RTX 30-series launch, with insatiable demand outstripping supply. Compounding matters further is the fact that these launches are taking place in a year that is anything but normal, where consumer spending is through the roof and customers are also competing with bot scripts for the opportunity to purchase. It’s Black Friday sales volume every day right now, and has been since about March.
If demand wasn’t already high enough, Ryzen 5000 was met with near-universal praise, further spurring interest in AMD’s latest CPUs. Yet, a couple of weeks removed from launch, the CPUs are still hard to find — at MSRP, anyway. This led some users to revisit a tweet regarding a paper launch from AMD’s Frank Azor, in which Azor did what AMD’s Twitter-inclined executives do best: he took jabs at NVIDIA for a lack of availability after the RTX 30 launch.
Now, responding to annoyed potential customers, Azor said the following: “There’s a big difference between a ‘paper launch’ and shipping tons of units but demand exceeds supply,” said Azor in response to one Twitter user.
Azor also took this opportunity to address concerns over whether or not AMD took enough precautions to ensure customers — and not bots — got the product. “Yes, we made a strong effort & succeeded in many cases. It’s a battle that is never completely won but I applaud our teams efforts & those of our partners during this round. We continue to learn & adapt with every launch. We want our products in the hands of their intended users,” said Azor.
This is obvious, of course: It benefits AMD and its competitors alike to gain market share and sell to more users, which also encourages purchases of even more products across the industry as full systems are built. None of these manufacturers wants to sell to bots, which buy only the new silicon and nothing else, and false scarcity remains conspiracy-level theorizing when you’re talking about billion-dollar architectural investments that need to be recouped quickly.
11:37 | MSI Factory Catches Fire
Recently, MSI’s Bao’an factory in Shenzhen suffered a fire — this is actually a factory we visited in March of 2019, so we can show some footage of it. The factory has 10 main SMT (surface-mount technology) lines, each of which is 121 meters long. The lines are responsible for producing 1.6 million motherboards per month and a similar quantity of video cards. The factory’s machinery includes robotic testing and assembly, automated circuit quality control, exhaustive GPU thermal testing, and the SMT lines themselves, which are composed of solder mask and solder machines, solder reflow stations, AOI stations, and pick-and-place machines.
The cause of the fire was not disclosed. The good news is that MSI is reporting that there are no injuries, and no production lines have been damaged. MSI has also released an official statement.
“A fire accident occurred in MSI Baoan factory in Shenzhen on November 5th afternoon. MSI activated its emergency measures and notified the fire department to deal with it immediately. No injuries were caused and the production line was not damaged. In the future, MSI will continue to strengthen the education and training of personnel. At present, all units are operating normally. Thank you for your concern.”
13:52 | Zen 3 Delidded, Die Exposed
Twitter and Flickr user Fritzchens Fritz has delidded (somewhat brutally) a Ryzen 5 5600X to reveal the die and top-mounted circuitry. Fritzchens Fritz has been doing this for some time, dismantling CPUs and GPUs to offer high-resolution photographs of the silicon and die. In his latest excursion with the Ryzen 5 5600X, Fritz inadvertently removed the CCD from the substrate while delidding. For the uninitiated, Ryzen CPUs make use of a soldered IHS, which complicates the delidding process.
Still, the photos, which you can see via Twitter and Flickr, offer a nice view of what underpins Zen 3. Specifically, the Ryzen 5 5600X consists of a single CCD, housing a single CCX. However, you can see the contacts for a second CCD on the die. There’s also the separate I/O die that contains all of Zen 3’s I/O components.
15:22 | Apple’s Arm-based M1 CPUs
Apple has made good on its promise to deliver its first batch of custom Apple Silicon before year’s end. In transitioning away from Intel CPUs, Apple will be developing its own Arm-based processors for its Mac computers, the same as it has been doing with iOS devices (see: A-series SoCs) for years now.
In its reveal, Apple provided a very top-level overview of its new M1 processors, along with some absurd marketing for good measure. Classic Apple. At any rate, at a high level, the new M1 CPU looks very much like a beefed-up A14 Bionic SoC — which isn’t a bad thing, as it makes a natural jumping-off point for Apple and its Mac-oriented CPUs. Apple previously signaled it would aim somewhat low at the beginning of its transition to Arm, opting to outfit entry-level Mac devices with Apple Silicon first and scale upward from there. Thus, that’s what we have: The MacBook Air, the 13” MacBook Pro, and the Mac Mini will be the first Macs endowed with M1 chips.
Much like Apple’s A14 Bionic, Apple’s M1 is manufactured on TSMC’s EUV-based N5 process, and comprises Apple’s “big” Firestorm cores and “little” Icestorm cores. Specifically, the M1 will have 4x Firestorm cores and 4x Icestorm cores, but we really can’t wait for the Stormblood cache and Inferno Fabric that will complete the naming scheme. Some of those words weren’t real products. We’ll let you guess which ones.
Additionally, the M1 will make use of Apple’s 16-core machine learning processor, or “Neural Engine,” as Apple calls it. The M1 will also feature a unified memory architecture, with increased L2 cache and 8x 16-bit LPDDR4X DRAM channels. An 8-core GPU takes up a considerable amount of the 16-billion-transistor budget for the SoC, and all of the chip components have access to a shared SLC buffer.
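Apple hasn’t published the M1’s memory speed, but assuming LPDDR4X-4266 (the A14’s grade — our assumption, not Apple’s spec), the 8x 16-bit channels imply a peak bandwidth that is easy to back out:

```python
def peak_bandwidth_gbs(channels: int, bits_per_channel: int, mts: float) -> float:
    """Peak DRAM bandwidth in GB/s: total bus width in bytes times the
    transfer rate (MT/s converted to GT/s)."""
    bus_bytes = channels * bits_per_channel / 8   # 8 x 16-bit = 16 bytes wide
    return bus_bytes * mts / 1000

# Assumed LPDDR4X-4266 on a 128-bit bus
print(peak_bandwidth_gbs(8, 16, 4266))  # -> 68.256 GB/s
```

That ~68 GB/s figure is contingent on the assumed transfer rate; the shared SLC mentioned above would sit in front of this bus for all of the chip’s IP blocks.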
While Apple and its products aren’t typically within the purview of our audience, this move is an important one. It echoes Apple’s 2005-2006 departure from IBM’s PowerPC ISA to adopt x86-class offerings from Intel. Now, Apple is leaving Intel in favor of the Arm ISA. While Arm has made a few appearances in x86 dominated segments — like servers and workstations — the ecosystem has never had such a broad push into client computing outside of smaller devices and smartphones. If Apple succeeds here, it will have big implications for the CPU landscape and its reigning x86 regime.
18:19 | Intel Announces First Add-in Server GPU Card
Intel has achieved an important milestone in its dGPU journey, as the chip designer is officially announcing its first discrete GPU carved from Intel Xe LP silicon: The Intel Server GPU. This GPU, which takes the shape of an add-in card, is aimed at “high-density, low-latency Android cloud gaming and media streaming,” per Intel’s wording.
The Intel Server GPU is built with 4x Intel Iris Xe Max GPUs, based on the Xe-LP silicon formerly known as DG1. The GPUs are packaged onto an SoC with a 128-bit pipeline and 8GB of on-board memory. The GPUs are built on Intel’s new 10nm SuperFin process, and the Intel Server GPU, also known as the XG310, comes in a full-height, three-quarter-length AIC form factor courtesy of H3C, one of Intel’s manufacturing partners.
According to Intel, the XG310 is now shipping.
19:23 | AMD Offers Zen 4, RDNA 3 Update
In an interview with The Street, AMD’s EVP of Computing and Graphics Business Group, Rick Bergman, dished on a number of topics, ranging from demand for AMD’s mobile CPUs and AMD’s venture into Chromebooks to AMD’s partnership with TSMC and Ryzen 5000 (Zen 3) supply and demand. Interestingly enough, Bergman offered something of a tease for the brewing Zen 4 and RDNA 3.
In response to a question regarding how much of Zen 4’s improvements will be based on TSMC’s 5nm process, or clock speed and core count increases, Bergman had the following to say.
“[Given] the maturity of the x86 architecture now, the answer has to be, kind of, all of the above. If you looked at our technical document on Zen 3, it was this long list of things that we did to get that 19% [IPC gain]. Zen 4 is going to have a similar long list of things, where you look at everything from the caches, to the branch prediction, [to] the number of gates in the execution pipeline. Everything is scrutinized to squeeze more performance out.”
“Certainly [manufacturing] process opens an additional door for us to [obtain] better performance-per-watt and so on, and we’ll take advantage of that as well.”
Bergman also fielded a question regarding the upcoming RDNA 3 architecture and whether AMD would be targeting performance-per-watt improvements similar to those RDNA 2 has brought.
“Let’s step back and talk about the benefits of both. So why did we target, pretty aggressively, performance per watt [improvements for] our RDNA 2 [GPUs]. And then yes, we have the same commitment on RDNA 3.”
“It just matters so much in many ways, because if your power is too high — as we’ve seen from our competitors — suddenly our potential users have to buy bigger power supplies, very advanced cooling solutions. And in a lot of ways, very importantly, it actually drives the [bill of materials] of the board up substantially. This is a desktop perspective. And invariably, that either means the retail price comes up, or your GPU cost has to come down.”
“So [there are] actually a lot of efficiencies…if you can improve your perf-per-watt substantially. On the notebook side, that’s of course even more obvious, because you’re in a very constrained space, you can just bring more performance to that platform again without some exotic cooling solutions…We focused on that on RDNA 2. It’s a big focus on RDNA 3 as well.”
Bergman on implementing Infinity Cache into AMD’s RDNA 2 GPUs.
“On Infinity Cache, it’s somewhat linked to that as well, to a certain degree. If you’ve been in graphics for a long time, you realize there’s a pretty good correlation between memory bandwidth and performance. And so typically, the way you do it is you jack up your memory speed and widen your [memory] bus to open up performance. Unfortunately, both of those things drive up power [consumption].
And so we looked at what we actually did in CPUs, where we do kind of have the equivalent of an L3 cache and so, can we bring that type of technology into our GPUs? We want to look forward and see what architecture will scale going forward. [With] Infinity Cache, the performance benefits, the performance-per-watt benefits, the cost benefits [made it] a pretty easy decision to make….I don’t want to talk about our next generation [of products], but as you can imagine, when you get those benefits, it’ll certainly be on the table for our next generation.”
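Bergman’s reasoning can be sketched with a toy bandwidth model (our numbers, not AMD’s — the hit rate and bandwidth figures below are illustrative assumptions): requests that hit the on-die cache are served at cache speed, and the rest fall through to GDDR6.

```python
def effective_bandwidth_gbs(cache_bw: float, dram_bw: float, hit_rate: float) -> float:
    """Toy model of cache bandwidth amplification: a fraction `hit_rate`
    of memory requests is served at cache speed, the remainder at DRAM
    speed. Real behavior depends on workload and access patterns."""
    return hit_rate * cache_bw + (1.0 - hit_rate) * dram_bw

# Illustrative figures only: ~1940 GB/s cache, 512 GB/s GDDR6, 58% hit rate
print(effective_bandwidth_gbs(1940.0, 512.0, 0.58))  # -> ~1340 GB/s
```

The point of the model is the power argument from the quote: the blended figure exceeds what a wider or faster GDDR6 bus could deliver without driving up board power and cost.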
22:12 | Intel’s Cryo Cooling Tech Surfaces in New Coolers
Intel’s Cryo Cooling technology, which is underpinned by the concept of Thermoelectric Cooling (TEC), made its way into a pair of products: A Peltier-based water block from EKWB, and an AIO from Cooler Master.
Thermoelectric cooling works by way of creating a temperature disparity between two different surfaces — one side gets hot, while the other gets cool — and these are often referred to as “accept” and “reject” plates in the coldplate. Voltage is applied to the TEC device to transfer heat from one side to the other. This type of cooling is by no means new, even to the enthusiast overclocking space, but it has rarely garnered enough support to be implemented in consumer products. We once tested the Phononic Hex 2.0, which was a smaller cooler of this variety, but that cooler wasn’t effective enough to be worth recommending. It’s more commonly found in industrial applications, and Phononic’s parent company is a good example of this: It designs and makes thermoelectric coolers for digital signage that’s displayed in the heat, for example.
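The heat transfer described above is commonly approximated with a standard first-order TEC model; the sketch below uses illustrative (not measured) module parameters to show the trade-off: the Peltier term pumps heat off the cold side, while Joule heating and back-conduction across the module work against it.

```python
def tec_cooling_power_w(seebeck_v_per_k: float, t_cold_k: float, current_a: float,
                        resistance_ohm: float, conductance_w_per_k: float,
                        delta_t_k: float) -> float:
    """Net heat pumped from the cold plate (W) in the standard first-order
    Peltier model: Qc = S*Tc*I - 0.5*I^2*R - K*dT."""
    peltier = seebeck_v_per_k * t_cold_k * current_a   # heat pumped by the junction
    joule = 0.5 * current_a ** 2 * resistance_ohm      # half the I^2R heat flows back
    conduction = conductance_w_per_k * delta_t_k       # leakage across the module
    return peltier - joule - conduction

# Illustrative parameters: net cooling shrinks as the hot/cold gap widens
print(tec_cooling_power_w(0.05, 285.0, 3.0, 2.0, 0.5, 15.0))  # -> 26.25 W
```

Note that the rejected heat (Qc plus the full electrical input) always exceeds the heat pumped, which is why both the EKWB and Cooler Master designs still need a liquid loop behind the TEC plate.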
Intel and its initial partners, EKWB and Cooler Master, are touting the use of Intel’s Cryo Cooling to drive temperatures well below ambient. EKWB is offering the EK-QuantumX Delta TEC, a CPU waterblock replete with a TEC plate and a mounted USB-based controller that monitors for temperature and condensation. Condensation build-up is one byproduct of TEC cooling, so special care has to be taken to insulate the cooler and its components. EKWB further states that the heat generated by the TEC plate is removed via a traditional liquid loop.
Cooler Master has the ML360, which is part of its MasterLiquid AIO line that should be familiar to most. Cooler Master has opted to integrate its TEC plate into the coldplate, sitting underneath the pump block housing. Additionally, it appears Cooler Master has also incorporated a separate PCB controller into the block housing to monitor temperature and dew point, which seems to have increased the size of the overall block housing. This is likely the reason the ML360 uses a separate, front-mounted pump, which you can see in Cooler Master’s documentation.
Lastly, Cooler Master is using a new seal, something like a collar, around the cold plate to insulate from condensation.
Pricing for Cooler Master’s ML360 is TBA, while EKWB’s EK-QuantumX Delta TEC waterblock is set at $360. Of course, these coolers are only compatible with Intel’s LGA1200 socket.
Editorial: Eric Hamilton
Additional Reporting: Steve Burke
Video: Keegan Gallick