Bulldozer for Servers: Testing AMD's "Interlagos" Opteron 6200 Series
by Johan De Gelas on November 15, 2011 5:09 PM EST

Benchmark Configuration
Since AMD sent us a 1U Supermicro server, we had to fall back on our 1U servers for testing; that is why we went back to the ASUS RS700 for the Xeon. It is a bit unfortunate, as 1U servers on average have a worse performance/watt ratio than other form factors such as 2U and blades. Of course, 1U still makes sense in low-cost, high-density HPC environments.
Supermicro A+ server 1022G-URG (1U Chassis)
CPU: Two AMD Opteron "Bulldozer" 6276 at 2.3GHz or two AMD Opteron "Magny-Cours" 6174 at 2.2GHz
RAM: 64GB (8x8GB) DDR3-1600 Samsung M393B1K70DH0-CK0
Motherboard: Supermicro H8DGU-F
Internal Disks: 2x Intel SLC X25-E 32GB or 1x Intel MLC SSD 510 120GB
Chipset: AMD SR5670 + SP5100
BIOS version: v2.81 (10/28/2011)
PSU: Supermicro PWS-704P-1R 750W
The AMD CPUs have four memory channels each. The new Interlagos Bulldozer CPU supports DDR3-1600, so our dual-CPU configuration gets eight DIMMs (one per channel) for maximum bandwidth.
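As a back-of-the-envelope illustration (our arithmetic, not a measurement from the article), the theoretical peak bandwidth of this fully populated configuration works out as follows:

```python
# Hypothetical peak-bandwidth calculation for the dual Interlagos setup:
# DDR3-1600 on a 64-bit channel, four channels per socket, two sockets.

MT_PER_S = 1600          # DDR3-1600: 1600 megatransfers per second
BYTES_PER_TRANSFER = 8   # 64-bit memory channel
CHANNELS_PER_CPU = 4
CPUS = 2

per_channel_gb_s = MT_PER_S * BYTES_PER_TRANSFER / 1000   # 12.8 GB/s
total_gb_s = per_channel_gb_s * CHANNELS_PER_CPU * CPUS   # 102.4 GB/s

print(f"Peak per channel: {per_channel_gb_s} GB/s")
print(f"Theoretical peak, both sockets: {total_gb_s} GB/s")
# With eight DIMMs available, one DIMM per channel is the minimal
# configuration that still populates every channel.
```

This is the theoretical ceiling; sustained bandwidth in practice is of course lower.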
Asus RS700-E6/RS4 1U Server
CPU: Two Intel Xeon X5670 at 2.93GHz (six cores) or two Intel Xeon X5650 at 2.66GHz (six cores)
RAM: 48GB (12x4GB) Kingston DDR3-1333 FB372D3D4P13C9ED1
Motherboard: ASUS Z8PS-D12-1U
Chipset: Intel 5520
BIOS version: 1102 (08/25/2011)
PSU: 770W Delta Electronics DPS-770AB
To speed up testing, we ran the Intel Xeon and AMD Opteron systems in parallel. As we didn't have more than eight 8GB DIMMs, we outfitted the Xeon system with our 4GB DDR3-1333 DIMMs. The Xeon system thus only gets 48GB, but this is no disadvantage: our benchmark with the highest memory footprint (vApus FOS, 5 tiles) uses no more than 36GB of RAM.
We measured the difference between 12x4GB and 8x8GB of RAM and recalculated the power consumption for our power measurements (the differences were very small). There is no alternative, as our Xeon has only three memory channels per CPU and cannot be outfitted with the same amount of RAM as our Opteron system (four channels).
We chose the Xeons based on AMD's positioning. The Xeon X5649 is priced at the same level as the Opteron 6276, but we didn't have the X5649 in the labs. As we suggested earlier, the Opteron 6276 should reach the performance of the X5650 to be attractive, so we tested with the X5670 and X5650. Because of time constraints, some tests were run with the X5670 only.
Common Storage System
For the virtualization tests, each server gets an Adaptec 5085 PCIe x8 controller (driver aacraid v1.1-5.1[2459] b 469512) connected to six Cheetah 300GB 15000 RPM SAS disks (RAID-0) inside a Promise JBOD J300s. The virtualization testing requires more storage IOPS than our standard Promise JBOD with six SAS drives can provide. To counter this, we added internal SSDs:
- We installed the Oracle Swingbench VMs (vApus Mark II) on two internal X25-E SSDs (no RAID). The Oracle database is only 6GB large. We test with two tiles. On each SSD, each OLTP VM accesses its own database data. All other VMs (web, SQL Server OLAP) are stored on the Promise JBOD (see above).
- With vApus FOS, Zimbra is the I/O intensive VM. We spread the Zimbra data over the two Intel X25-E SSDs (no RAID). All other VMs (web, MySQL OLAP) get their data from the Promise JBOD (see above).
We monitored disk activity, and physical disk adapter latency (as reported by VMware vSphere) was between 0.5 and 2.5 ms.
Software configuration
All vApus testing was done on vSphere 5--VMware ESXi 5.0.0 (b 469512 - VMkernel SMP build-348481 Jan-12-2011 x86_64) to be more specific. All VMDKs use thick provisioning and are independent and persistent. The power policy is "Balanced Power" unless indicated otherwise. All other testing was done on Windows 2008 R2 SP1.
Other notes
Both servers were fed by a standard European 230V (16 Amps max.) powerline. The room temperature was monitored and kept at 23°C by our Airwell CRACs.
We used the Racktivity ES1008 Energy Switch PDU to measure power. Using a PDU for accurate power measurements might seem pretty insane, but this is not your average PDU. The measurement circuits of most PDUs assume that the incoming AC is a perfect sine wave, but it never is. The Racktivity PDU, however, measures true RMS current and voltage at a very high sample rate: up to 20,000 measurements per second for the complete PDU.
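To illustrate why the true-RMS distinction matters, here is a small sketch (our illustration, not Racktivity's actual algorithm) comparing the true RMS of a pure sine wave with that of a flat-topped wave of the kind switching power supplies tend to draw:

```python
import math

def rms(samples):
    """True RMS: square root of the mean of the squared samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

N = 20000  # illustrative sample count; the Racktivity PDU samples up to 20,000/s
sine = [325.0 * math.sin(2 * math.pi * 50 * t / N) for t in range(N)]
# A hypothetical flat-topped ("clipped") waveform, distorted from a pure sine:
clipped = [max(min(s, 250.0), -250.0) for s in sine]

print(round(rms(sine), 1))     # ~229.8 V, i.e. 325/sqrt(2): the sine assumption holds
print(round(rms(clipped), 1))  # noticeably lower: assuming peak/sqrt(2) would be wrong
```

A meter that assumes a perfect sine and derives RMS from the peak would report the same value for both waveforms; true-RMS sampling does not.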
106 Comments
JohanAnandtech - Thursday, November 17, 2011 - link
1) Niagara is NOT a CMT. It is interleaved multithreading with SMT on top.
I haven't studied the latest Niagaras, but the T1 was a fine-grained multi-threaded CPU. It switched between threads like a Gatling gun and could not execute two threads at the same time.
Penti - Thursday, November 17, 2011 - link
SPARC T2 and onwards have additional ALU/AGU resources for a half-physical two-thread (four logical) solution per core with a shared scheduler/pipeline, if I remember correctly. That's not when CMT entered the picture according to Sun and Sun engineers, anyway. They regard the T1 as CMT as it's chip-level; it's not just a CMP chip anyhow. SMT is just running multiple threads on the CPUs, while CMP works the same as SMP but on a single socket. It is not the same as AMD's solution, however.

Phylyp - Tuesday, November 15, 2011 - link
Firstly, this was a very good article, with a lot of information, especially the bits about the differences between server and desktop workloads.

Secondly, it does seem that you need to tune either the software (power management settings) or the chip (CMT) to get the best results from the processor. So what advice is AMD offering its customers in terms of this tuning? I wouldn't want to pony up hundreds of dollars and then have to search the web for little tidbits like switching off CMT in certain cases, or enabling high-performance power management.
Thirdly, why is the BIOS reporting 32 MB of L2 cache instead of 8 MB?
mino - Wednesday, November 16, 2011 - link
No need for tuning - turbo is OS-independent (unless OS power management explicitly disables it, as Windows does). Just disable power management at the OS level (= High Performance for Windows) and you are good to go.
JohanAnandtech - Thursday, November 17, 2011 - link
The BIOS is simply wrong. It should have read 16 MB (2 Orochi dies with 8 MB L3 each).

gamoniac - Tuesday, November 15, 2011 - link
Thanks, Johan. I run Hyper-V on Windows Server 2008 R2 SP1 on a Phenom II X6 (my workstation) and have noticed the same CPU issue. I previously fixed it by disabling AMD's Cool'n'Quiet BIOS setting. Switching to high performance increased my overall power usage by 9 watts but corrected the CPU capping issue you mentioned.

Yet another excellent article from AnandTech. Well done. This is how I don't mind spending 1 hour of my precious evening time.
mczak - Tuesday, November 15, 2011 - link
L1 data and instruction caches are swapped (instruction is 8x64kB 2-way, data is 16x16kB 4-way). L2 is 8x2MB 16-way.
JohanAnandtech - Thursday, November 17, 2011 - link
Fixed. My apologies.

hechacker1 - Tuesday, November 15, 2011 - link
Curious if those syscalls for virtualization were improved at all. I remember Intel touting they improved the latency each generation.
http://www.anandtech.com/show/2480/9
I'm guessing it's worse considering the increased general cache latency? I'm not sure how the latency, or syscall, is related if at all.
Just curious as when I do lots of compiling in a guest VM (Gentoo doing lots of checking of packages and hardware capabilities each compile) it tends to spend the majority of time in the kernel context.
hechacker1 - Tuesday, November 15, 2011 - link
Just also wanted to add: before I had a VT-x enabled chip, it was unbearably slow to compile software in a guest VM. I remember measuring latencies of seconds for some operations.

After getting an i7 920 with VT-x, it considerably improved, and most operations are in the hundred or so millisecond range (measured with latencytop).
I'm not sure how the latest chips fare.
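A quick, rough way to gauge syscall round-trip cost from inside a guest is to time a cheap syscall in a tight loop. This is only a sketch of the idea (latencytop, mentioned above, instruments the kernel itself and is far more precise):

```python
# Rough microbenchmark of syscall round-trip cost: time os.getpid(), one of
# the cheapest syscalls, over many iterations. Inside a VM without hardware
# virtualization assists, kernel entries like this are where time piles up.
import os
import timeit

N = 200_000
total = timeit.timeit(os.getpid, number=N)  # timeit accepts a callable directly
ns_per_call = total / N * 1e9

print(f"~{ns_per_call:.0f} ns per getpid() round trip")
```

Note that some libc versions cache getpid(), and Python adds interpreter overhead on top of the raw syscall, so treat the number as a relative indicator rather than an absolute measurement.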