GIGABYTE this month introduced its ThunderXStation workstation, based on two Cavium ThunderX2 processors featuring the Armv8 architecture. The machine is aimed primarily at software developers porting applications to, or developing them for, Armv8 platforms. The ThunderXStation is already available in the US.

GIGABYTE's ThunderXStation (W281-T90) comes in a 4U tower chassis and is based on a dual-socket motherboard supporting Cavium's ThunderX2 SoCs, which feature up to 64 custom Armv8 cores with four-way SMT as well as 16 DDR4 memory channels (1 DPC, 8 channels per SoC) when two CPUs are installed. A dual-processor ThunderXStation comes with two PCIe Gen3/OCP x16 slots, four PCIe Gen3 slots (two x16 and two x8), four M.2 (NVMe/PCIe 3.0 x4) slots for SSDs, two U.2/SATA 2.5” bays for SSDs/HDDs, two 10 GbE/GbE QLogic NICs, NVIDIA’s GeForce GT 710 GPU, an 800 W redundant PSU, and so on.

Image from ServeTheHome, taken at OCP Summit 2018

From a hardware standpoint, the ThunderXStation looks rather versatile: its numerous expansion slots let developers install the add-on cards, accelerators, and storage devices their applications require. The system also has an Aspeed AST2500 server-management chip, bringing it even closer to the target machines that will run the ThunderX2 in production.

The ThunderXStation ships with Ubuntu 17.10, but can come with CentOS 7.4 or OpenSUSE if required. It also comes with preinstalled software development tools, including gcc 7.2, LLVM, gdb, Golang, OpenJDK 9.0, HHVM, Python, PHP, Ruby, etc. The OS supports KVM and Docker, enabling developers to test their apps in hypervisor-based or containerized environments. The machine ships with open-source graphics drivers, primarily because it comes with a GeForce GT 710 graphics card (open-source drivers should be reasonably mature for such an old architecture).
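For a developer unboxing such a machine, a first sanity check of the preinstalled stack might look like the following. This is a hypothetical sketch: the exact versions reported will depend on the image GIGABYTE ships.

```shell
# Confirm the advertised toolchain is present and that we are really
# on an Armv8 box (version numbers here are from the article and may
# differ on a given image):
gcc --version | head -n1   # the box ships with gcc 7.2
python3 --version          # Python is preinstalled as well
uname -m                   # reports aarch64 on the ThunderXStation
```

The same kind of check extends naturally to the rest of the listed stack (gdb, Go, OpenJDK, Docker, and so on).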

GIGABYTE's ThunderXStation
  Preliminary Specifications
CPU: Single or dual Cavium ThunderX2, 32 cores at 2.2 GHz
Memory: 8 × DDR4-2666 DIMM slots per CPU
Storage: 2 × M.2 slots for PCIe/NVMe SSDs
         2 × 2.5" SATA/U.2 bays
Wireless: unknown/none
Ethernet: 1 × Gigabit Ethernet; can be outfitted with 10 GbE cards
PCIe: 1 × PCIe Gen 3/OCP x16 per CPU
      2 × PCIe Gen 3 slots (x16 and x8) per CPU
Display Outputs: 1 × D-Sub for management; others via discrete graphics
Audio: unknown
USB: 4 × USB 3.0 Type-A
PSU: 800 W redundant
OS: Ubuntu 17.10; CentOS 7.4 or OpenSUSE optional

One of the reasons why contemporary Armv8 SoCs are rarely used in datacenters is the lack of software support. Developers need to recompile their programs for Arm, but since very few have access to the hardware and appropriate tools, porting is proceeding very slowly. GIGABYTE’s ThunderXStation will enable more parties to work on server software for Arm platforms. This is similar to how Intel approached the Xeon Phi ecosystem, launching a tower workstation built around a Knights Landing Xeon Phi as the host processor for development work.

The Armv8-based workstation for software developers is now available from PhoenicsElectronics in the U.S. GIGABYTE does not publish pricing for the ThunderXStation, but we have reached out to PhoenicsElectronics and will update the story once we get more information.

Sources: Cavium, ThunderXForums, Liliputing

Comments

  • ZeDestructor - Tuesday, March 27, 2018 - link

    Yes.. desktop case....

    That case is a 4U (possibly 5U) rackmount case (complete with PCIe riser and everything!) with a lid and some feet. Get the right rails and you can rackmount it just fine.
  • tipoo - Tuesday, March 27, 2018 - link

    Do we know much about the ThunderX2? How wide, how pipelined, cache, etc?

    That's some hefty cooling for an ARM part, but with 32 cores maybe each core isn't that extraordinary on its own?

    I'd love for this to be dug into!
  • Kevin G - Tuesday, March 27, 2018 - link

    It isn't just the massive number of cores; IO has also scaled significantly upward compared to the mobile ARM parts.
  • tuxRoller - Wednesday, April 4, 2018 - link

    There's been a little data here and there.

    On SPEC 2017, the 32-core ThunderX2 is as fast as the Xeon 6148 (20 cores) when they're both using gcc (when using icc, Intel has about a 30% advantage).
    The core is thought to have the following (all caches 8-way):
    2048-entry TLB
    180-entry reorder window
    8-wide fetch
    rename of 4 uops per cycle into a 60-slot scheduler
    3 ALUs, with 2 overloaded for FP/NEON and 1 overloaded for branches
    2 load/store units
    1 store-data unit
  • HStewart - Tuesday, March 27, 2018 - link

    My first thought: who would buy such a machine? I've been a developer for 30+ years and it has absolutely no value for me.

    As for the number of cores, I think we're going through a silly core war right now. It's not the number of cores that matters, but the combination of core count and per-core performance. I am sure that box is nowhere close to a dual 32-core x86-style CPU box (supply your favorite vendor here).

    On the GeForce 710: well, I have a GeForce 740 in my dual Xeon 5160, an 11-year-old machine, and it's the best graphics card I've purchased for that box.
  • vanilla_gorilla - Tuesday, March 27, 2018 - link

    If you're not doing ARM development, you're not really the target audience. But if you are doing software development, especially development that can benefit from lots of parallel compilation (make -j 64) then this could be really useful.
  • HStewart - Tuesday, March 27, 2018 - link

    It's been about 8 years, and most of that was old Pocket PC stuff; it was basically done with a cross compiler. I guess similar stuff could be done on an ARM system, but I still think cross compiling is used more often.

    For example, Apple iOS development is usually done on Apple OS X.
  • ZeDestructor - Tuesday, March 27, 2018 - link

    Debugging is much nicer to do on a local system than on a thing in a rack far away. Sure, you could use an emulator, but given that you're targeting servers, emulator performance sucks, especially when you're trying to debug stuff that doesn't reproduce on anything but the target system (timing issues, for example).
  • Elstar - Tuesday, March 27, 2018 - link

    Calling it "much nicer" is too kind. Remote debugging is a pain in the butt, especially for low-level stuff. You need all sorts of infrastructure: a "smart" power strip to do hard remote reboots, a second machine to receive serial console output, etc, etc.

    With a "dev box" like this in your office, you can manually force a reboot with your bare hands. And you can opt out of the serial console if you're willing to turn your head and look at the video output of the "dev box". Also, if you're working on a device driver, you really just need physical access at the end of the day.
  • ZeDestructor - Tuesday, March 27, 2018 - link

    I've done it with small ARM devkits in the past, and since I had all the necessary infra already set up (besides a remote power switch), it was quite nice. Plus, since the rack was right at home, if I ever needed to go look at das blinkenlights, it was completely fine to do that too.

    Being in the other room is a far cry from being in the DC though :P
