So far, the servers within Microsoft Azure data centers have run Intel processors (CPUs). For a long time I’ve wondered whether the power efficiency of ARM CPUs could make them more cost effective than the more powerful Intel x64 CPUs. Through parallel computing, distributing a load across many ARM CPU cores that consume less power could be more cost effective than distributing the same load across fewer, more powerful Intel CPUs. Since I first came up with the idea, I’ve assumed that ARM would be more cost effective; however, I haven’t seen anything to back that up. With recent news about Microsoft exploring Windows Server running on ARM, and ARM-based cloud servers, it looks like they’re dedicating some serious money to this very research effort.
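The cost-effectiveness argument boils down to simple arithmetic: under a fixed power budget, many low-power cores can deliver more total throughput than a few high-power cores, provided the workload parallelizes across them. Here’s a minimal sketch in Python; every number in it is a made-up illustrative assumption, not a measured or published figure:

```python
def throughput_in_budget(power_budget_w, watts_per_core, perf_per_core):
    """Relative throughput that fits in a fixed power budget, assuming
    the workload scales across however many cores the budget allows."""
    cores = power_budget_w // watts_per_core
    return cores * perf_per_core

BUDGET = 150  # watts available per server (assumed figure)

# Fewer, faster, hungrier x64-style cores vs. many slower, low-power
# ARM-style cores -- all per-core numbers below are hypothetical.
x64_throughput = throughput_in_budget(BUDGET, watts_per_core=9, perf_per_core=10)
arm_throughput = throughput_in_budget(BUDGET, watts_per_core=3, perf_per_core=5)

print(f"x64: {x64_throughput}, ARM: {arm_throughput}")  # prints "x64: 160, ARM: 250"
```

Under these assumed numbers the many-core ARM configuration wins on total throughput per watt; the catch is the assumption in the first line, since a workload that doesn’t parallelize gets no benefit from the extra cores.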
ARM has already revolutionized mobile devices and Internet of Things (IoT). Could the next step for ARM CPUs be to revolutionize the Cloud and server market?
Windows Server + ARM CPUs
We’ve already seen the desktop edition of Windows 8 running on ARM processors (CPUs), even though it was a bit of a flop due to performance and application support. We’ve also seen Windows Phone successfully running on ARM CPUs, even though it has had difficulty gaining market share traction. Additionally, Microsoft currently supports Windows 10 IoT Core running on multiple Internet of Things (IoT) devices built with ARM CPUs. The real point here is that Microsoft has been successfully running its Operating System on ARM CPUs for YEARS!
It’s been known that Apple works in parallel to ensure their Operating System (OS) runs on multiple CPU architectures, from Intel to ARM. There’s proof of this in the fact that the core of macOS is shared with iOS, watchOS, and tvOS. On the Microsoft front, it’s just been rumored for years that they’re seriously working on supporting both Intel (x86/x64) and ARM CPUs with Windows Server. We saw a glimpse of this with Windows 8, Windows Phone, and Windows 10 IoT Core; however, Microsoft has been reluctant to make any claims about Windows Server.
Given this history of market competition and Microsoft’s own products, it seems perfectly obvious that they have been running Windows Server on ARM CPUs internally within Microsoft, and have been doing so for a long time already. This week Microsoft finally released some information about what they are doing with ARM CPUs in relation to Windows Server!
In the recent announcement, Microsoft admits to porting a version of Windows Server to the ARM CPU Architecture for internal testing of running Azure services! The ported code includes language runtime systems and middleware components. They’ve even ported some undisclosed applications to the ARM Architecture version of Windows Server, and evaluated them running side-by-side with Production workloads.
It’s possible, if they advance these efforts, that one day we could be choosing between Intel (x86/x64) and ARM Architectures when choosing an Azure Virtual Machine Instance Size! However, since all the published hardware designs for Cloud servers used within Microsoft Azure are based on the Intel (x86/x64) architecture, they will definitely need to start designing ARM Architecture based server hardware. And this is exactly what they’ve been doing too!
ARM CPUs + Azure Cloud Servers
There are many projects Microsoft works on in secret, and until this week at the 2017 Open Compute Project (OCP) U.S. Summit, Cloud servers using the ARM Architecture were one of them. Microsoft announced they are working to drive innovation with ARM server processors for use in their datacenters.
Among the beneficial aspects of using the ARM CPU Architecture to run server workloads, energy savings wasn’t really the factor mentioned. Microsoft has found many other reasons that make the ARM Architecture appealing in the datacenter. Much of this is due to the ubiquity of smartphones using ARM chips, and the slowdown of Moore’s Law.
Here’s a list of the reasons mentioned by Microsoft for why ARM servers stand out as an economically viable option in the future:
- Healthy ecosystem of developers and software that runs on ARM due to the prevalence and ubiquity of the smartphone market. This provided a significant pool of developers that Microsoft has been able to tap into when porting Windows Server and other cloud software to ARM.
- The healthy ecosystem of ARM server vendors has yielded greater development surrounding technical capabilities of ARM CPUs; including multiple CPU Cores, thread count, cache, instructions, connectivity options, and accelerators.
- It’s possible to take the existing benefits of ARM Server CPUs and enhance the architecture by optimizing the hardware to better fit cloud workloads. This could mean modifying the Instruction Set Architecture (ISA), but it would help minimize or even eliminate changes to the software that will run on these servers.
Microsoft has been working on designing ARM Architecture servers for the Cloud along with a number of ARM suppliers, including Qualcomm and Cavium. They’ve been cooperating to optimize these ARM Architectures for server workloads, allowing them to better fit the needs of the Cloud and Microsoft Azure.
A demonstration of Windows Server running on ARM was given at the event using a Qualcomm Centriq 2400 ARM Server CPU. The Qualcomm Centriq 2400 ARM Server CPU is based on Qualcomm’s recently announced 10nm process and features 48 CPU Cores, in addition to Qualcomm’s most advanced interfaces for memory, networking, and peripherals.
Another demonstration was given at the event using Cavium’s flagship 2nd generation 64-bit ThunderX2 ARMv8-A server CPU SoC (System on a Chip) that’s built for data centers, the cloud, and high performance computing applications.
Qualcomm and Cavium, alongside leading server supplier Inventec, have each developed an Open Compute-based ARM server motherboard for Microsoft’s Project Olympus. This allows these ARM Servers to be easily deployed to Microsoft’s datacenters alongside their existing servers.
Microsoft Project Olympus
Microsoft first introduced Project Olympus last October as a new, next-generation Open Source Cloud Hardware project. The Project Olympus server hardware designs include a new universal motherboard, a high-availability power supply with built-in batteries, a 1U/2U server chassis, high-density storage expandability, and standards-compliant rack management. This week at the 2017 Open Compute Project (OCP) U.S. Summit, Microsoft shared further developments of Project Olympus.
The initial work on Project Olympus centered around Intel CPUs and their next-generation Xeon Processors, codenamed Skylake. Future updates could also include accelerators via Intel FPGA (Field-Programmable Gate Array) or Intel Nervana solutions.
AMD has also been working to bring innovation to the Project Olympus server design through support for their new “Naples” CPU, targeting the application demands of high performance datacenter workloads.
Another long-term effort surrounding Project Olympus has been working alongside Qualcomm, Cavium, and other ARM suppliers to add support for ARM64 cloud servers. By supporting ARM CPUs with Project Olympus, in addition to x86/x64 CPUs (via Intel and AMD), Microsoft is aiming to create a truly compatible and swappable system that can support ANY CPU Architecture in the datacenter.
In addition to supporting multiple CPU architectures, Microsoft announced with NVIDIA and Ingrasys a new design to accelerate Artificial Intelligence! They are doing this through a new Project Olympus compatible chassis design. The Project Olympus hyperscale GPU accelerator chassis (HGX-1) is designed to support 8 of the latest “Pascal” generation NVIDIA GPUs and NVIDIA’s NVLink high-speed multi-GPU interconnect technology, in addition to providing high-bandwidth interconnectivity for up to 32 GPUs by connecting 4 HGX-1 chassis together. The new HGX-1 AI accelerator provides the scalability to meet the high performance demands of Machine Learning and AI workloads.
Cross Platform Workloads across Intel and ARM CPUs
While there are certainly many changes from a software developer perspective to building code that can run as interchangeably as Project Olympus chassis can be swapped out within a datacenter, it may turn out to be easier than we think. There are current technologies built to function as “compile once, run anywhere”. These are mainly Microsoft’s .NET Framework (even more so with .NET Core) and Java. There are even interpreted languages, like PHP, that could be easily ported to these new server platforms once the underlying language runtimes and other Operating System components are running; and it seems Microsoft may already be pretty close to done with that by now.
It’s been a lofty goal since the beginning of Java back in the 1990s to be “compile once, run anywhere”, and the original design of the .NET Framework back in 2001 had this goal as well. They’ve both struggled with it over the years, but there have been great strides in all aspects of software development in recent years that have proven fruitful in these efforts. Many of these can be attributed to smartphones. (Smartphone hardware and software design has seemingly been influencing all aspects of computer and server design.)
Developers can focus on current technologies for cross-platform workloads while Microsoft and other hardware manufacturers and vendors figure out the specifics of how to support x86, x64, ARM, FPGA, and other aspects of server hardware architectures. The current ecosystem of developer tools is already designed to support x86, x64, and ARM CPU architectures, so modern software developers already have the cross-platform skills necessary for the future, next-generation Cloud ecosystem whether they know it or not.
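As a small illustration of what “the same code everywhere” tends to look like in practice, the sketch below uses Python’s standard `platform` module to normalize the architecture name a host reports, so one script can log or branch identically on x64 and ARM servers. The alias table is my own assumption covering common names, not an exhaustive or official list:

```python
import platform

# Different OSes report CPU architecture under different names; normalize
# them so the same script behaves consistently on x64 and ARM hosts.
# This mapping is illustrative, not exhaustive.
ARCH_ALIASES = {
    "x86_64": "x64", "amd64": "x64",
    "aarch64": "arm64", "arm64": "arm64",
    "i386": "x86", "i686": "x86",
}

def normalized_arch(machine=None):
    """Return a normalized architecture name for this (or a given) machine."""
    machine = (machine or platform.machine()).lower()
    return ARCH_ALIASES.get(machine, machine)

print(normalized_arch())  # e.g. "x64" on an Intel host, "arm64" on an ARM host
```

The interpreter itself is what gets ported per architecture; the script above never changes, which is exactly the division of labor the runtime vendors are aiming for.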
There are certainly advancements in FPGA, AI, and Machine Learning development that will necessitate learning new technologies and gaining new skills. However, these advancements will likely come at a much faster pace than previous software development innovations and changes. The rate of innovation in the IT world is increasing, and it has proven very beneficial to existing cross-platform development and support, so it’s only logical that it’ll make the future, next-generation cross-platform requirements easier to meet as well.
The Cloud has been an exciting realm, and that excitement only keeps persisting; even possibly increasing as the scale of the Cloud grows!