
Intel, ARM, and Nvidia jointly release a draft FP8 specification for artificial intelligence

Chipmakers Intel, ARM, and Nvidia have jointly released a draft specification for a common AI interchange format, which aims to make AI processing faster and more efficient.

In the draft, Intel, ARM, and Nvidia propose that AI systems adopt the 8-bit FP8 floating-point format. The companies say FP8 can make better use of hardware memory and so accelerate AI development, and that it is suitable for both training and inference, enabling faster and more efficient AI systems.


When developing an AI system, the key problem data scientists face is not just collecting large amounts of training data; they must also choose a format for representing the system's weights. Weights are the parameters an AI system learns from its training data, and they determine the quality of its predictions. It is the weights that let a system like GPT-3 generate entire paragraphs from a sentence-long prompt, or DALL-E 2 generate photorealistic images from a caption.

The formats most commonly used for AI system weights are half-precision floating point (FP16) and single-precision floating point (FP32): the former represents each weight with 16 bits, the latter with 32.
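To make the difference concrete, here is a minimal Python sketch (using NumPy; the one-million-weight tensor is an arbitrary example, not from the article) comparing the memory footprint of the same weights at the two precisions:

```python
import numpy as np

# The same one-million-weight tensor stored at two precisions.
weights_fp32 = np.random.randn(1_000_000).astype(np.float32)
weights_fp16 = weights_fp32.astype(np.float16)

print(weights_fp32.nbytes)  # 4000000 bytes: 32 bits (4 bytes) per weight
print(weights_fp16.nbytes)  # 2000000 bytes: 16 bits (2 bytes) per weight
```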

Half-precision and lower-precision formats reduce the memory needed to train and run an AI system, speed up computation, and can even cut bandwidth and power consumption. The trade-off is accuracy: with fewer bits than single precision, numbers are represented more coarsely.
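The precision cost is easy to demonstrate: a value such as 0.1 has no exact binary representation in any floating-point format, and the approximation gets coarser as the mantissa shrinks. A small illustration:

```python
import numpy as np

# 0.1 is inexact in binary; fewer mantissa bits give a coarser approximation.
print(float(np.float32(0.1)))  # 0.10000000149011612 (23-bit mantissa)
print(float(np.float16(0.1)))  # 0.0999755859375     (10-bit mantissa)
```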

However, many companies in the industry, including Intel, ARM, and Nvidia, see the 8-bit FP8 format as the best choice. Shar Narasimhan, director of product marketing at Nvidia, noted in a blog post that in use cases such as computer vision and image-generation systems, FP8 is comparable in accuracy to half precision while delivering “significant” speedups.

Nvidia, ARM, and Intel say they will make FP8 an open, license-free standard that other companies can adopt, and the three describe the format in detail in a white paper. Narasimhan said the specification will be submitted to the IEEE, the technical standards body, to determine whether FP8 can become a common format across the AI industry.
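For a feel for what an 8-bit float can represent, the white paper's E4M3 variant (1 sign bit, 4 exponent bits, 3 mantissa bits) can be simulated in software. The sketch below is an illustration only, assuming round-to-nearest and saturation at E4M3's maximum finite value of 448; it is not the hardware implementation:

```python
import numpy as np

def quantize_fp8_e4m3(x):
    """Round float32 values to the nearest FP8 E4M3 value (simulation only).

    E4M3: 1 sign bit, 4 exponent bits (bias 7), 3 mantissa bits.
    Values beyond the maximum finite magnitude (448) saturate here.
    """
    x = np.asarray(x, dtype=np.float32)
    sign, mag = np.sign(x), np.minimum(np.abs(x), 448.0)
    # Exponent of each value; the smallest normal is 2**-6, and anything
    # smaller falls into the subnormal range with a fixed step of 2**-9.
    exp = np.clip(np.floor(np.log2(np.maximum(mag, 2.0**-9))), -6, 8)
    step = 2.0 ** (exp - 3)  # value spacing left by a 3-bit mantissa
    return (sign * np.round(mag / step) * step).astype(np.float32)

weights = np.array([0.1234, 1.7, 300.0, 500.0], dtype=np.float32)
print(quantize_fp8_e4m3(weights))  # ~[0.125, 1.75, 288., 448.]: coarse steps, saturation
```

With only eight bits the representable values are sparse; what makes FP8 workable, per the companies' results, is that model accuracy in training and inference tolerates this coarseness.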

“We believe that a common interchange format will enable rapid advancements in hardware and software platforms, improve interoperability, and thereby advance AI computing,” Narasimhan said.

Of course, the three companies have their own stakes in promoting FP8 as a common interchange format: Nvidia’s GH100 Hopper architecture already implements FP8 support, and Intel’s Gaudi2 AI training chip supports the format as well.

But a common FP8 format would also benefit competitors such as SambaNova, AMD, Groq, IBM, Graphcore, and Cerebras, all of which have experimented with or adopted FP8 in developing their AI systems.

Simon Knowles, co-founder and CTO of AI systems developer Graphcore, wrote in a blog post in July that “the advent of 8-bit floating point offers huge advantages in performance and efficiency for AI compute.” Knowles also called it “an opportunity” for the industry to settle on a “single open standard,” which is far better than competing with a muddle of formats.

