Intel has revealed a new weapon in the battle to dominate the artificial intelligence market -- the latest Xeon Phi chip -- which aims to get machines thinking on their own. The chipmaker can't let the AI market, which Nvidia already has a head start in, slip through its fingers, as it did with mobile phones.
Codenamed "Knights Mill," the third-generation Xeon Phi, announced Wednesday at the Intel Developer Forum, is a server processor made specifically to tackle artificial intelligence.
It features improved floating point performance, which speeds up machine learning. Machine learning allows computers to learn on their own, without being explicitly programmed by developers.
"While less than 10% of servers worldwide were deployed in support of machine learning last year, machine learning is the fastest growing field of AI and a key computational method for expanding the field of AI," explained Intel.
Intel says it can take weeks to train machines to recognize patterns and connections in complex data. That long training time means they cannot make decisions in real time. Boosting floating point performance in the Xeon Phi improves how machines handle the algorithms needed to make accurate, useful decisions in a more realistic time frame.
More important, the Xeon Phi can target deep learning, a branch of machine learning that uses neural networks to handle random and complex bits of data for image and speech recognition, natural language processing, and other tasks. Deep learning "emulates neurons and synapses in the brain, learning through iteration and the formation of complex pathways in the neural network," according to Intel.
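The "learning through iteration" Intel describes can be sketched in a few lines: a single artificial neuron repeatedly adjusts its connection weight until its output matches the training data. This toy example (learning the rule y = 2x in plain Python) is illustrative only, and every name and number in it is invented for the sketch; real deep learning stacks millions of such units into neural networks.

```python
# A minimal, hypothetical sketch of iterative learning: one "neuron"
# with a single synapse-like weight, nudged toward the training data
# on every pass. Numbers and names here are illustrative, not Intel's.

def train(samples, lr=0.1, epochs=100):
    w = 0.0  # the neuron's single weight, initially untrained
    for _ in range(epochs):
        for x, target in samples:
            pred = w * x            # forward pass: the neuron's output
            error = pred - target   # how far off the prediction is
            w -= lr * error * x     # adjust the weight to shrink the error
    return w

# Training data following the hidden rule y = 2x
samples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
print(round(train(samples), 3))  # prints 2.0
```

Each pass over the data is almost entirely floating point multiply-and-add work, which is why floating point throughput, on GPUs or on a chip like the Xeon Phi, is the bottleneck Intel is targeting.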
Right now, the market is largely owned by Nvidia. The company's GPUs handle many calculations at once in a process called parallel computing. Intel argues that relying on GPUs won't work over the long haul.
The Intel Xeon Phi processor family can offer up to 1.38 times better scaling than GPUs, the company claims. The big issue here is that GPUs are add-ons to CPUs: it takes time, however little, to send a set of calculations from the CPU to the GPU and back. The Xeon Phi handles the calculations itself, with no round trip to a GPU, which provides a speed boost.
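The round-trip cost Intel is pointing at can be captured with simple arithmetic: an offloaded job pays a transfer cost in each direction on top of the compute time, so for small workloads the transfer can dominate. All timings below are made-up parameters for the sketch, not measurements of any real GPU or Xeon Phi.

```python
# An illustrative model (hypothetical numbers) of offload overhead.
# GPU path: send the data over, compute, send the results back.
def offload_time_ms(compute_ms, transfer_ms):
    return transfer_ms + compute_ms + transfer_ms

# On-chip path: assume (purely for illustration) slower raw compute,
# but no transfer step at all.
def on_chip_time_ms(compute_ms, slowdown=2.0):
    return compute_ms * slowdown

# A small job: 1 ms of compute, 2 ms of transfer each way.
print(offload_time_ms(1, 2))  # prints 5 (transfer dominates)
print(on_chip_time_ms(1))     # prints 2.0 (no round trip)
```

With these invented numbers the on-chip path wins despite slower compute; for a large enough job the compute term dominates and the balance flips, which is why the argument is about workload size, not raw speed alone.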
Intel plans to push Xeon Phi further once it finalizes its acquisition of Nervana Systems, which it announced earlier this month.
"Nervana's Engine and silicon expertise will advance Intel's AI portfolio and enhance the deep learning performance and total cost of ownership of Intel Xeon and Intel Xeon Phi processors," Intel said in a statement about the acquisition.
Intel says it expects to ship the latest Xeon Phi in 2017.