News

NNP-T 1000 is expected to accelerate AI efforts at Baidu by complementing the Xeon Scalable Processor infrastructure and significantly speeding up deep learning training jobs.
Taking the form of the Habana Gaudi 2 training and Habana Greco inference processors, the new hardware has been purpose-built for AI deep learning applications using 7nm technology.
Much of Panda’s work focuses on the optimized MPI stack, called MVAPICH, which was developed by his teams and now powers the #1 supercomputer in the world, the Sunway TaihuLight machine in China. He ...
At the O’Reilly Artificial Intelligence conference earlier this week, Baidu Research announced DeepBench, an open source benchmarking tool for evaluating the performance of deep learning ...
These sentiments are echoed by Naveen Rao, CEO of Nervana Systems, a deep learning startup that has put its $28 million in funding to the TSMC 28 nanometer test with a chip expected in Q1 of 2017. With ...
Deep neural networks (DNNs) can be taught nearly anything, including how to beat us at our own games. The problem is that training AI systems ties up big-ticket supercomputers or data centers for ...
Deep learning applications. There are many examples of problems that currently require deep learning to produce the best models. Natural language processing (NLP) is a good one.
Deep learning models are typically powered by graphics processing units (GPUs), specialized chips, and other infrastructure components that can be quite expensive, especially at the scale that ...