Advances in high performance computing are enabling researchers worldwide to make great progress in AI. In this video, Bryan Catanzaro, a senior researcher at Baidu Research’s Silicon Valley AI Lab, talks about AI projects at Baidu and how the team uses HPC to scale deep learning.

Some key points made by Bryan:

  • Progress in AI depends on reducing the time it takes to test ideas, and testing an idea means training a new model.
  • Baidu has built hardware and software systems that cut training time by scaling a single training run across multiple GPUs (a minimal data-parallel sketch follows this list).
  • Once a good model is trained, the next step is delivering it to users; inference is also computationally intensive, so Baidu runs it on GPUs as well (see the second sketch below).

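The video itself doesn’t include code, but the multi-GPU training point above is, at its core, synchronous data-parallel training. Here is a minimal NumPy sketch of that idea under illustrative assumptions (four simulated workers, a toy linear-regression model, a fixed learning rate); it is not Baidu’s actual training stack. Each simulated worker computes gradients on its shard of the minibatch, the gradients are averaged (the role an allreduce plays across real GPUs), and the replicated parameters are updated identically everywhere.

```python
import numpy as np

# Sketch of synchronous data-parallel SGD (illustrative only).
# In a real multi-GPU setup each "worker" is a GPU and the gradient
# average is an allreduce across devices; here workers are simulated
# in a loop so the example runs anywhere.

rng = np.random.default_rng(0)

num_workers = 4          # stands in for 4 GPUs (assumption)
dim = 8
lr = 0.1

# Toy linear-regression problem: y = X @ w_true + noise
w_true = rng.normal(size=dim)
X = rng.normal(size=(256, dim))
y = X @ w_true + 0.01 * rng.normal(size=256)

w = np.zeros(dim)        # replicated model parameters (same on every worker)

for _ in range(100):
    # Sample one global minibatch and split it evenly across workers.
    idx = rng.choice(len(X), size=64, replace=False)
    shards = np.array_split(idx, num_workers)

    # Each worker computes the gradient of the squared error on its shard.
    grads = []
    for shard in shards:
        Xs, ys = X[shard], y[shard]
        pred = Xs @ w
        grads.append(2.0 * Xs.T @ (pred - ys) / len(shard))

    # "Allreduce": average the per-worker gradients, then apply one update.
    g = np.mean(grads, axis=0)
    w -= lr * g

print("parameter error:", np.linalg.norm(w - w_true))
```

Because every worker sees a different shard but applies the same averaged gradient, adding workers lets the job process a larger effective batch per step without changing the model the workers hold.
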
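For the deployment point, one common way to make GPU inference efficient is to batch incoming user requests so the accelerator answers many queries per forward pass. The sketch below illustrates that general pattern only; it is not a description of Baidu’s serving system, and `model_forward`, `MAX_BATCH`, and `MAX_WAIT_S` are hypothetical names chosen for this example.

```python
import time
from queue import Queue, Empty

import numpy as np

# Illustrative sketch of batched inference serving: collect requests into a
# batch so the accelerator runs one large forward pass instead of many tiny ones.

MAX_BATCH = 8        # illustrative cap on batch size
MAX_WAIT_S = 0.01    # how long to wait for more requests before running

def model_forward(batch: np.ndarray) -> np.ndarray:
    """Stand-in for a trained network; here just a fixed linear layer."""
    weights = np.ones((batch.shape[1], 1))
    return batch @ weights

def serve_once(requests: Queue) -> list:
    """Drain up to MAX_BATCH requests (or wait MAX_WAIT_S) and run them together."""
    batch, deadline = [], time.monotonic() + MAX_WAIT_S
    while len(batch) < MAX_BATCH and time.monotonic() < deadline:
        try:
            batch.append(requests.get(timeout=MAX_WAIT_S))
        except Empty:
            break
    if not batch:
        return []
    outputs = model_forward(np.stack(batch))   # one batched forward pass
    return [outputs[i] for i in range(len(batch))]

# Usage: enqueue a few "user requests" and answer them with one batched call.
q = Queue()
for _ in range(5):
    q.put(np.random.rand(16))
print(len(serve_once(q)), "requests answered in one batched call")
```

The trade-off in this pattern is latency versus throughput: waiting briefly for more requests keeps the GPU busy, at the cost of a small delay for the first request in each batch.
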
The talk was delivered at SC15, the International Conference for High Performance Computing, Networking, Storage and Analysis, held in Austin, Texas.