Notes from NIPS 2015

The annual NIPS (Neural Information Processing Systems) Conference was held last week in Montreal, drawing a record 4,000 attendees. Baidu Research sent a team of people, including Sharan Narang from the SVAIL (Silicon Valley AI Lab) systems team. This was Sharan's first time at NIPS, and he shares notes about his experience below:

Hi, I’m Sharan. I have been at Baidu Research for about three months. I’m part of the Silicon Valley Artificial Intelligence Lab (SVAIL) and I work on our deep learning framework.

NIPS offered a variety of tutorials, talks, spotlights and workshops. Here are some highlights:

I found Rich Sutton’s Introduction to Reinforcement Learning to be enlightening since I don’t have a background in this area.

The poster sessions provided an opportunity to talk to researchers and learn more about the reasoning behind their work. The following stood out to me:

Semi-Supervised Learning with Ladder Networks, Rasmus et al.
This paper presented an architecture that combines supervised and unsupervised objectives to learn useful representations from very few labeled examples. The authors achieved results comparable to fully supervised architectures trained with far more labels.

Learning Both Weights and Connections for Efficient Neural Networks, Han et al.
The authors demonstrated that zeroing out small-magnitude weights in a neural network after training doesn't significantly impact its accuracy. This pruning leads to significant memory savings; memory has become the bottleneck for several modern AI algorithms. A rough sketch of the idea follows.
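To make the idea concrete, here is a minimal sketch of magnitude-based pruning in NumPy. The `prune_by_magnitude` helper, the sparsity level, and the toy weight matrix are illustrative assumptions on my part; the paper's full procedure also retrains the remaining weights after pruning, which this sketch omits.

```python
import numpy as np

def prune_by_magnitude(weights, sparsity=0.75):
    """Zero out the smallest-magnitude entries so that roughly
    `sparsity` fraction of the weights become zero (illustrative only)."""
    threshold = np.percentile(np.abs(weights), sparsity * 100)
    mask = np.abs(weights) >= threshold
    return weights * mask, mask

# Toy example: prune 75% of a random 4x4 weight matrix
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))
W_pruned, mask = prune_by_magnitude(W, sparsity=0.75)
print(f"Nonzero weights kept: {mask.sum()} of {mask.size}")
```

Because most entries end up zero, the pruned matrix can be stored in a sparse format, which is where the memory savings come from.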

During the Deep Learning symposium on Thursday, Jesse Engel and Shubho Sengupta of SVAIL presented a poster on Baidu's Deep Speech 2, which was announced at the conference. The paper describes the progress our lab has made over the past year on an end-to-end deep learning approach that can recognize either English or Mandarin Chinese speech.

Thursday’s symposium on the Societal Impacts of Machine Learning included panel sessions with leaders like Andrew Ng and Yann LeCun regarding the future of AI. Andrew brought up points about the potential impact of AI on employment, a topic I believe needs to be discussed more often.

On Friday and Saturday, I attended the Multimodal Workshop and the Reasoning, Attention and Memory (RAM) Workshop. Li Deng's keynote on Cross-Modality Distant Supervised Learning for Speech, Text, and Image Classification and Alex Graves' talk on Smooth Operators: the Rise of Differentiable Attention in Deep Learning were the highlights for me. Li Deng demonstrated a novel approach that allows learning across different modalities, such as speech, images, and text, by embedding them into vectors. Alex Graves presented various forms of attention and showed how differentiable attention can be used to improve the performance of supervised learning algorithms.
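As a rough illustration of what "differentiable attention" means, here is a minimal sketch of content-based soft attention in NumPy. The dot-product scoring function and the toy dimensions are illustrative assumptions on my part, one simple variant among the several forms Graves discussed.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def soft_attention(query, keys, values):
    """Content-based soft attention: every value contributes,
    weighted by how well its key matches the query, so the whole
    read is differentiable end to end."""
    scores = keys @ query        # similarity of the query to each key
    weights = softmax(scores)    # normalized attention weights
    return weights @ values      # weighted average of the values

# Toy example: 5 memory slots with 3-dimensional keys and values
rng = np.random.default_rng(0)
keys = rng.normal(size=(5, 3))
values = rng.normal(size=(5, 3))
query = rng.normal(size=3)
print(soft_attention(query, keys, values))
```

Because every slot receives a nonzero weight, gradients flow through the entire read, which is what lets attention be trained by ordinary backpropagation rather than requiring hard, discrete selection.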

Baidu hosted an evening reception that featured a live, interactive Q&A with Andrew Ng. The audience asked a lot of good questions. Andrew commented that high performance computing for AI, speech recognition and self-driving cars are topics that are particularly exciting for him. By coincidence, the Baidu autonomous car was announced on the same day as the reception, and Andrew revealed that it had demonstrated full autonomy under mixed road conditions in Beijing. I'm personally excited about this news because I grew up in India, where driving is quite chaotic and many people spend a lot of time commuting. I think what we learn in China could be applicable in other places, like where I grew up. I'm looking forward to seeing how AI and machine learning will positively impact billions of lives in the future!
