PyTorch is one of the main tools used for machine learning research today. It has been developed in beta for two years, but this December, version 1.0 was finally released! In this talk, I’ll briefly introduce the library and then showcase the cutting-edge features we introduced recently.
The talk will be divided into multiple sections. First, an extremely quick introduction to what PyTorch is and what it can be used for (including use cases outside of machine learning!). Then, I will cover a number of topics relevant to the current state of the library, including:
– Hybrid frontend (JIT compiler)
– Path from research to production
– C++ API and inference
– Caffe2 merger
– New distributed backend
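To give a flavor of the first two topics, here is a minimal sketch of the hybrid frontend: tracing an eager-mode module into TorchScript, which can then be serialized and loaded for inference from the C++ API. The model architecture and tensor shapes are illustrative choices, not something specified by the talk; the sketch assumes PyTorch 1.0 or later.

```python
import torch

# An ordinary eager-mode model (architecture chosen purely for illustration)
class TwoLayerNet(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = torch.nn.Linear(4, 8)
        self.fc2 = torch.nn.Linear(8, 2)

    def forward(self, x):
        return self.fc2(torch.relu(self.fc1(x)))

model = TwoLayerNet()
example = torch.randn(1, 4)

# torch.jit.trace records the operations executed on the example input
# and produces a TorchScript module independent of the Python interpreter
traced = torch.jit.trace(model, example)

# The traced module matches the eager one on the same input...
assert torch.allclose(model(example), traced(example))

# ...and can be saved for loading from C++ via torch::jit::load
traced.save("model.pt")
```

This trace-then-serialize workflow is the "path from research to production" in miniature: the same model definition used in experiments becomes a deployable artifact.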
Adam Paszke is an author and maintainer of PyTorch. Despite being early in his career, he already has a few years of experience working with large organizations like Facebook AI Research and NVIDIA. He is currently pursuing a double major in Computer Science and Mathematics at the University of Warsaw. His general interests include graph theory, programming languages, algorithmics, and machine learning.