Deep learning with only a few examples
One of the biggest limitations of deep learning is its dependence on large numbers of training examples to reach satisfactory accuracy. Clever data augmentation and aggressive regularization partially relieve the issue, but the problem is far from solved. In this talk we’ll go through methodologies that make deep learning applicable in data-sparse settings, focusing on several recent techniques developed for the one-shot learning paradigm – that is, learning representations from one, or just a few, training examples.
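To make the paradigm concrete, here is a minimal sketch of one popular few-shot approach, nearest-prototype classification in the style of prototypical networks. This is an illustration, not the talk's specific method: it uses toy hand-made 2-D "embeddings" in place of a learned encoder, and all names are hypothetical.

```python
import numpy as np

def prototypes(support_embeddings, support_labels, n_classes):
    # One prototype per class: the mean embedding of its few support examples.
    return np.stack([
        support_embeddings[support_labels == c].mean(axis=0)
        for c in range(n_classes)
    ])

def classify(query_embeddings, protos):
    # Assign each query to the class of the nearest prototype (Euclidean distance).
    dists = np.linalg.norm(
        query_embeddings[:, None, :] - protos[None, :, :], axis=-1
    )
    return dists.argmin(axis=1)

# Toy 2-way, 1-shot episode: a single labeled example per class.
support = np.array([[0.0, 0.0], [5.0, 5.0]])
labels = np.array([0, 1])
protos = prototypes(support, labels, n_classes=2)

queries = np.array([[0.5, -0.2], [4.8, 5.1]])
print(classify(queries, protos))  # → [0 1]
```

In a real system the embeddings would come from a neural encoder trained episodically across many such small tasks, so that nearest-prototype classification generalizes to classes never seen in training.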