Gait laboratory in your pocket: Measuring disease progression from single-camera videos.
Recent achievements in machine learning are transforming the foundations of many research disciplines. In this presentation, I will show how these developments affect the collection, modeling, and prediction of human movement, enabling research and clinical applications at unprecedented scale. I will present our recent paper* in which we show how deep learning and computer vision enable data collection for clinical decision-making using equipment 100x cheaper than previously required.
* Kidziński et al. "Deep neural networks enable quantitative movement analysis using single-camera videos," Nature Communications (2020). https://www.nature.com/articles/s41467-020-17807-z
Łukasz Kidziński is a research associate in the Neuromuscular Biomechanics Lab at Stanford University, applying state-of-the-art computer vision and reinforcement learning algorithms to broaden our understanding of human movement and performance. He is a co-founder and CTO of Saliency, a Stanford spin-off, where he automates imaging tasks in clinical trials. Previously, he was a researcher in the CHILI group (Computer-Human Interaction in Learning and Instruction) at the École Polytechnique Fédérale de Lausanne in Switzerland, where he developed methods for measuring and improving user engagement in massive open online courses and in physical classrooms. He obtained a Ph.D. in mathematical statistics at the Université Libre de Bruxelles, working on time series of multimodal data.