Shownotes
Have you ever encountered a model that you know is scientifically sound, but that MCMC just won't sample? The model takes forever to run (if it runs at all), and in the end you're greeted with a pile of divergences. Yeah, I know, my stress levels start rising too whenever I hear the word « divergences »…
Well, you'll be glad to hear there are tricks to make these models run, and one of them is called re-parametrization. I bet you've already heard of the poorly named non-centered parametrization?
Well, fear no more! In this episode, Maria Gorinova will tell you all about these model re-parametrizations! Maria is a PhD student in Data Science & AI at the University of Edinburgh. Her broad interests range from programming languages and verification to machine learning and human-computer interaction.
More specifically, Maria is interested in probabilistic programming languages, and in exploring ways of applying program-analysis techniques to existing PPLs in order to improve the usability of the language or the efficiency of inference.
As you'll hear in the episode, she thinks a lot about the language aspect of probabilistic programming, and works on automating various "tricks" in probabilistic programming: automatic re-parametrization, automatic marginalization, and automatic, efficient model-specific inference.
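If you're curious what the non-centered trick actually does, here's a minimal sketch in plain Python/NumPy (my own illustration, not code from the episode; the names mu, tau and theta_raw are just placeholders): both versions encode the same prior over theta, but the non-centered one samples a standard-normal variable and rescales it deterministically, which is much kinder to HMC when tau is small.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical hyperparameters of a hierarchical model
mu, tau = 0.0, 0.1   # group-level mean and (small) group-level scale
n_groups = 8

# Centered parametrization: draw theta directly from Normal(mu, tau).
# When tau is small, theta is tightly coupled to tau, producing the
# "funnel" geometry that tends to cause divergences in MCMC.
theta_centered = rng.normal(mu, tau, size=n_groups)

# Non-centered parametrization: draw a standard-normal "raw" variable
# and shift/scale it deterministically. Same prior on theta, but the
# sampled variable is now decoupled from mu and tau.
theta_raw = rng.normal(0.0, 1.0, size=n_groups)
theta_non_centered = mu + tau * theta_raw

print(theta_centered)
print(theta_non_centered)
```

The two arrays are draws from the same prior; the difference only matters once a sampler has to explore the joint posterior of theta, mu and tau together.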
As Maria also has experience with several PPLs like Stan, Edward2 and TensorFlow Probability, she'll tell us what she thinks good PPL design requires, and what the future of PPLs looks like to her.
Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work at https://bababrinkman.com/ !
Links from the show: