Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch!
Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work!
Visit our Patreon page to unlock exclusive Bayesian swag ;)
Takeaways:
Chapters:
08:44 Function Estimation and Bayesian Deep Learning
10:41 Understanding Deep Gaussian Processes
25:17 Choosing Between Deep GPs and Neural Networks
32:01 Interpretability and Practical Tools for GPs
43:52 Variational Methods in Gaussian Processes
54:44 Deep Neural Networks and Bayesian Inference
01:06:13 The Future of Bayesian Deep Learning
01:12:28 Advice for Aspiring Researchers
01:22:09 Tackling Global Issues with AI
Thank you to my Patrons for making this episode possible!
Yusuke Saito, Avi Bryant, Ero Carrera, Giuliano Cruz, James Wade, Tradd Salvo, William Benton, James Ahloy, Robin Taylor, Chad Scherrer, Zwelithini Tunyiswa, Bertrand Wilden, James Thompson, Stephen Oates, Gian Luca Di Tanna, Jack Wells, Matthew Maldonado, Ian Costley, Ally Salim, Larry Gill, Ian Moran, Paul Oreto, Colin Caprani, Colin Carroll, Nathaniel Burbank, Michael Osthege, Rémi Louf, Clive Edelsten, Henri Wallen, Hugo Botha, Vinh Nguyen, Marcin Elantkowski, Adam C. Smith, Will Kurt, Andrew Moskowitz, Hector Munoz, Marco Gorelli, Simon Kessell, Bradley Rode, Patrick Kelley, Rick Anderson, Casper de Bruin, Philippe Labonde, Michael Hankin, Cameron Smith, Tomáš Frýda, Ryan Wesslen, Andreas Netti, Riley King, Yoshiyuki Hamajima, Sven De Maeyer, Michael DeCrescenzo, Fergal M, Mason Yahr, Naoya Kanai, Aubrey Clayton, Jeannine Sue, Omri Har Shemesh, Scott Anthony Robson, Robert Yolken, Or Duek, Pavel Dusek, Paul Cox, Andreas Kröpelin, Raphaël R, Nicolas Rode, Gabriel Stechschulte, Arkady, Kurt TeKolste, Marcus Nölke, Maggi Mackintosh, Grant Pezzolesi, Joshua Meehl, Javier Sabio, Kristian Higgins, Matt Rosinski, Bart Trudeau, Luis Fonseca, Dante Gates, Matt Niccolls, Maksim Kuznecov, Michael Thomas, Luke Gorrie, Cory Kiser, Julio, Edvin Saveljev, Frederick Ayala, Jeffrey Powell, Gal Kampel, Adan Romero, Will Geary, Blake Walters, Jonathan Morgan, Francesco Madrisotti, Ivy Huang, Gary Clarke, Robert Flannery, Rasmus Hindström, Stefan, Corey Abshire, Mike Loncaric, David McCormick, Ronald Legere, Sergio Dolia, Michael Cao, Yiğit Aşık, Suyog Chandramouli and Adam Tilmar Jakobsen.
Links from the show:
Transcript
This is an automatic transcript and may therefore contain errors. Please get in touch if you're willing to correct them.
Let me show you how to be a good Bayesian.
Speaker B:How do we bring rigorous uncertainty into modern machine learning without losing scalability?
Speaker B:Today I am joined by Maurizio Filippone, Associate Professor at KAUST and leader of the Bayesian Deep Learning Group, whose path from physics to machine learning has been guided by a single obsession: function estimation done the Bayesian way. We dive into the frontier where GPs meet deep learning: deep Gaussian processes, Bayesian neural networks trained with stochastic gradients, and pragmatic tools like Monte Carlo dropout for uncertainty quantification.
Speaker B:Along the way we tackle trade offs between interpretability and flexibility, when to reach for a GP versus a neural net, and how Bayesian ideas improve optimization, experimental design and even generative models.
Speaker B:Finally, we look ahead to the future where uncertainty isn't an afterthought, but a first class citizen of AI, integrated, efficient and indispensable.
Speaker A:Let me show you how to be a good Bayesian and change your predictions after taking information in.
Speaker A:And if you're thinking I'll be less than amazing, let's adjust expectations.
Speaker A:What's a Bayesian?
Speaker A:It's someone who cares about evidence.
Speaker C:Welcome to Learning Bayesian Statistics, a podcast about Bayesian inference, the methods, the projects and the people who make it possible.
Speaker C:I'm your host, Alex Andorra.
Speaker C:You can follow me on Twitter @alex_andorra, like the country. For any info about the show, learnbayesstats.com is Laplace to be: show notes, becoming a corporate sponsor, unlocking Bayesian merch, supporting the show on Patreon, everything is in there.
Speaker C:That's learnbayesstats.com. If you're interested in one-on-one mentorship, online courses or statistical consulting, feel free to reach out and book a call at topmate.io/alex_andorra. See you around, folks, and best Bayesian wishes to you all.
Speaker C:And if today's discussion sparked ideas for your business, well, our team at PyMC Labs can help bring them to life.
Speaker C:Check us out at pymc-labs.com.
Speaker B:Hello my dear Bayesians, just a quick word to remind you that I'm running my first ever live workshop, and it's going to be a live cohort.
Speaker B:We're going to kick this off with athletics, and we're going to do hierarchical models in PyMC and Bambi on November 5 and 6, and in two sessions you will build and interpret a working multilevel model with posterior checks and stakeholder-ready visuals.
Speaker B:It's going to be short, live, and code-first.
Speaker B:Thanks to athletics, we're going to have pre-authenticated GCP VMs so you can model without setup frictions.
Speaker B:And that means that if you want to come learn live with me and also join the Discord that we have with the Learning Bayesian Statistics patrons,
Speaker B:Well, now is the time to join.
Speaker B:We're going to learn with sports analytics examples and few other examples.
Speaker B:Of course we're pretty much at capacity but there are still a few spots left.
Speaker B:So I would love to see you in a few weeks November 5th and 6th.
Speaker B:All the details are in the show notes and well, if you have any questions, feel free to reach out.
Speaker B:Otherwise I'll see you very soon.
Speaker B:Thank you folks.
Speaker B:And now let's talk about what a deep Gaussian process is with Maurizio Filippone.
Speaker B:Maurizio Filippone, benvenuto to Learning Bayesian Statistics.
Speaker E:Well, thank you so much, Alex.
Speaker E:Thank you so much.
Speaker E:Great, great to be here.
Speaker B:Thanks a lot again to Hans for putting us in contact.
Speaker B:You're like the, the podcast matchmaker of LBs.
Speaker D:Because we already had some colleague of yours actually, Maurizio.
Speaker B:I'll put the link in the show notes; we had Håvard Rue and Janet van Niekerk.
Speaker B:They were in episode 136.
Speaker B:So I'll put that into the show notes.
Speaker B:That was a really, really fun episode.
Speaker B:We talked about everything Bayesian inference at scale, everything about INLA.
Speaker B:So lots of good nuggets in this episode, folks.
Speaker B:We talked about penalized complexity priors, which are available out of the box in the R package.
Speaker B:I guess they will be in the Python package.
Speaker B:If you're interested in INLA, in Bayesian inference at scale, this is definitely an exciting time.
Speaker B:I think it's very good for us to tackle the idea that Bayesian methods are not able to scale, and we'll keep doing that today, I guess, with you, Maurizio.
Speaker B:We'll talk about GPs, about deep learning, all the very fun stuff you do.
Speaker B:But first, let's start with your origin story.
Speaker B:Can you tell us what you're doing nowadays?
Speaker B:And also how did you end up working on that?
Speaker B:You know, because you're an Italian in Saudi Arabia.
Speaker B:How did that happen?
Speaker E:Yes, that's a long journey.
Speaker E:So first of all, I would like to thank you for the great service you're doing for the community.
Speaker E:I think this is very important and I'm really happy to be here and to be part of this long list of great speakers and big, important guests that you had before me.
Speaker E:So I started with a master's in physics, so that's where I started in Italy.
Speaker E:I got interested in dynamical systems at the time and I took a course on neural networks.
Speaker E:This is many years ago, where we were trying to understand whether we could predict time series using neural networks without knowledge of the physics.
Speaker E:So even if I was studying physics, we were trying to avoid having to know physics to make these predictions.
Speaker E:And it turns out that there's some nice mathematics behind the theory of dynamical systems allowing you to do this even for chaotic systems, so systems that are relatively simple to write in terms of differential equations, but where the trajectories evolve in a seemingly random way.
Speaker E:But actually it's not random, it's just that the characteristics of the differential equation are such that there is this emergence of chaos.
Speaker E:You can still predict these time series really well.
Speaker E:And then I got interested in machine learning.
Speaker E:And at the time people were not believing that this would be a smart move because it was, you know, more than 20 years ago.
Speaker E:And I believed that this would be something interesting to pursue.
Speaker E:And then I started a PhD in computer science and that led me to move into the uk.
Speaker E:So one of my reviewers was Mark Girolami at Glasgow at the time.
Speaker E:So I got interested in exploring the uk.
Speaker E:So I first did a postdoc in Sheffield and then eventually I moved to Glasgow with Mark Girolami, and then from there we moved to UCL for a year while I was doing a postdoc, because he got a Chair in Statistics in the statistics department at UCL.
Speaker E:Then I got a lectureship back in Glasgow.
Speaker E:And after a few years in computer science at Glasgow, I decided to move to France.
Speaker E:There was a big opportunity to develop machine learning at scale in this institute and build something new there.
Speaker E:So it was an exciting opportunity.
Speaker E:And then after eight years, in which I successfully built something there, I decided to explore something new, with this opportunity here at KAUST.
Speaker E:I knew a lot of great people working here, so I decided to give it a shot.
Speaker E:And now I'm here.
Speaker E:And in this journey I started from time series prediction, going through clustering, anomaly detection, and eventually now we're working on various applications in.
Speaker E:Well, I've worked in some applications in neuroscience, fraud detection, industrial applications of various kinds, and now here there is stronger focus on environmental sciences.
Speaker E:And, yeah, it's really exciting.
Speaker D:Yeah, yeah, I agree.
Speaker B:And before diving to the technical details, can you also give us an idea of what your group's main goals and research themes are?
Speaker B:Because you lead the Bayesian Deep Learning group there at KAUST.
Speaker E:Yeah.
Speaker E:So in terms of themes, I think There is one big theme that is central to everything, which is function estimation.
Speaker E:So a lot of the things I do every day, a lot of the things that most people do in machine learning every day and statistics, I think it's really function estimation.
Speaker E:So I started working on kernel methods back when I was doing my master's and PhD, and then eventually, through the postdocs, we started working on these probabilistic nonparametric models, so Gaussian processes.
Speaker E:So Gaussian processes.
Speaker E:And then eventually deep learning started to become quite popular and very powerful.
Speaker E:So naturally it felt like we had to think about these extensions to deeper models.
Speaker E:And so the natural thing for me was to take a Gaussian process and make it deep.
Speaker E:Right.
Speaker E:And so we started studying these deep Gaussian processes, which had already been proposed a few years back, but of course we started thinking about approximations to make them scalable and so on.
Speaker E:And then eventually today I don't have any, let's say preference or I don't see a distinct line between deep Gaussian processes and Bayesian deep neural networks.
Speaker E:In the end, you can sort of view deep Gaussian processes as a special case of a Bayesian neural network.
Speaker E:So for me now, a lot of the techniques that we've always used to do scalable inference for GPs, we sort of port them to Bayesian deep learning.
Speaker E:And yeah, and so this is an exciting space because there is so much development going on and we're part of this.
Speaker E:So it's really great.
Speaker B:Yeah, yeah, that's super fun.
Speaker D:And yeah, I'm glad that you already established the connection there is between Bayesian neural networks and Gaussian processes.
Speaker B:Like in the end, everything is a Gaussian process.
Speaker B:And so I'm curious if you can define what a deep Gaussian process is, because I think my audience has a good idea of what a Bayesian neural network is.
Speaker B:And I've had, especially recently, Vincent Fortuin talk about that on the show.
Speaker B:So I'll put that also in the show notes.
Speaker B:So these Bayesian deep learning I think people are familiar with.
Speaker B:Can you tell us what a deep Gaussian process is?
Speaker B:Because I think people see what a Gaussian process is, but what makes it a deep one?
Speaker E:Great episode, the one with Vincent, by the way.
Speaker E:I checked it out.
Speaker E:Thank you.
Speaker E:Because I guess he would say a lot of things that I would probably say also in my episode.
Speaker E:So it was great to see it.
Speaker E:So, yeah.
Speaker E:So a Gaussian process, there's many ways which you can see it.
Speaker E:The easiest way is probably to start from a linear model.
Speaker E:I think I really like the construction from a linear model.
Speaker E:So if we start from a linear model and we make it Bayesian, so we put a prior on the parameters, then we have analytical forms for the posterior, the predictions, Everything is nice and Gaussian.
Speaker E:And so now one nice thing we can do is to start thinking about linear regression, but now with basis functions.
Speaker E:So we start introducing linear combinations not of just the covariates or features, if you want to call them that, but you have a transformation.
Speaker E:Let's say sine and cosine could be trigonometric functions of any kind, could be polynomials.
Speaker E:And it turns out that you can use kernel tricks to be able to say what the predictive distribution is going to be for this; the model is still linear in the parameters.
Speaker E:But now what we can do is to take the number of basis functions to infinity, so we can make an infinitely large polynomial.
Speaker E:And now the number of parameters will be infinite.
Speaker E:But what we can do is to use this so called kernel trick to actually express everything in terms of scalar products among this mapping of inputs to this polynomial.
Speaker E:And so if you do that, then what you can do is to, instead of working with polynomials or these basis functions, now you can define a so called kernel function, which is the one that takes inputs features and it spits out a scalar product of these induced polynomials in this very large dimension, infinite dimensional space.
Speaker E:So this kind of trick allows you to just then work with something which is infinitely powerful in a way, because it's infinitely flexible in a way that you have an infinite number of parameters now.
Speaker E:But the great thing is that if you have only n observations, all you need to do is to care about what happens for these n observations.
Speaker E:And so you can construct this covariance matrix and you know, it can do.
Speaker E:And everything is Gaussian again, it's very nice.
Speaker E:So the first time you generate a function from a Gaussian process, it's beautiful, because you get these nice functions that look beautiful, and it's just a multivariate normal, really.
Speaker E:And that's all it is.
Speaker E:So I still remember the first time I generated a function from a GP, because it was a Eureka moment where you realize how simple and beautiful this is.
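For listeners who want to recreate that Eureka moment, here is a minimal NumPy sketch (the grid, kernel values and seed are arbitrary choices, not from the episode): a GP prior draw is literally one sample from a multivariate normal whose covariance is a kernel evaluated on a grid of inputs.

```python
import numpy as np

# A GP prior draw is just one sample from a multivariate normal whose
# covariance is a kernel evaluated on a grid of inputs.
rng = np.random.default_rng(0)

x = np.linspace(0, 10, 200)                   # input grid
lengthscale, amplitude = 1.5, 1.0             # kernel hyperparameters (arbitrary)

# RBF / squared-exponential covariance matrix
sqdist = (x[:, None] - x[None, :]) ** 2
K = amplitude**2 * np.exp(-0.5 * sqdist / lengthscale**2)
K += 1e-8 * np.eye(len(x))                    # jitter for numerical stability

# Three draws; each row is one random function evaluated on the grid
f = rng.multivariate_normal(np.zeros(len(x)), K, size=3)
```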
Speaker E:So then you can think that now this represents a distribution over functions.
Speaker E:So if you draw from this gp, you obtain samples that are functions.
Speaker E:And now what you can do is to say, well, what if I take this function now?
Speaker E:And instead of just observing this function alone, I just put it inside as an input to another Gaussian process.
Speaker E:So in a gp, you have inputs which are your input data, where you have observations.
Speaker E:So now you're mapping into functions, and then this function can become now the input to another gp, for example, you know, and then you can even say, okay, let's take these inputs and map them not just to a univariate Gaussian process, where we have just one function, but maybe we can map it into 10 functions, and then these 10 functions become the input to a new Gaussian process.
Speaker E:And so this would be a one layer, deep Gaussian process.
Speaker E:Right.
Speaker E:So you have now one layer, which is first hidden functions that then enter as input to another Gaussian process.
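As a rough illustration of the composition Maurizio describes (again just a sketch with arbitrary settings), you can take one GP prior draw of the inputs and feed its values in as the inputs of a second GP:

```python
import numpy as np

rng = np.random.default_rng(1)

def rbf(a, b, lengthscale=1.0):
    """Squared-exponential kernel between two 1-D arrays of inputs."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / lengthscale**2)

def gp_draw(inputs, lengthscale=1.0):
    """One prior draw from a zero-mean GP evaluated at `inputs`."""
    K = rbf(inputs, inputs, lengthscale) + 1e-8 * np.eye(len(inputs))
    return rng.multivariate_normal(np.zeros(len(inputs)), K)

x = np.linspace(0, 10, 300)

# Layer 1: a hidden function of the original inputs
h = gp_draw(x, lengthscale=2.0)

# Layer 2: a GP whose *inputs* are the hidden function values.
# The composition f(x) = g(h(x)) is non-stationary in x even though
# each layer on its own is a plain, stationary GP.
f = gp_draw(h, lengthscale=0.5)
```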
Speaker E:What's the advantage of this?
Speaker E:Why do we do this?
Speaker E:Well, you know, with Gaussian process, that's.
Speaker D:Going to be my question.
Speaker E:Yeah, so with Gaussian processes, the characteristics that you observe for the functions that you generate are determined by the choice of the covariance function.
Speaker E:So if you take a covariance function which is RBF, you're going to have infinitely smooth functions that you generate.
Speaker E:And the way these functions are going to look, the length scale of these functions and the amplitude, are going to be determined by the parameters that you put in the covariance function.
Speaker E:And of course, there might be problems where you have non-stationarity.
Speaker E:So in a part of the space, functions should be nice and smooth, and in other parts of the space, maybe you want more flexibility.
Speaker E:And then a Gaussian process with a standard covariance function cannot achieve that.
Speaker E:And so in order to increase flexibility, you either spend time designing kernels that actually can do crazy things, which is possible, but relatively hard, because now you have a lot of choices.
Speaker E:You can combine kernels in multiple ways, and if you have a space of possible kernels you want to choose from, combining them becomes a combinatorial problem.
Speaker E:So you may say, instead, let's just compose functions.
Speaker E:And composition is very powerful.
Speaker E:And this is why deep learning works, because in deep learning, you essentially have function compositions.
Speaker E:And so even if you compose simple things, the result is something very complicated.
Speaker E:And you can try it yourself.
Speaker E:You know, take a sine function and put it into another sine function.
Speaker E:If you play around with the parameters, you can get things that oscillate in a crazy way.
Speaker E:And this is very simple, but very powerful.
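A tiny version of that try-it-yourself experiment (the constants are arbitrary):

```python
import numpy as np

x = np.linspace(0, 2 * np.pi, 500)

simple = np.sin(3 * x)                # a plain sine: perfectly regular
composed = np.sin(5 * np.sin(3 * x))  # a sine fed into another sine:
                                      # same building blocks, far richer wiggles
```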
Speaker E:And so the idea of deep Gaussian process is exactly this, to try to enrich the kind of class of functions you can obtain by composing functions, composing Gaussian processes.
Speaker E:And of course now the marginals in a Gaussian process, all the marginals are nice and Gaussian.
Speaker E:If you compose, these marginals become non Gaussian.
Speaker E:And this is really getting to the point where you start thinking, well, why should we then restrict ourselves to composing processes that are Gaussian?
Speaker E:Maybe we can do something else.
Speaker E:And then, you know, maybe thinking about other ways in which you can be flexible in the way you parameterize these complicated conditional distributions.
Speaker B:Okay.
Speaker B:Yeah, yeah.
Speaker B:Damn, this is super fun.
Speaker B:So it sounds to me like Fourier decomposition on steroids, basically.
Speaker B:So it's like decomposing everything through these basis functions and plugging everything into each other.
Speaker B:So like, you know, like these matryoshkas of Gaussian processes, basically.
Speaker B:So, yeah, and I can definitely see the power of that.
Speaker B:It's like, yeah, it's, it's like having very deep neural networks, basically.
Speaker B:So I see, I definitely see the connection.
Speaker B:Why that would be super helpful.
Speaker B:And that helps, I'm guessing that helps uncover very complex nonlinear patterns that are very hard to express in a functional form.
Speaker B:That functional form would be.
Speaker B:Well, you have to choose the kernels and sometimes, as you were saying, the out of the box kernels can't express the complexity you have in the data.
Speaker B:And then having basically the machine discover the kernels by itself is much easier.
Speaker E:Yeah, and it's really also about the marginals.
Speaker E:If you believe that your marginals can be Gaussian and you're happy with that, then it's all fine.
Speaker E:You can do kernel design.
Speaker E:You can spend a bit of time trying to find a good kernel that gives you good fit to the data, good modeling, good uncertainties.
Speaker E:But then there's still going to be this constraint in a way that you're working with the Gaussian process.
Speaker E:You know, in the end, marginally, everything is Gaussian.
Speaker E:You may not want that in certain applications where maybe the distributions are very skewed and other things, you know, and then maybe the skewness also is position dependent, input dependent, you know, so this non stationarity also, again, you can encode it in certain kernels, but it's just so much easier to compose, I mean, from the principle of just mathematical composition.
Speaker E:Then of course, computationally, how to handle this.
Speaker E:This is another pair of hands.
Speaker B:Yeah, yeah, no, exactly.
Speaker B:I mean, you're trading basically something that's more comfortable for the user for something that's much harder to compute for the computer.
Speaker B:But yeah, like in the end that also can be something that is more transferable.
Speaker B:Because, unless you're a deep expert in Gaussian processes, coming up with your own kernels each time you need to work on a project is very time consuming.
Speaker B:So it can actually be worth your time to turn to the deep Gaussian process framework, throw computing power at it and, you know, go your merry way working on something in the meantime while the computer samples.
Speaker E:That definitely makes sense.
Speaker E:But again, the deep aspect carries other design choices.
Speaker E:Now you have to choose how many layers, what's the dimensionality of each layer.
Speaker E:And then there is this other problem of now what kind of inference you choose, which definitely has an effect.
Speaker E:So we've done some studies on this, trying to compare a little bit, various approaches.
Speaker E:We did this a few years ago now, because I think we started working on this right after TensorFlow came out.
Speaker E:So this was around that time, and we did our deep GP with a certain kind of approximation that is not very popular.
Speaker E:I mean, the community seems to have agreed that, you know, inducing points methods are very powerful to do approximations.
Speaker E:And you know, I've also done some work on that with some great people, particularly James Hensman, who has developed GPflow with some other great guys.
Speaker E:But random features is what you mentioned before, the Fourier transform on steroids.
Speaker E:I mean, the idea is really for certain classes of kernels, you can do some sort of expansions and sort of linearize the Gaussian process.
Speaker E:So before I was talking about going from a linear model to something which is infinite number of basis functions.
Speaker E:And now the idea is just truncate this number of basis functions.
Speaker E:You can do it in various ways.
Speaker E:There is a randomized version that we do when we do these random features and then you sort of truncate.
Speaker E:And so now, instead of working with this, you turn a Gaussian process into a linear model with a large number of basis functions.
Speaker E:And then linear models are nice to work with.
Speaker E:And then if you compose them, then that's when you get the deep Gaussian process.
Speaker E:Essentially you get a deep neural network with some stochasticity in the layers and that's all there is to it.
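To make the random-feature idea concrete, here is a hedged sketch for the RBF kernel: sample frequencies from the kernel's spectral density, build cosine features, and the GP becomes (approximately) a Bayesian linear model in those features. The feature count and lengthscale below are arbitrary, and this is a generic random Fourier features construction rather than the exact implementation from the paper discussed here.

```python
import numpy as np

rng = np.random.default_rng(2)

n, num_features, lengthscale = 500, 256, 1.0
X = np.linspace(-5, 5, n).reshape(n, 1)

# Random Fourier features for the RBF kernel: sample frequencies from the
# kernel's spectral density (a Gaussian for the RBF case) plus random phases.
omega = rng.normal(scale=1.0 / lengthscale, size=(1, num_features))
phase = rng.uniform(0, 2 * np.pi, size=num_features)
Phi = np.sqrt(2.0 / num_features) * np.cos(X @ omega + phase)   # (n, num_features)

# The GP is now (approximately) a Bayesian linear model in these features:
# f(x) ~= Phi(x) @ w with w ~ N(0, I), so one prior draw is simply
w = rng.normal(size=num_features)
f = Phi @ w

# Phi @ Phi.T approximates the RBF kernel matrix; stacking such stochastic
# linear layers is essentially the deep GP construction described above.
```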
Speaker E:And so when we did this, we implemented it in TensorFlow because it was the new thing and it was very scalable.
Speaker E:We took some competitors, and we were really fast at converging to good solutions and getting good results, you know.
Speaker E:And we have an implementation out there in TensorFlow; unfortunately, we should now maybe port it to PyTorch, which has become what we work with more.
Speaker E:Yeah, but.
Speaker F:Hugo Bowne-Anderson here, data and AI scientist, consultant and educator.
Speaker F:I'm a friend of Alex and I was on episode 122 of Learn Bayesian Statistics talking about learning and teaching in the age of AI.
Speaker F:If you're building with LLMs and AI and especially if you've hit that wall where your prototype works sometimes but isn't reliable enough to ship, I've got something for you.
Speaker F:I'm teaching a four week course called Building AI Applications.
Speaker F:We focus on the actual software development lifecycle: agents, evals, logging, RAG, fine-tuning, iterating, debugging and more.
Speaker F:I teach it with Stefan Krawczyk, who is currently working on AI agent infrastructure at Salesforce.
Speaker F:Students get over $1,200 in cloud credits from Modal, Pydantic Logfire, Chroma Cloud and more to build with immediately.
Speaker F:We're excited to offer you all 25% off; the link is in the show notes.
Speaker F:You can also go to the bit.ly link, LBS friends.
Speaker F:Class starts November 3rd.
Speaker F:Would love to see you there.
Speaker B:No, for sure.
Speaker B:But I mean, yeah, let's definitely link to that TensorFlow implementation that you have, because I'm very big on pointing people towards how they can apply that in practice, and basically making the bridge between frontier research, as you're doing, and helping people implement that in their own modeling workflows and problems.
Speaker B:So let's definitely do that.
Speaker B:And yeah, I was actually going to ask you.
Speaker B:Okay, so that's, that's a great explanation and thank you so much for laying that out.
Speaker D:So.
Speaker B:So clearly I think it's awesome to start from the linear representation, as you were saying.
Speaker B:And basically, yeah, going to the very big deep GPs, which are in a way easier for me to represent to myself, because, you know, it's like taking the limit to infinity.
Speaker B:It's easier, I find, to work with than deep neural networks, for instance.
Speaker B:But yes, like, can you give us a lay of the land?
Speaker B:What's the field about right now?
Speaker B:Let's start with the practicality of it.
Speaker B:What would you recommend for people?
Speaker B:In which cases would these deep GPs be useful?
Speaker B:That's the first question; and second, why wouldn't they just use deep neural networks instead of deep GPs?
Speaker B:Let's start with that.
Speaker B:I have a lot of other questions, but let's start with that.
Speaker B:I think it's the most general one.
Speaker E:Yeah, yeah, I think, I mean it's a great question.
Speaker E:It's a, it's the Mother of all questions, really.
Speaker E:I mean, what kind of model should you choose for your data?
Speaker E:And I think there is a lot of great work that is going to happen soon, where we're maybe going to be able to give more definite answers to this.
Speaker E:I think we're starting to realize that this overparameterization that we see in deep learning is not so bad after all.
Speaker E:So for someone working in Bayesian statistics, I think we have this image in mind where we should find the right complexity for the data that we have.
Speaker E:So there's going to be a sweet spot of a model that is sort of parsimonious in looking at the data and not too parameterized.
Speaker E:But actually deep learning is telling us now a different story, which is not different from the story that we know for non parametric modeling, for Gaussian processes.
Speaker E:In Gaussian processes we push the number of parameters to infinity, right?
Speaker E:And in deep learning now we're sort of doing the same, but in a slightly mathematical, different form.
Speaker E:But where we're getting at is a point where actually this enormous complexity is in a way facilitating certain behaviors for these models, to be able to represent our data in a very simple way.
Speaker E:So the emergence of simplicity seems to be connected to this explosion in parameters.
Speaker E:And I think Andrew Wilson has done some amazing work on this; it's recently published, and I can link to that paper, which says deep learning is not so mysterious.
Speaker E:And it's something I was reading recently.
Speaker E:It's a beautiful read.
Speaker E:And I think to go back to your question, so today, what should we do?
Speaker E:Should we stick to a gp?
Speaker E:Should we go for a deep neural network?
Speaker E:I think for certain problems we may have some understanding of the kind of functions we want.
Speaker E:And so for those that if it's possible and easy to encode them with gps, I think it's definitely a good idea to go for that.
Speaker E:But there might be other problems where we have no idea, or maybe there is too many complications in the way we can think about the uncertainties and other things.
Speaker E:And so maybe just throwing a data driven, I mean if we have a lot of data, maybe we can say, okay, maybe we can go for an approach that is data hungry and then, you know, we can leverage that.
Speaker E:And deep learning seems to be like maybe a right choice there.
Speaker E:But of course now there is also a lot of stuff happening in other spaces, let's say in terms of foundational models.
Speaker E:So now there is this class, this breed of new things, new models that have been trained on A lot of data.
Speaker E:And then with some fine tuning on your small data, you can actually adapt them.
Speaker E:You know, this transfer learning actually works, and we've done it for time series.
Speaker E:So there's this paper again by Andrew Wilson on predicting time series with language models.
Speaker E:So you take ChatGPT and you make it predict, you discretize your time series, you tokenize and you give it to GPT and you look at the predictions, you invert the transformation so you get back scalar values.
Speaker E:And actually this seems to be working quite well.
Speaker E:So we tried now the multivariate versions of this, probabilistic multivariate versions and so on.
Speaker E:So we've done some work on that also.
Speaker E:But just to say that, I mean now this is something also kind of new that is happening, you know, because before maybe it was really hard to train these models at such a large scale.
Speaker E:But now if you train a model on the entire web with all the language, language is Markovian in a way.
Speaker E:So you know, these Markovian structures are sort of learned by these models.
Speaker E:And now if you feed these models with the stuff that is Markovian, it will try to make a prediction that is actually going to be reasonable.
Speaker E:And this is what we've seen in the literature.
Speaker E:And all these things are I think are going to change a lot of the way we think about designing a model for the data we have and how we do inference and all these things.
Speaker E:So as of today, I think it is maybe still relevant to think about it.
Speaker E:Okay, if I have a particular type of data, I know that it makes sense to use a Gaussian process because I want certain properties in the functions; a Matérn kernel, for example, gives us some sort of smoothness up to a certain degree, and it's easy to encode length scales of these functions for the prior on the functions.
Speaker E:And this is great.
Speaker E:For neural networks, this is very hard to do.
Speaker E:So we've done some work trying to map the two.
Speaker E:Right.
Speaker E:So we tried to say, okay, can we make a neural network imitate what Gaussian processes do, so that we gain the interpretability and the nice properties of a Gaussian process?
Speaker E:But then we also inherit the flexibility and the power of these deep learning models, so that they can really perform well and also give us sound uncertainty quantification.
Speaker B:Yeah, okay.
Speaker B:Yeah, yeah, yeah.
Speaker D:So many things to unpack here.
Speaker D:I love it.
Speaker D:This is super exciting to me, because I love working with these methods, but I also end up working with them a lot.
Speaker B:GPs, of course, as my listeners are tired of hearing.
Speaker E:But.
Speaker B:And yeah, everything you just said here is something that resonates, because what I love in GPs is their composability and their interpretability.
Speaker B:Especially because, thanks to that, you can impose prior structure on the functions you're going to get.
Speaker B:And I find this is extremely useful.
Speaker B:Yeah.
Speaker B:So two questions on that.
Speaker B:First, do you still have the interpretability of GPs if you have deep GPs?
Speaker B:Like, does the length scale still mean something?
Speaker B:And the amplitude, if you have, like, an exponentiated quadratic kernel?
Speaker B:And second question, what are the state of the art packages that you would recommend people check out right now, both in R and Python, or maybe just in Python, because deep learning is mostly Python centric.
Speaker B:But like, let's say I'm a listener.
Speaker B:I find what you're saying very interesting.
Speaker B:For my use case, I want to check out how to do deep GPs for my project and put that in competition with deep neural networks, and hopefully also in competition with Bayesian deep neural networks.
Speaker B:But we talked with Vincent about the fact that, for now, there is no real out-of-the-box package that helps you do that with Bayesian neural networks.
Speaker B:So yeah, then two big questions.
Speaker D:Sorry.
Speaker D:But like, I think it's going to be super interesting.
Speaker E:Yeah, well, I think so.
Speaker E:In terms of code, for GPs, I think that GPflow is probably one of the most accessible ones.
Speaker E:James is a good friend.
Speaker E:We were chatting at NeurIPS, and he said, okay, I'm going to do a software package for GPs in TensorFlow.
Speaker E:And this is something that he then developed over the years.
Speaker E:He moved to a startup company called Prowler for a few years.
Speaker E:He had a good team of developers helping him out.
Speaker E:So he did a really great job in that.
Speaker E:And I think GPflow is a really good starting point.
Speaker E:I think for some projects also with my students in the past, we relied on that.
Speaker E:And I think you can also put that in the show notes.
Speaker B:And James should come on the show.
Speaker B:It sounds like.
Speaker E:Absolutely, yeah.
Speaker B:Be a great guest for that.
Speaker D:Like a GP Flow episode.
Speaker E:Yeah.
Speaker E:He's also a great cook, by the way.
Speaker E:He invited me and Alex Matthews for dinner once in Sheffield for pasta.
Speaker E:And I thought, okay, you know, he's gonna make some normal pasta.
Speaker E:No, he made pasta from scratch.
Speaker D:Non Italian pasta.
Speaker D:Overcooked.
Speaker E:Yeah, I was very impressed.
Speaker E:He Did a fantastic job.
Speaker E:It was really nice.
Speaker B:Oh, damn, that's quite the endorsement.
Speaker D:Yeah, yeah, yeah, that's cool.
Speaker B:So then, no, like he needs to come on the show, but like for a live show, then I need to do a live show in Sheffield, it sounds like.
Speaker E:Yes, and, and so.
Speaker E:So, yeah, so I think there are also deep GPs you can easily do there.
Speaker B:With GPflow?
Speaker E:With GPflow, yes.
Speaker E:And I think the type of approximation you can use there is based on most of James's work, which is based on inducing points rather than random features, which is another way in which you could approximate.
Speaker E:So for inducing points, instead of expressing the full process with N data points, you select M inducing points, as we call them, that allow you to express the entire process while having to do computations only with these M points.
Speaker E:So having to deal with matrices which are M by M, essentially.
Speaker E:So you have M cubed complexity rather than the N cubed that you would have with a full GP.
Speaker E:And these work really well.
Speaker E:And you have a nice beautiful variational sort of treatment for these models.
Speaker E:You can optimize the position of the inducing inputs and everything is really nice and beautiful.
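For reference, this is roughly what a sparse variational GP with M inducing points looks like in GPflow 2.x. The toy data are made up, and exact class or method names may differ across versions, so treat this as a sketch and check the GPflow documentation rather than taking it as canonical.

```python
import numpy as np
import gpflow

# Made-up data: 2,000 noisy observations of a sine
rng = np.random.default_rng(3)
X = rng.uniform(0, 10, size=(2000, 1))
Y = np.sin(X) + 0.1 * rng.normal(size=X.shape)

# M inducing points instead of N data points: computations scale with M, not N
M = 50
Z = np.linspace(0, 10, M).reshape(M, 1)

model = gpflow.models.SVGP(
    kernel=gpflow.kernels.SquaredExponential(),
    likelihood=gpflow.likelihoods.Gaussian(),
    inducing_variable=Z,        # inducing input locations, optimized below
    num_data=len(X),
)

# Optimize the variational objective (the ELBO); minibatching over (X, Y)
# is also possible, this full-batch call just keeps the sketch short.
gpflow.optimizers.Scipy().minimize(
    model.training_loss_closure((X, Y)), model.trainable_variables
)

mean, var = model.predict_f(np.linspace(0, 10, 100).reshape(-1, 1))
```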
Speaker E:There is a nice stream of papers by James.
Speaker E:I contributed to a couple of these, where we also did some mcmc and later on, also with my group, we did some full fledged MCMC where we also sample the inducing locations, which was something that people typically optimize.
Speaker E:But just to say, I think in GPflow you can start with a lot of great, you know, examples that can take you very far, as Vincent was saying.
Speaker E:You know, you had Vincent here in another episode.
Speaker E:He's right that it's a bit of a pain point, not having, let's say, I don't know, an accepted and widely used toolbox for Bayesian deep learning.
Speaker E:So I think that's something that we should work on as a community.
Speaker E:There are many events that we are trying to participate in, to get together, to reflect on what is the role of Bayes in the current state of AI.
Speaker E:So we had one in Dagstuhl last year, and we're going to have one in Abu Dhabi coming up soon at the end of this month.
Speaker E:And I think we should talk about this specifically.
Speaker E:How can we lead an initiative for co development?
Speaker E:But I think it's not easy, because each one of us, as professors, as academics, we have to serve certain priorities, which in our case are publications, and maybe in my case also engagement with applications here in the Kingdom, which is something very valued.
Speaker E:And so the effort of developing a software package I think goes a bit beyond that, right so there needs to be some nice conditions to be able to have a team of developers available to do something like that for a long time.
Speaker E:And I think that's a challenge, at least for us.
Speaker E:And people working in industry also have certain priorities coming from constraints from their company.
Speaker E:So I think that's a difficult one for everybody, but definitely very valuable.
Speaker E:Yeah, and there was another part of your question that I think I missed.
Speaker D:Yeah, yeah, yeah, we'll come back to that, don't worry.
Speaker B:Yeah, yeah.
Speaker B:Just to piggyback on what you were saying.
Speaker B:Yeah, for sure.
Speaker B:In industry, I would say it's mostly you need to tie that to a project you have at work.
Speaker B:Like if you need that for work, then that's definitely something that can make things happen very fast and much, much faster because then you can get some budget to finance an open source solution to that, to that problem, which will, which will make the development cycle much faster than if you have to do it internally alone.
Speaker B:So yeah, for sure.
Speaker B:But that's very good.
Speaker B:But it's very good that GPflow already has all this support for inducing points and for deep GPs.
Speaker B:I would say that PyMC also has very good GP support.
Speaker B:I use that all the time for my Bayesian GPs.
Speaker B:Not only the vanilla GPs, but the inducing point GPs too.
Speaker B:We have that in PyMC and pymc-extras, both for marginal likelihood GPs, so for normal likelihoods, and for latent GPs.
Speaker B:So if you have a non-normal likelihood.
Speaker B:And of course the HSGP approximation has been a real game changer for using GPs in the wild, and we have that in PyMC out of the box too.
Speaker B:The great thing here, compared to GPflow, I would say, is that you can compose that with other parts in your Bayesian model.
Speaker B:So it doesn't have to be a pure GP model; it can be combined with other random variables that you have in the model.
Speaker B:So you could have, like, a classic linear slope added to a GP as the baseline.
Speaker B:So this is very interesting too.
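As an illustration of that composability, here is a minimal sketch assuming a recent PyMC version, with made-up data and placeholder priors: a linear trend and an HSGP-approximated GP living in the same model.

```python
import numpy as np
import pymc as pm

# Made-up data: a linear trend plus a smooth wiggle
rng = np.random.default_rng(4)
x = np.linspace(0, 10, 150)
y = 0.5 * x + np.sin(2 * x) + 0.2 * rng.normal(size=x.size)

with pm.Model():
    # Plain regression part: intercept and slope
    intercept = pm.Normal("intercept", 0, 2)
    slope = pm.Normal("slope", 0, 1)

    # GP part via the Hilbert-space (HSGP) approximation
    ls = pm.Gamma("ls", alpha=2, beta=1)
    eta = pm.HalfNormal("eta", 1)
    cov = eta**2 * pm.gp.cov.ExpQuad(1, ls=ls)
    gp = pm.gp.HSGP(m=[30], c=1.5, cov_func=cov)
    f = gp.prior("f", X=x[:, None])

    # Compose the GP with the linear terms in one model
    mu = intercept + slope * x + f
    sigma = pm.HalfNormal("sigma", 0.5)
    pm.Normal("y", mu=mu, sigma=sigma, observed=y)

    idata = pm.sample()  # NUTS by default; other inference methods also work
```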
Speaker B:And you get the different inference methods that you get with, you know, a classic PPL.
Speaker B:So not only MCMC, but ADVI, Pathfinder, and soon INLA.
Speaker B:So yeah, this is great.
Speaker B:So I encourage people checking that out.
Speaker B:Definitely encourage people to check GPflow out.
Speaker B:I think this is, as you were saying, a great baseline, very useful, great API.
Speaker B:And we definitely, definitely need James on the show to dive deeper into that.
Speaker B:Because I really want to dive deeper into that, and I've never done a show about GPflow, so I'll keep that in mind.
Speaker B:We'll come back to the inference part afterwards.
Speaker B:But I asked you a question about the interpretation.
Speaker B:Do we keep the benefit of interpretability of the kernel parameters when we're using deep GPs, then?
Speaker E:Yeah.
Speaker E:Well, in the composition, obviously things become more obscure in a way because now a length scale parameter for the first GP is a length scale for functions that become hidden variables, latent variables for a new Gaussian process.
Speaker E:So I think it's possible to think a little bit about the implications of this.
Speaker E:But, you know, you can start thinking about maybe how many oscillations you may expect by doing certain, excuse me, certain length scales over a certain domain.
Speaker E:You know, you can start thinking, okay, you know, if I take derivatives, maybe I can start looking at how many zeros I may expect from this.
Speaker E:But you know, it becomes much harder, I think, the deeper you go.
Speaker E:And I think, of course, in the end there is a lot of other beautiful theory that tells you that if you start pushing now the number of Gaussian processes, so the dimensionality of the Gaussian process to infinity, then you go back to something which is again a Gaussian process.
Speaker E:There is nice work by Radford Neal on this.
Speaker E:And there's been a lot of follow-up work on that, showing that convolutional neural networks, when you take the number of filters to large values, become Gaussian processes and so on.
Speaker E:So central limit theorem there kicks in, in a way, and then a lot of these things become Gaussian again.
Speaker E:So I think maybe you may recover some interpretability again when you start pushing things to some limits, but then again in the output you get Gaussians.
Speaker E:So then you lose, in a way the flexibility that you wanted by introducing the composition.
Speaker E:So it's a trade off.
Speaker E:Right.
Speaker E:So how much you want to be flexible and how much you want to be interpretable.
Speaker E:I think.
Speaker B:Yeah, okay, yeah, that makes sense.
Speaker B:That makes a ton of sense.
Speaker B:So let's go back to the inference part.
Speaker B:Now, can you give us the lay of the land of the approximations and scalable GP methods?
Speaker B:Also, feel free to talk about Bayesian or non Bayesian deep neural networks.
Speaker B:You know, how can people sample from these models?
Speaker B:And if you can walk us through the most promising techniques.
Speaker E:Yeah, great.
Speaker E:Well, maybe I'll break up the answer into GPs first, and then we move on to deep neural networks.
Speaker E:I think for GPs there hasn't been much development in the last few years, I would say.
Speaker E:I mean, there are still papers submitted and accepted in the major conferences, but I think they're really a small fraction compared to anything else that is happening.
Speaker E:So I think a lot of people kind of settled now to some approximation methods and some inference methods.
Speaker E:Variational seems, I mean, remains one of the nice formulations to be able to treat these models when you start introducing inducing points.
Speaker E:So with inducing points it becomes kind of nice to work with these variational approximations.
Speaker E:There is great work by Michalis Titsias on this.
Speaker E:And so I think, I would say, variational methods for treating the latent variables in Gaussian processes are very predominant now, to be able to handle scalability and any likelihoods you want.
Speaker E:We've done some work on MCMC which also works quite well.
Speaker E:I spent a lot of time doing MCMC a long time ago when I was trying to sample parameters of the covariance along with latent variables.
Speaker E:So there's been nice work by Iain Murray, for example, and Ryan Adams; David MacKay himself has also done some work with these guys.
Speaker E:And at the time I was trying to do sampling.
Speaker E:And there is this problem of being a hierarchical model.
Speaker E:It introduces some complications.
Speaker E:You have hyperparameters, latent variables and data.
Speaker E:And because of this structure, sampling latent variables becomes quite tricky.
Speaker E:Sorry, sampling hyperparameters becomes quite tricky because they're tightly coupled to the latent variables.
Speaker E:So when you sample from the posterior of latent variables, you're conditioning on data, but also on the hyperparameters.
Speaker E:And so imagine you have a length scale parameter.
Speaker E:It means that you're sampling your latent functions to be compatible with the length scale you have.
Speaker E:And then if you sample the length scale, given the latent variables, the length scale is not going to change much, because the latent variables have a certain length scale which was informed by the length scale before.
Speaker E:So you have this very, very slow convergence process for this mcmc.
Speaker E:So you have to break it up in a way.
Speaker E:So there has been a lot of work on these ancillary parameterizations, non-centered parameterizations, as people call them, in many different ways.
Speaker E:And so you can start thinking about reparameterizing the Gaussian process: instead of viewing the latent variables as coming from a Gaussian distribution with covariance K, you say, okay, K I decompose into L L transpose, where L is the Cholesky factor.
Speaker E:And then you say, I write my latent functions F as L times nu, where nu now are variables that are standard normals.
Speaker E:And if you do that now, you kind of decouple a little bit in the prior, at least you decouple the dependence between the hyperparameters, which affect the Cholesky of the covariance, and nu, which is now independent variables.
Speaker E:And so now you can sample a bit more efficiently.
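A small NumPy sketch of that whitened, non-centered reparameterization (the grid and lengthscales below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(5)

def rbf(x, lengthscale):
    return np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / lengthscale**2)

x = np.linspace(0, 5, 100)

# Non-centered ("whitened") parameterization: sample nu ~ N(0, I) once, then
# map it through the Cholesky factor of K, so nu is a-priori independent of
# the hyperparameters (instead of f being tied directly to the lengthscale).
nu = rng.normal(size=x.size)

draws = {}
for lengthscale in (0.3, 1.0, 3.0):
    K = rbf(x, lengthscale) + 1e-8 * np.eye(x.size)
    L = np.linalg.cholesky(K)
    draws[lengthscale] = L @ nu   # same nu, different hyperparameter, different f

# In an MCMC scheme one samples (nu, lengthscale) instead of (f, lengthscale),
# which breaks the tight prior coupling described above.
```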
Speaker E:Then people came up with even better ways of doing this kind of decoupling.
Speaker E:I've done some work on pseudo-marginal Markov chain Monte Carlo, where you sort of use importance sampling or adaptive importance sampling to integrate out latent variables approximately.
Speaker E:So you can really sample much faster with faster convergence for the hyper parameters.
Speaker E:So MCMC for the hyper parameters is possible.
Speaker E:And I think now with the computing becoming more and more available and cheaper, I think this is something that is definitely something worth considering, especially because a lot of times people work on applications, especially for, you know, these expensive computer simulation problems where you have these simulations that run, you know, for, you know, hours, if not days, and then you have to fit a Gaussian process on these expensive observations to construct an emulator that then you use to sort of calibrate certain parameters of these computer models.
Speaker E:So these things are very expensive.
Speaker E:So mcmc, maybe it's not that expensive after all if you do that.
Speaker E:And for me, like, when I started doing this, I was working on some neuroimaging applications.
Speaker E:We were handling 68, I think, images from patients, and we're trying to do a classifier for Parkinson with this data.
Speaker E:And we said, you know, we want to do a good job in quantifying uncertainty in our predictions.
Speaker E:So we ran this MCMC for like a week.
Speaker E:And yeah, we got, you know, long chains, good convergence, and yeah, we just did it, you know.
Speaker E:So this is just what I wanted to say, maybe about gps.
Speaker E:So we have implementations for this MCMC also.
Speaker E:Now, for when you want to handle everything in a Bayesian way, you want to sample everything.
Speaker E:You want to sample inducing inputs, inducing variables, hyper parameters, everything.
Speaker E:And this was an AISTATS paper.
Speaker E:But also in GPflow, again, going back there, you see a lot of nice code that you can just use to optimize some of these parameters.
Speaker E:And I think in many applications this may work quite well.
Speaker B:Okay.
Speaker B:And so for this, GPflow is a very good option.
Speaker B:And the paper you talked about, did you implement that in GPflow, or is that a custom implementation in TensorFlow?
Speaker E:So, yeah, we started from GPflow as a code base.
Speaker E:Yes.
Speaker B:Okay.
Speaker E:Yes.
Speaker B:So if people want to replicate, for instance, your paper, they can do that.
Speaker E:Yeah, yeah, but code is available.
Speaker E:They can download the code and.
Speaker B:Yes, yeah, yeah, yeah, yeah.
Speaker E:Also, then pretty much every paper we do, we try to also release code to make it reproducible.
Speaker B:Yeah, yeah, yeah, yeah.
Speaker B:So that's awesome.
Speaker B:But that's also great that it's reproducible with GP flow, because it's a package that's evolving all the time, that's curated, and then people can safely use that in an industrial production setting.
Speaker B:And that's.
Speaker B:That's extremely helpful.
Speaker B:And I find that also a very good piece of news, because that's also been my experience: you can actually do a lot of MCMC sampling with GPs, even with big data sets.
Speaker B:So, yeah, the bad priors that people have about it are usually not warranted when you actually try to do that.
Speaker B:Yeah.
Speaker E:So a few years ago, I gave a talk in Cambridge at one event there, and O'Hagan was there.
Speaker E:I was presenting these deep GPs with random features, and there was a plot that I didn't like so much when I gave the presentation.
Speaker E:So people were actually giving me.
Speaker E:Asking me questions about that that were not so clear.
Speaker E:So then while we were.
Speaker E:We went for lunch after my talk, and while we were in the queue, I just took my laptop and I just ran the code again to replicate that figure in a better way.
Speaker E:And I was showing this beautiful function.
Speaker E:So while we were queuing, you know, I showed this to Professor O'Hagan and he was very impressed that the code was running so fast.
Speaker E:And this was almost 10 years ago now, so.
Speaker B:Yeah, yeah.
Speaker B:And now we have even better MCMC samplers, we have better personal computers.
Speaker B:So, yeah, I've definitely run very big hierarchical GP models on my laptop, running in like 15 minutes.
Speaker B:So I definitely encourage people to try much more of that, because, I mean, if you see all these huge LLMs which are running, imagine that you can run a much more efficient GP model on your computer.
Speaker B:For this paper, actually, do you remember the size of the data set, to give people an idea?
Speaker E:So, yeah, I mean, we were running MNIST.
Speaker E:I think this was already almost 10 years ago.
Speaker E:We were running MNIST on a laptop for a couple of hours or something.
Speaker E:I don't know.
Speaker B:Okay, yeah, yeah.
Speaker B:So it's millions of data points?
Speaker E:Sorry, no, it's only 60,000.
Speaker E:But we also ran on MNIST 8 million, which is 8 million MNIST images, and again, we could run it on a laptop.
Speaker E:So.
Speaker B:Yeah, yeah, okay.
Speaker B:Yeah.
Speaker B:With GPflow?
Speaker E:No, this was our implementation of these deep GPs with random features.
Speaker B:Okay, but now, is that available in GPflow?
Speaker E:No, actually.
Speaker E:So GPflow focuses exclusively on these inducing points methods and not random features, at least as far as I know.
Speaker E:Maybe they.
Speaker E:I don't know if they've evolved that part, but as far as I know, that was not something there in their priorities.
Speaker E:So.
Speaker B:Yeah, okay.
Speaker B:Okay.
Speaker B:Yeah.
Speaker B:Actually, can you.
Speaker B:I don't think we made the clear distinction between inducing points and random features.
Speaker E:Sure, I can do that.
Speaker E:So with inducing points, you select a number of inputs, that you can then optimize afterwards if you want, and then introduce new random variables that allow you to express the full process as a function of only this small set of random variables.
Speaker E:With random features, instead, you think of an expansion of your model as an infinite number of basis functions, and then you truncate this expansion to a fixed number.
Speaker E:So for certain kernels, for example, the RBF kernel, these random features actually are random Fourier features.
Speaker E:So you can express the GP just as a combination of a weighted combination of sine and cosine with different frequencies sampled appropriately.
Speaker E:So in one way, in one case, you approximate in space.
Speaker E:So this would be the inducing points method.
Speaker E:And in the random feature, if you think about random Fourier features, you're doing some approximation of the spectrum of these processes, if that makes sense.
Speaker E:One is in space, the other one is in frequency.
Speaker B:Yeah, okay.
Speaker B:Yeah, the random features sounds a lot like HSGP actually.
Speaker E:Yeah, it probably has some connections.
Speaker E:Yeah.
Speaker B:Okay, interesting.
Speaker B:Okay, that's cool.
Speaker B:So if people want to use random features, this is going to be your implementation from the paper.
Speaker E:Yeah, we have that.
Speaker E:I mean, it's a bit old now; I think it was on TensorFlow, a very old version, so we should probably try to maintain it or maybe release a PyTorch version soon.
Speaker B:Okay, yeah.
Speaker B:And otherwise, for inducing points, try that in GPflow.
Speaker E:Yes, but then of course, there is other approximations.
Speaker E:So again, Andrew Wilson has done some work on this KISS-GP, which is a way to do scalable kernel approximations, which are pretty powerful.
Speaker E:So, yeah, there are different ways, you know, then.
Speaker B:Yeah, yeah, exactly.
Speaker B:And again, inducing points are in PyMC, and HSGP is an awesome approximation too.
Speaker B:You folks should give it a try if you want.
Speaker B:That's in PyMC also.
Speaker B:So yeah, no, definitely a lot of, a lot of great options.
Speaker D:So that's it for GPs; let's turn to deep neural networks.
Speaker D:Yeah, can you give us a lay of the land there?
Speaker E:So, well, I mean one of the main things about deep learning is the possibility to do mini batch.
Speaker E:So one of the great things about training a big neural network is that you just feed it small batches of data and then you keep updating the model using stochastic gradient optimization.
Speaker E:So what is the problem with doing something like this for inference?
Speaker E:So if we do a Bayesian neural network, we want to get a posterior over these parameters of the neural network.
Speaker E:We want to sample from this.
Speaker E:How do we do it?
Speaker E:Actually, it turns out that it is possible.
Speaker E:And so there is a beautiful paper by Max Welling and Yee Whye Teh on the stochastic gradient Langevin dynamics sampler.
Speaker E:And then there is a hybrid Monte Carlo, Hamiltonian Monte Carlo if you want, version using stochastic gradients, by Emily Fox and her group, which is also quite powerful.
Speaker E:And there is some nice theory around this.
Speaker E:Also we work a little bit on the theory as well in our group to try to understand a bit more about the properties.
Speaker E:And essentially there is a way to show that, even if you're using stochastic gradients, which are not exact, you can dampen the trajectories with some friction.
Speaker E:And then you can show that if you do things right, you can avoid having to compute the entire likelihood when you accept or reject.
Speaker E:And therefore you can really be scalable.
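To make the sampler concrete, here is a minimal NumPy sketch of stochastic gradient Langevin dynamics for a Bayesian logistic regression with a standard normal prior. The step size, batch size, and burn-in rule are made-up illustrative choices; in practice the theory asks for a decaying step size, which is omitted here for brevity.

```python
import numpy as np

def sgld_logistic_regression(X, y, epsilon=1e-4, n_iters=5000, batch_size=64, seed=0):
    """Stochastic gradient Langevin dynamics for Bayesian logistic regression
    with a N(0, I) prior on the weights (fixed step size for simplicity)."""
    rng = np.random.default_rng(seed)
    N, d = X.shape
    theta = np.zeros(d)
    samples = []
    for t in range(n_iters):
        idx = rng.choice(N, size=batch_size, replace=False)
        Xb, yb = X[idx], y[idx]
        # Mini-batch gradient of the log-likelihood, rescaled by N / batch_size.
        p = 1.0 / (1.0 + np.exp(-Xb @ theta))
        grad_loglik = (N / batch_size) * (Xb.T @ (yb - p))
        grad_logprior = -theta
        # Langevin update: half a gradient step plus Gaussian noise of variance epsilon.
        noise = rng.normal(scale=np.sqrt(epsilon), size=d)
        theta = theta + 0.5 * epsilon * (grad_logprior + grad_loglik) + noise
        if t > n_iters // 2:  # crude burn-in
            samples.append(theta.copy())
    return np.array(samples)

# Toy data, purely for illustration.
rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 5))
w_true = rng.normal(size=5)
y = (rng.uniform(size=1000) < 1.0 / (1.0 + np.exp(-X @ w_true))).astype(float)
posterior_samples = sgld_logistic_regression(X, y)
```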
Speaker E:And we tried this with pretty big models.
Speaker E:I mean, of course if we talk about LLMs, we're still very small.
Speaker E:But we've done this with the convolutional neural networks with my students some time ago.
Speaker E:We could sample easily models with a few tens of millions of parameters.
Speaker E:We were doing convergence checks, R hat statistics, all the stuff that you need to do when you sample to make sure that you're really sampling from the posterior.
Speaker E:Of course, because the parameter space is so large, the models are non-identifiable.
Speaker E:So we actually do the convergence checks on the predictions.
Speaker E:So on some sort of projection of the parameters onto something that we can actually meaningfully understand.
Speaker E:Because, you know, if you sample from multiple modes that represent the same kind of configurations, of course the Markov chains are very far away from each other when you do multiple chains, but actually you're sampling from the same configuration.
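One way to run that kind of check in practice is sketched below with ArviZ, computing R-hat on the predictions rather than on the raw, non-identifiable weights. The prediction array here is a hypothetical placeholder for whatever your sampled networks produce on a held-out set.

```python
import numpy as np
import arviz as az

# Hypothetical posterior predictive draws from 4 chains of a sampled network,
# evaluated on 100 held-out inputs: shape (chain, draw, test_point).
preds = np.random.default_rng(0).normal(size=(4, 500, 100))

# R-hat per test point, computed on the predictions themselves.
rhat_per_point = az.rhat(preds)
print(rhat_per_point)
print(float(rhat_per_point.to_array().max()))  # worst-case R-hat across test points
```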
Speaker E:Okay, yeah, so MCMC is possible.
Speaker E:Then there was a paper by Alex Graves, around 2011, on practical variational inference for neural networks.
Speaker E:It doesn't work so well, because people haven't spent enough time working on good priors.
Speaker E:That's something we addressed a bit in our own work, but then we tested it mostly with MCMC rather than variational inference.
Speaker E:And then a lot of people in the community are really excited about Laplace methods.
Speaker E:So Gaussian approximations with looking at the Hessian and so on.
Speaker E:But for deep learning, I don't know, this is maybe my outlier voice in the community, but I don't think that's the right way of doing things, because the posteriors are not Gaussian at all and we're in lots of dimensions.
Speaker E:There are a lot of redundancies in the parameter space, so this non-identifiability creates ridges in the parameter space where the likelihood is the same.
Speaker E:So I don't think that Gaussian approximation would do particularly well.
Speaker E:But of course it's a very popular way of doing things and the community is really pushing that a lot.
Speaker E:But I don't think that's the right way of doing things.
Speaker B:Okay, okay.
Speaker B:So what, to you, would be the right way of doing things?
Speaker B:Let's say listeners want to try deep learning models right now; again, the Bayesian version is not very easy, but let's say they want to try deep learning models.
Speaker B:What should they look at first?
Speaker B:Which packages, which methods, which inference methods?
Speaker E:I think the easiest thing, I mean, when I have students coming, maybe for a short project, you know, the first thing I tell them, you know, try Monte Carlo dropout.
Speaker E:It's a very simple thing.
Speaker E:I mean, I know that a lot of people would disagree with me, but it's a very practical way of doing things, and there are connections with variational inference.
Speaker E:So you retain some principle, let's say, although the posterior now is very degenerate, because you're just switching some weights off and on.
Speaker E:But it's a very intuitive way of doing things, very practical.
Speaker E:You can take pre-trained models and just introduce some dropout at test time, or maybe fine-tune first with dropout at training time and then do it at test time.
Speaker E:It's a beautiful idea.
Speaker E:It's very simple.
Speaker E:I think it's a perfectly valid way to start, at least to get some uncertainties and then, you know, what do you do with that?
Speaker E:Of course, depends on the problem you have, but I think it's a good starting point.
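Here is a minimal PyTorch sketch of Monte Carlo dropout at test time; the small classifier, dropout rate, and number of samples are made up for illustration.

```python
import torch
import torch.nn as nn

def enable_mc_dropout(model: nn.Module) -> None:
    """Keep dropout layers active at test time while the rest of the model
    stays in eval mode."""
    for module in model.modules():
        if isinstance(module, nn.Dropout):
            module.train()

@torch.no_grad()
def mc_dropout_predict(model: nn.Module, x: torch.Tensor, n_samples: int = 50):
    """Run several stochastic forward passes and return the mean prediction
    and a simple spread-based uncertainty estimate."""
    model.eval()
    enable_mc_dropout(model)
    preds = torch.stack([model(x) for _ in range(n_samples)])
    return preds.mean(dim=0), preds.std(dim=0)

# Hypothetical small classifier that already contains dropout.
model = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Dropout(p=0.2), nn.Linear(64, 3))
mean, std = mc_dropout_predict(model, torch.randn(8, 10))
```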
Speaker E:Otherwise I think variational has a good potential if you make the class of posterior distributions quite flexible.
Speaker E:And now we're seeing these diffusion models or other powerful generative models being used for variational inference.
Speaker E:I mean, this was the way normalizing flows were originally proposed, for variational inference, by Rezende and Mohamed, and then, you know, they were ported to plain density estimation.
Speaker E:And now we have diffusion models that do a wonderful job at density estimation, and people are starting to use them for posterior sampling.
Speaker E:And so I think having these flexible approximate posteriors could be a good way forward.
Speaker E:And I think we're going to see more and more of that.
Speaker E:Because if the class of distributions you can represent with your variational family is very large, you can really make the bound very tight.
Speaker E:So the variational objective really is going to give you the true marginal likelihood.
Speaker E:So eventually I think it would be nice to go in that direction.
Speaker E:Of course, for these huge models it's very challenging.
Speaker E:But yeah, there is a lot of great work now that people are doing on partial stochasticity.
Speaker E:So you may not need to be stochastic about the entire network, but just a few parameters in your model.
Speaker B:Yeah.
Speaker B:Okay.
Speaker B:And to do that, what's a great first bet? Like, are all of these methods available in PyTorch or TensorFlow, so that people can come up with their neural network model and use these inference engines?
Speaker B:Or is this too much of a frontier method so far?
Speaker E:So I think Monte Carlo dropout is really almost trivial.
Speaker E:You don't need any special skill to do it.
Speaker E:You know, you just take a model whose code is already there and you switch on the dropout layers at training and test time.
Speaker E:That's it.
Speaker E:And for variational inference, in terms of implementations, I think Pyro maybe has a lot of these things already embedded in the way they do things.
Speaker E:I've never used it myself.
Speaker E:I mean, we tend to develop a lot of code ourselves, because we have to break stuff and try stuff.
Speaker E:So we try to have code that we have under control ourselves.
Speaker E:So that's why I tend not to use too many packages myself.
Speaker E:But I guess Pyro has maybe a lot of things already sort of implemented for doing this.
Speaker E:Yeah.
Speaker B:Okay.
Speaker B:Okay.
Speaker B:So I'll put a link to the PyTorch and Pyro documentation in the show notes for this episode, and then people can give it a try.
Speaker B:But it's great that.
Speaker B:Yeah, from what you're saying, it sounds like it's pretty easy to implement for practitioners and to try these methods out.
Speaker E:I think these days, when I teach my class, what I do is I say: take an MNIST tutorial for deep learning and just turn it into a variational one.
Speaker E:What you need to do is add a few extra variables, and it's a good exercise.
Speaker E:People can usually do it relatively easily.
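As a rough sketch of that exercise, the "few extra variables" can look like the following in PyTorch: a mean-field Bayesian linear layer trained by maximizing the ELBO. The prior, the initializations, and the KL scaling by a dataset size of 60,000 are illustrative assumptions, not prescriptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BayesianLinear(nn.Module):
    """Mean-field variational linear layer: each weight gets a mean and a
    log-standard-deviation, sampled with the reparameterization trick."""
    def __init__(self, in_features, out_features):
        super().__init__()
        self.w_mu = nn.Parameter(torch.zeros(out_features, in_features))
        self.w_logsigma = nn.Parameter(torch.full((out_features, in_features), -3.0))
        self.b = nn.Parameter(torch.zeros(out_features))

    def forward(self, x):
        sigma = torch.exp(self.w_logsigma)
        w = self.w_mu + sigma * torch.randn_like(sigma)  # reparameterization trick
        return F.linear(x, w, self.b)

    def kl(self):
        # Closed-form KL(q(w) || N(0, I)), summed over weights.
        sigma = torch.exp(self.w_logsigma)
        return 0.5 * torch.sum(sigma**2 + self.w_mu**2 - 1.0 - 2.0 * self.w_logsigma)

# Negative ELBO = cross-entropy (one-sample expected negative log-likelihood)
# plus the KL term scaled by the dataset size, minimized with the usual optimizer.
layer = BayesianLinear(784, 10)
x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
loss = F.cross_entropy(layer(x), y) + layer.kl() / 60000
loss.backward()
```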
Speaker B:Yeah, and do you actually have... like, are your courses public?
Speaker B:Can we put something in the show notes so that people can study what you're teaching over there at KAUST? Maybe the course, the exercises?
Speaker E:Yeah, we record everything, but we keep it private for now.
Speaker E:I don't think we can open it easily.
Speaker E:Also, I record.
Speaker E:I mean, it's been 10 years now that I've been recording my courses, even when I was in France before, when I was in Glasgow.
Speaker E:But, yeah, they remain within, let's say, the usage for the students.
Speaker E:I think I put in the notes a link to a tutorial we gave on Gaussian processes.
Speaker E:I think there is another tutorial that I should probably also include there.
Speaker E:I'm not sure I put the link to the one we gave at IJCAI on Bayesian deep learning.
Speaker E:So we did a couple of tutorials, one on Gaussian processes, another one on Bayesian deep learning.
Speaker B:Yeah, let's definitely add that.
Speaker B:And I will add my own tutorial about GPs that I taught at PyData New York last year.
Speaker E:Awesome.
Speaker B:I did that with Chris Fonnesbeck.
Speaker B:He went into the different methods, the different algorithms you can use to fit GPs, mainly in PyMC, so in the Bayesian framework: vanilla GPs, inducing points, and HSGPs.
Speaker B:And the last half of the tutorial was myself going through an example, trying to infer player performance in soccer with GPs on three different timescales, days, months, and years, and pooling the GPs hierarchically across players.
Speaker B:So, while sharing a coherent structure.
Speaker B:It's a pretty advanced use case, and you'll see it fits very fast on a laptop, folks.
Speaker B:So yeah, I'll put the link to the GitHub repo, and the link to the YouTube video is at the beginning of the GitHub repo.
Speaker B:So yeah, let's put that in there, Maurizio.
Speaker B:I think it's going to be super, super interesting for people.
Speaker B:And something I'm also curious about is how do Bayesian ideas integrate into modern deep learning?
Speaker B:You know, especially in terms of uncertainty quantification.
Speaker B:You talked a bit about that earlier.
Speaker B:It's actually a good question right now: where does Bayes fit into this new AI, and especially generative AI, landscape?
Speaker B:I'm curious to hear your thoughts about that.
Speaker E:Yeah, so on the practical side, many times we tried to start thinking about how to put a prior over the parameters, and we quickly realized that it's very difficult to do because of the composition and everything.
Speaker E:So it makes a lot of sense to think about priors over functions that you can represent with your model.
Speaker E:So this is also something that Vincent talked about because he worked on this, we worked on this also in parallel.
Speaker E:And so the idea is really that if we start thinking in that direction, then I think it's much more powerful to think about the kind of functions you can represent.
Speaker E:And I think it goes a lot in the spirit of the things that we were discussing at the beginning of what kind of complexity would you allow for your functions?
Speaker E:So are you happy with functions that have certain degree of complexity?
Speaker E:And this idea of complexity is very profound because complexity is not just number of parameters, complexity is more about simplicity.
Speaker E:And Kolmogorov complexity in a way tells you a lot about that.
Speaker E:And here at KAUST, I'm interacting a lot with Professor Schmidhuber, who is here and is one of the greatest minds in AI, and he's been thinking about this stuff for a long time.
Speaker E:And whenever I get coffee with him I get a lecture on Kolmogorov complexity.
Speaker E:And so I've been thinking about this a lot myself now, and also again, Andrew Wilson has done some work on that, talking about these type of things.
Speaker E:And I think in the end we were making progress in a way in understanding how much stochasticity we need in the networks to be able to represent at least any distributions we want.
Speaker E:But then we have to disentangle that in a way from the Complexity of the functions that we can represent.
Speaker E:So there is these two aspects, I think, complexity of the functions and how crazy you want the uncertainties to be or the distributions that you can represent a priori before you're looking at any data.
Speaker E:And this is how you design a model, right?
Speaker E:And so there's this work by the group at Oxford, Tom Rainforth and Eric Nalisnick, who did this work on partial stochasticity, which I think is very fundamental, because it really gives you a practical way to say how many neurons in your neural network should be stochastic and how many you can just optimize.
Speaker E:And this gives you already guiding principle on how to think about these Bayesian neural networks in the future.
Speaker E:I think they're not very excited about this work.
Speaker E:When I talked with Tom, I saw him a couple of weeks ago in Denmark at a workshop and also at Dijkstu last year.
Speaker E:I was telling him like, Tom, this is great.
Speaker E:This is one of the best things that happened in our community in a long time.
Speaker E:And he was like, oh, come on, I don't think that this is so great.
Speaker E:He was downplaying a lot, this contribution, which I think instead is very important because imagine now if you can do MCMC on a much smaller dimensional space and still achieve the same representation power of a full blown stochastic neural network with millions and millions of parameters.
Speaker E:And instead maybe if your output is only 10 dimensional, you can get away with 10 neurons being stochastic.
Speaker E:This is very powerful, you know.
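In the same spirit, here is a small PyTorch sketch of a partially stochastic network, where only the last-layer weights get variational parameters and the feature extractor is deterministic and simply optimized. This is a toy illustration of the idea, not the construction from the paper, and all the sizes are made up.

```python
import torch
import torch.nn as nn

class PartiallyStochasticNet(nn.Module):
    """Deterministic feature extractor followed by a stochastic last layer."""
    def __init__(self, in_dim=784, hidden=256, out_dim=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # Variational parameters for the stochastic head only.
        self.head_mu = nn.Parameter(torch.zeros(out_dim, hidden))
        self.head_logsigma = nn.Parameter(torch.full((out_dim, hidden), -3.0))

    def forward(self, x):
        h = self.features(x)  # deterministic part, just optimized
        sigma = torch.exp(self.head_logsigma)
        w = self.head_mu + sigma * torch.randn_like(sigma)  # stochastic last layer
        return h @ w.t()

net = PartiallyStochasticNet()
logits = net(torch.randn(16, 784))
```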
Speaker E:And so I got excited about this stuff and started working on crazy things like GANs, which nobody looks at anymore, because GANs, generative adversarial networks, are out of fashion now.
Speaker E:But actually they're based on neural networks themselves, you know, and they are partially stochastic.
Speaker E:So it fits perfectly in the narrative of the kind of things I was looking at.
Speaker E:And I got sucked into this.
Speaker E:And it's been a pain because optimizing these models is extremely difficult.
Speaker E:But at least now we have an understanding of this in a Bayesian way.
Speaker E:And it's very nice, because we can now view not only GANs, but pretty much any generative model where you take a set of random variables and have a complicated neural network mapping them into something complicated, like a complicated p(x).
Speaker E:And this is the mother of all problems.
Speaker E:If you can estimate p(x), you can solve any problem you want.
Speaker E:And x can be, you know, labels and inputs if you have a supervised learning problem.
Speaker E:If you're doing unsupervised learning, it's just your inputs.
Speaker E:So if you can do this well, you can do a lot of things.
Speaker E:And so this forces us to think a lot about regularization, model complexity and all these things.
Speaker E:And I'm really excited about this and this is really what we're working on at the moment with my group.
Speaker B:Yeah, this is fascinating, and I agree, GANs are amazing.
Speaker B:I mean, it's a generative model, so of course I love it.
Speaker B:But I really love this idea of having two networks competing against each other.
Speaker B:It's super interesting.
Speaker B:And it can help you in cases of rare or sparse data, actually.
Speaker B:So it can be extremely, extremely powerful.
Speaker B:And I see that you put a video tutorial about GANs, precisely, and how they secretly...
Speaker E:Yeah, so.
Speaker E:So yeah, I was invited to give a presentation on this.
Speaker E:Yeah.
Speaker B:So yeah, we'll definitely check that out and encourage people to do that.
Speaker B:Thanks.
Speaker B:I see you put indeed a lot of lectures already in the show notes.
Speaker B:That's fantastic, for myself and for the listeners.
Speaker B:It's going to be a great episode for show notes also folks.
Speaker B:So definitely check them out and.
Speaker B:Well, I'm going to start playing us out, Maurizio, because I could keep talking with you for a long time; I'm really passionate about these topics and we work on very similar kinds of models, so that's awesome.
Speaker B:But I need to respect your bedtime; it's already late for you.
Speaker B:I'm curious, more generally, in the context of the current gen AI developments, where do you see Gaussian processes in Bayesian deep learning heading in the next few years?
Speaker B:And what advice do you give your young students, researchers, practitioners who want to dive deeper into Bayesian deep learning or deep learning in general?
Speaker E:Yeah, making predictions about what's going to happen is very difficult.
Speaker E:But I mean, I think a lot of this amortization through foundation models is happening really fast and I think we're not realizing how fast this is going.
Speaker E:And so now.
Speaker B:You mean amortized Bayesian inference, for instance?
Speaker E:Yes.
Speaker E:Amortized everything, you know: predictions, inference, everything.
Speaker E:And through these big models that have learned from other data and so on.
Speaker E:It's a very powerful idea.
Speaker E:You know, you learn from lots of data sets and then you get a new data set.
Speaker E:You know what to do.
Speaker E:Right.
Speaker E:In a way, that makes a lot of sense.
Speaker E:In terms of GPs, I think they still play a pretty powerful role.
Speaker E:I think there was a paper not long ago showing that, for Bayesian optimization, GPs still perform pretty well compared to Bayesian neural networks of all kinds.
Speaker E:So they still have a place there for Bayesian optimization, experimental design, incremental adaptive experimental design, and also for computer models, the calibration of computer models.
Speaker E:There is this classic paper by Kennedy and O'Hagan from 2001 on the calibration of computer models.
Speaker E:I think there are still a lot of design choices you can make about the GPs that somehow allow you to model, to emulate the code, with uncertainty that is meaningful.
Speaker E:I think O'Hagan has done a tremendous amount of work on eliciting priors for these computer models.
Speaker E:And you know, this is still very, very powerful and relevant, I think.
Speaker E:And this is going to stay for some time.
Speaker E:And I think GPs for spatio-temporal models, also thanks to people like Håvard Rue here at KAUST.
Speaker E:I mean, they're going to stay for a long time.
Speaker E:I remember when I met Håvard; he invited me for a keynote at one of these latent Gaussian models workshops, and they had 120 seats, I still remember.
Speaker E:And it sold out in like an hour; it was like a rock concert, you know.
Speaker E:And everybody wants to use this, because so many people have problems that involve some spatio-temporal data, and they want to do it fast; they want to try stuff out, change models, change assumptions.
Speaker E:And the only way to do this fast is to have something that does the inference fast and accurately.
Speaker E:And what they developed is just tailored for that and works brilliantly, you know.
Speaker E:So just to say that I think for these type of data, I think it's going to be pretty hard to beat Gaussian Markov random fields.
Speaker E:We tried a bit with neural networks to make the models more flexible, more non-stationary.
Speaker E:We've done some work on this.
Speaker E:But you know, still, I think there's the advantage of having something so fast and so plug-and-play, really.
Speaker E:I mean, you can just plug your data in, make a few assumptions, you know, about what you want, and then you just get the result.
Speaker E:That's very powerful.
Speaker B:Yeah, yeah, no, for sure.
Speaker B:And I'll put into the show notes, again, an episode I did with Marvin Schmitt about amortized Bayesian inference and the work they do on the BayesFlow package.
Speaker B:If you, Maurizio, have some links you want to add on amortized anything, especially practical Python packages people can use, definitely add them, please.
Speaker E:And in terms of the future of Bayesian deep learning, I think that's a much bigger question.
Speaker E:I think as a community, what we're trying to identify, I mean there have been some nice works and also Vincent was mentioning some nice works on various applications in healthcare, self driving cars.
Speaker E:But I think we're still missing the kind of application that goes in the news.
Speaker E:Something like a killer application, an AlphaGo-type thing, where people are going to talk about it on BBC News or something like that; something that is going to convince everybody, and perhaps ourselves, that what we're doing is actually very meaningful.
Speaker E:I think we rely a lot on other type of applications like computer vision problems because people work a lot on these or now LLMs have become popular.
Speaker E:So some of my friends and colleagues are actually showing that you can also make LLMs a bit Bayesian, with some Bayesian low-rank Laplace approximation, for example; so we're testing ourselves on these grounds.
Speaker E:But actually, ultimately, uncertainty is what matters for decision making.
Speaker E:And so I think ultimately this is the kind of ground where we have to compete, and try to evaluate ourselves and how well we can do with this.
Speaker E:And this is really also the difference between everybody talks about AI, but AI really is thinking about an agent that interacts with an environment, senses reasons and then acts on the environment.
Speaker E:And machine learning is the reasoning and then all this pipeline is AI.
Speaker E:And at the moment there is no real AI.
Speaker E:You know, like AI would mean that we have an agent that actually interacts with this and intervenes on the environment, you know, So I think I tried to talk about this a lot with my students when I give lectures about machine learning, statistics, AI, what is everything, you know.
Speaker E:And what was going on back then. So anyway, that will be material for another episode.
Speaker D:Yeah, yeah, yeah, no, for sure, for sure.
Speaker B:So, before I ask you the last two questions, do you have any advice for people who want to start working in that field, whether they are students, researchers, or practitioners?
Speaker E:Yeah, you probably have heard this a lot.
Speaker E:But I think working on the foundations is very important.
Speaker E:A deep understanding of the foundations is always what gives you unparalleled advantage because you really can think in a very profound way about certain problems and what kind of problems we want to solve at a larger scale.
Speaker E:You know, many times it boils down to, you know, have a deeper understanding of the fundamentals.
Speaker E:And many times for me, I find it very useful to go back to linear models, you know, whenever we develop new theory, new algorithms, new methods.
Speaker E:So we try to get some good grounding on linear models.
Speaker E:So what does it mean for a linear model to have this?
Speaker E:So we were studying now recently singular learning theory to try to explain some scaling laws for uncertainty.
Speaker E:People ask me all the time, I have so much data, why do I need to be Bayesian?
Speaker E:And now I can tell you we did work on the scaling laws.
Speaker E:We know when epistemic uncertainties become small as the number of data points increases.
Speaker E:And now for a ResNet-18, I can tell you that you need 10 billion images before this uncertainty only shows up in the second digit of your probabilities, something like that; just to give practical advice to people to understand these things.
Speaker E:So we were trying to study a theory behind this and we think that singular learning theory can give us some intuition about this.
Speaker E:And so Watanabe has done a lot of work on this and has a nice book.
Speaker E:And so we were looking at this quantity generalization error, which was very mysterious.
Speaker E:And so we sort of derived it for linear models and we understood what it means for real.
Speaker E:So many times this grounding on something that is tractable is really important, I think.
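As a tiny example of that kind of grounding, the sketch below computes the epistemic (posterior) variance of the slope in a conjugate 1D Bayesian linear regression, which shrinks roughly like 1/N as data accumulates. The toy data and priors are purely illustrative; this is not the speaker's scaling-law analysis.

```python
import numpy as np

def posterior_slope_variance(n, sigma_noise=1.0, prior_var=1.0, seed=0):
    """Posterior variance of w in y = w * x + noise, with a N(0, prior_var)
    prior on w and known noise standard deviation sigma_noise."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=n)
    precision = 1.0 / prior_var + (x @ x) / sigma_noise**2
    return 1.0 / precision

for n in [10, 100, 1_000, 10_000, 100_000]:
    print(n, posterior_slope_variance(n))  # epistemic variance shrinks roughly like 1/n
```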
Speaker D:Yeah, yeah, completely agree.
Speaker B:This is fascinating.
Speaker B:Yeah, we need to have you back on the show at some point, Maurizio, to talk about these other topics, because otherwise it's going to be a three-hour episode, and I'd be fine with that, but I have a plane to catch and you have a bed to be in.
Speaker B:So let's play us out.
Speaker B:Well, for the last two questions, if you have listened to the show, you know them. First one is: if you had unlimited time and resources, which problem would you try to solve?
Speaker E:Awesome question.
Speaker E:I love this question, and I would say nutrition is a huge, interesting problem for me; we see the statistics about the number of people with diabetes worldwide and it's insane.
Speaker E:Like, we're talking hundreds of millions; just in the US, or, you know, even in India, I think it's 200 million adults with diabetes.
Speaker E:This is serious stuff.
Speaker E:And, and I think now we have the tools to understand all this.
Speaker E:I mean, the food industry has done this experiment on all of us, right?
Speaker E:And now we see the effect.
Speaker E:So I think it's possible to draw some conclusions about all these things and understanding optimal health based on what we eat.
Speaker E:And I think there are some people doing this.
Speaker E:There is a famous guy that is spending millions on this.
Speaker E:And I think I would probably spend time and energy on this if I had unlimited resources, because it would need a lot of resources to go against the common wisdom, the food industry, government regulations, and so on.
Speaker E:But I think there's something definitely we can optimize and now we have more and more tools to measure something about ourselves.
Speaker E:So.
Speaker B:Yeah, no, completely agree.
Speaker B:And I think it's also related to this incredible ability, I mean weakness, that we have as humans, which is our ability to entertain ourselves to death, which is definitely not one of our best instincts.
Speaker B:On that, I have actually at least two episodes to recommend.
Speaker B:The latest one is the one just before yours, actually, Maurizio.
Speaker B:It's episode 143, with Christoph Bamberg, and he does research on exactly that: appetite, how it's related to cognitive processes, and how it's related to self-esteem and things like that.
Speaker B:And the second episode, which is in the show notes of episode 143, is the one I recorded with Eric Trexler, who is much more focused on weight management and exercise and how that relates to appetite; basically, the environment you're in is extremely important, to put it shortly.
Speaker B:So second question, Maurizio.
Speaker B:If you could have dinner with any great scientific mind, dead, alive or fictional, who would it be?
Speaker E:Another great question.
Speaker E:I think it's very easy to overthink this.
Speaker E:As an Italian, I would say Leonardo da Vinci; he was one of the greatest scientists, artists, architects, philosophers.
Speaker E:And so much ahead of his time.
Speaker E:And I think whenever you interact with people who are so much ahead of their time, you really see something new.
Speaker E:You really get so much inspiration.
Speaker E:It happened to me a few times.
Speaker E:One of the latest ones was when I interviewed here at KAUST.
Speaker E:I had a three-hour dinner with Jürgen Schmidhuber.
Speaker E:And I can tell you, that was an experience I will never forget.
Speaker E:It was a great three hours of talking about wonderful things and being challenged to think about things I had never thought about; and this is the kind of thing I think we need as scientists, to be challenged and get out of the comfort zone.
Speaker E:And I like doing that a lot, getting out of the comfort zone, you know.
Speaker B:Yeah, I can tell that from your work, for sure.
Speaker B:And I think that's something a lot of researchers have in common, because you have to be comfortable being uncomfortable; you're always at the frontier, so by definition you don't know the answers.
Speaker B:You don't even know if you'll get there.
Speaker B:So that's definitely something that's hard in any type of research you do.
Speaker B:And it's very awesome to have people like you in these kinds of jobs, because, well, you help us advance in all the domains you're touching.
Speaker B:Maurizio, thank you so much for being on this show.
Speaker B:I think it was a great one.
Speaker B:It's time to wrap up now, but we'll have you on the show next time you have a fun paper or code or package to share with us.
Speaker B:Thanks again to Hans, I think it's Hans Moncho, I may be butchering your name, sorry about that, but thank you so much for putting us in contact.
Speaker B:And Maurizio, thank you so much for taking the time and being on this show.
Speaker E:Well, thank you so much.
Speaker E:It's been a huge pleasure and yeah, I hope this has been interesting for your audience and for you and happy to be back on the show whenever you want.
Speaker E:You're doing a great service and thank you so much for that.
Speaker D:Yeah, definitely was super fun and thank you for your kind words.
Speaker D:Definitely appreciate it.
Speaker C:This has been another episode of Learning Bayesian Statistics.
Speaker C:Be sure to rate, review and follow the show on your favorite podcatcher, and visit learnbayesstats.com for more resources about today's topics, as well as access to more episodes to help you reach true Bayesian state of mind.
Speaker B:That's learnbayesstats.com.
Speaker C:Our theme music is Good Bayesian by Baba Brinkman, featuring MC Lars and Mega Ran.
Speaker C:Check out his awesome work at bababrinkman.com. I'm your host, Alex Andorra.
Speaker C:You can follow me on Twitter at alex_andorra, like the country.
Speaker C:You can support the show and unlock exclusive benefits by visiting patreon.com/learnbayesstats. Thank you so much for listening and for your support.
Speaker B:You're truly a good Bayesian.
Speaker A:Change your predictions after taking information in.
Speaker A:And if you're thinking I'll be less than amazing, let's adjust those expectations.
Speaker A:Let me show you how to be a good Bayesian.
Speaker A:Change calculations after taking fresh data in. Those predictions that your brain is making, let's get them on a solid foundation.