#135 Bayesian Calibration and Model Checking, with Teemu Säilynoja
Behavioral & Social Sciences • Episode 135 • 25th June 2025 • Learning Bayesian Statistics • Alexandre Andorra


Shownotes

Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch!

Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work!

Visit our Patreon page to unlock exclusive Bayesian swag ;)

Takeaways:

  • Teemu focuses on calibration assessments and predictive checking in Bayesian workflows.
  • Simulation-based calibration (SBC) checks that the model implementation and inference algorithm work together as expected.
  • SBC involves drawing realizations from the prior and generating prior predictive data.
  • Visual predictive checking is crucial for assessing model predictions.
  • Prior predictive checks should be done before looking at data.
  • Posterior SBC focuses on the area of parameter space most relevant to the data.
  • Challenges in SBC include inference time.
  • Visualizations complement numerical metrics in Bayesian modeling.
  • Amortized Bayesian inference benefits from SBC for quick posterior checks.
  • The calibration of Bayesian models is more intuitive than that of frequentist models.
  • Choosing the right visualization depends on data characteristics.
  • Using multiple visualization methods can reveal different insights.
  • Visualizations should be viewed as models of the data.
  • Goodness of fit tests can enhance visualization accuracy.
  • Uncertainty visualization is crucial but often overlooked.

Chapters:

09:53 Understanding Simulation-Based Calibration (SBC)

15:03 Practical Applications of SBC in Bayesian Modeling

22:19 Challenges in Developing Posterior SBC

29:41 The Role of SBC in Amortized Bayesian Inference

33:47 The Importance of Visual Predictive Checking

36:50 Predictive Checking and Model Fitting

38:08 The Importance of Visual Checks

40:54 Choosing Visualization Types

49:06 Visualizations as Models

55:02 Uncertainty Visualization in Bayesian Modeling

01:00:05 Future Trends in Probabilistic Modeling

Thank you to my Patrons for making this episode possible!

Yusuke Saito, Avi Bryant, Ero Carrera, Giuliano Cruz, Tim Gasser, James Wade, Tradd Salvo, William Benton, James Ahloy, Robin Taylor,, Chad Scherrer, Zwelithini Tunyiswa, Bertrand Wilden, James Thompson, Stephen Oates, Gian Luca Di Tanna, Jack Wells, Matthew Maldonado, Ian Costley, Ally Salim, Larry Gill, Ian Moran, Paul Oreto, Colin Caprani, Colin Carroll, Nathaniel Burbank, Michael Osthege, Rémi Louf, Clive Edelsten, Henri Wallen, Hugo Botha, Vinh Nguyen, Marcin Elantkowski, Adam C. Smith, Will Kurt, Andrew Moskowitz, Hector Munoz, Marco Gorelli, Simon Kessell, Bradley Rode, Patrick Kelley, Rick Anderson, Casper de Bruin, Philippe Labonde, Michael Hankin, Cameron Smith, Tomáš Frýda, Ryan Wesslen, Andreas Netti, Riley King, Yoshiyuki Hamajima, Sven De Maeyer, Michael DeCrescenzo, Fergal M, Mason Yahr, Naoya Kanai, Steven Rowland, Aubrey Clayton, Jeannine Sue, Omri Har Shemesh, Scott Anthony Robson, Robert Yolken, Or Duek, Pavel Dusek, Paul Cox, Andreas Kröpelin, Raphaël R, Nicolas Rode, Gabriel Stechschulte, Arkady, Kurt TeKolste, Gergely Juhasz, Marcus Nölke, Maggi Mackintosh, Grant Pezzolesi, Avram Aelony, Joshua Meehl, Javier Sabio, Kristian Higgins, Alex Jones, Gregorio Aguilar, Matt Rosinski, Bart Trudeau, Luis Fonseca, Dante Gates, Matt Niccolls, Maksim Kuznecov, Michael Thomas, Luke Gorrie, Cory Kiser, Julio, Edvin Saveljev, Frederick Ayala, Jeffrey Powell, Gal Kampel, Adan Romero, Will Geary, Blake Walters, Jonathan Morgan, Francesco Madrisotti, Ivy Huang, Gary Clarke, Robert Flannery, Rasmus Hindström, Stefan, Corey Abshire, Mike Loncaric, David McCormick, Ronald Legere, Sergio Dolia, Michael Cao, Yiğit Aşık and Suyog Chandramouli.

Links from the show:

Transcript

This is an automatic transcript and may therefore contain errors. Please get in touch if you're willing to correct them.

Transcripts

Speaker:

Today, I'm excited to have Teemu Säilynoja on the show, a doctoral researcher and data

scientist at Aalto University in Finland.

2

:

Teemu works within the Bayesian workflow research group led by none other than Aki Vehtari,

where he focuses on model validation through calibration assessments and predictive

3

:

checking.

4

:

In our conversation,

5

:

Teemu dives deep into his research on simulation-based calibration, which you may know under

the acronym SBC, and visual predictive checking.

6

:

He explains why these methods are essential tools for validating Bayesian models, how

visualizations complement numerical metrics, and common pitfalls to avoid in interpreting

7

:

these visuals.

8

:

We'll also explore his recent work on posterior SBC, a novel approach designed to ensure

models are calibrated specifically for datasets at hand, particularly useful when data

9

:

collection is expensive or limited.

10

:

A word of caution, you will hear some construction noise on my end for this episode.

11

:

This is really Murphy's Law in action.

12

:

I unfortunately had no control over this and I'm sorry about it.

13

:

But I still kept the episode because, well, Teemu had really great stuff for us, so I hope

that you will still be able to enjoy it.

14

:

This is Learning Bayesian Statistics, episode 135, recorded May 9, 2025.

15

:

Welcome to Learning Bayesian Statistics, a podcast about Bayesian inference, the methods, the

projects, and the people who make it possible.

16

:

I'm your host, Alex Andorra.

17

:

You can follow me on Twitter at alex_andorra,

18

:

like the country.

19

:

For any info about the show, LearnBayesStats.com is Laplace to be.

20

:

Show notes, becoming a corporate sponsor, unlocking Bayesian merch, supporting the show on

Patreon, everything is in there.

21

:

That's LearnBayesStats.com.

22

:

If you're interested in one-on-one mentorship, online courses, or statistical consulting,

feel free to reach out and book a call at topmate.io/alex_andorra.

23

:

See you around, folks.

24

:

and best Bayesian wishes to you all.

25

:

And if today's discussion sparked ideas for your business, well, our team at PyMC Labs can

help bring them to life.

26

:

Check us out at pymc-labs.com.

27

:

Teemu Säilynoja, welcome to Learning Bayesian Statistics.

28

:

Thank you.

29

:

Glad to be here.

30

:

Thanks for inviting me. Yeah, I'm super happy to have you here.

31

:

That's great.

32

:

Well, one more person from the Aalto University group is now on the show.

33

:

Yes.

34

:

You are really going to collect them all.

35

:

Yeah, exactly.

36

:

It's like Pokemons.

37

:

I guess I hope you guys evolve so that then I can invite you back on the show.

38

:

Welcome to the Learning Bayesian Statistics podcast.

39

:

Thank you.

40

:

I will edit that and make it like I was the one saying that.

41

:

So, Teemu, I have a lot of questions for you today because you do so many things.

42

:

uh But first, as usual, we're going to talk about your origin story.

43

:

So can you tell us what you're doing nowadays and how you ended up working on this?

44

:

Yeah, currently I'm...

45

:

Finishing up my doctoral thesis at Aalto University in Aki Vehtari's Bayesian workflow

research group.

46

:

uh And I've been focusing on calibration assessments and predictive checking in Bayesian

workflow.

47

:

So the group is kind of structured so that everyone is focusing on some aspect of the

Bayesian workflow.

48

:

And also earlier this week, I happily

49

:

joined PyMC Labs, so I'm very excited to see what that adventure is bringing to me.

50

:

Yeah, yeah, that's great addition to the team, obviously.

51

:

So obviously I'm not working there anymore, but I've seen you around.

52

:

So yeah, that's awesome.

53

:

I was happy to see that.

54

:

and I am mostly looking forward to see what magic you'll do first there.

55

:

And actually, how did you end up doing Bayesian stats?

56

:

Do you remember when and how you were first introduced to them?

57

:

Yeah.

58

:

My first introduction was during my master's studies.

59

:

I did my master's in...

60

:

mathematics at Helsinki University, and there was this course on advanced inverse problems,

and it just happened that year that it was held as Bayesian inversion. I think that was

61

:

the name of the course that year. We had a visiting professor, W.

62

:

Hellin, who works with spatial modeling and does

63

:

just a lot on the Bayesian side for that.

64

:

yeah.

65

:

But it was a very theoretical course.

66

:

Didn't do almost any, well, we didn't do anything computational until the course was over.

67

:

And then the course had a project work, where we then got to do some X-ray

tomography, actually.

68

:

Like we ourselves got to pick an object and went to a lab.

69

:

took the X-ray and got the raw data and then ah had to code up our own MCMC sampler in

MATLAB and uh come up with some, come up with the priors and get to know what was inside

70

:

the object.

71

:

So yeah, but then there was quite a break.

72

:

I continued my master's for a year and then went to industry, did some

73

:

data science in an ad tech company, and then some time later the opportunity arose for

doctoral studies in Aalto with Aki Vehtari.

74

:

So then that brought me back to Bayes.

75

:

yeah, yeah.

76

:

And actually, what drew you specifically to what you're doing nowadays, which is mainly,

as you were saying, calibration assessments and visual predictive checking for Bayesian

77

:

workflows?

78

:

Yeah.

79

:

Well, of course, it has a lot to do with also your supervisor, but I guess you kind of

gravitate towards where your interests intersect with the expertise of your...

80

:

of your supervisor.

81

:

ah Personally, I'm quite a visual thinker.

82

:

I basically need to have a blackboard by me or a whiteboard or whatever.

83

:

Something easy to draw on and then low threshold to also just wipe everything off if I'm

not happy with what I wrote there.

84

:

So that's where the visual predictive checking comes from.

85

:

ah I really enjoy like a well thought graph.

86

:

and how visualizations can aid in communication.

87

:

But then calibration assessment, so simulation-based calibration, which I'm sure we will

touch on more deeply.

88

:

it first came with this Talts et al.

89

:

paper, when I was working on this graphical test for uniformity and goodness of fit

90

:

paper back in 2020.

91

:

The original SBC paper, or the first paper to actually call it SBC, was under work, and there

you need a uniformity test and that's where the uniformity test that we were making kind

92

:

of met and found a use.

93

:

That was my first touch with calibration checking, so assessing the inference in Bayesian

modeling.

94

:

Nice.

95

:

That makes sense.

96

:

I think indeed it's a good time now to try and dive into a bit more of what you're doing.

97

:

Can you give us an overview of what simulation-based calibration, or

98

:

SBC, is and why it's so important in Bayesian modeling?

99

:

Yeah, yeah.

100

:

So simulation-based calibration, we like to call it simulation-based calibration checking

just to underline that we are not actually doing any calibration, not calibrating the

101

:

model, but checking if the model would be calibrated or if the inference and the model

implementation together would be calibrated.

102

:

kind of you could in a way think of it as a check for your model implementation and

inference algorithm together that they are working as you would expect.

103

:

What I mean by model implementation especially now with PPLs, probabilistic programming

languages, essentially you don't need to write your own

104

:

samplers anymore but you define your model in something that's a bit more accessible uh

and then the sampling algorithm hopefully is already ah there for you made by some other

105

:

brilliant minds.

106

:

So in simulation-based calibration we essentially check that the way you have

107

:

written your model is correct. The goal is to check that you haven't made mistakes in the

model implementation and that your inference is working. And how we approach this is

108

:

that we first draw realizations from the prior. Usually that's an easy

part: you somehow randomly draw parameter values from your prior, and then you generate

109

:

prior predictive data from those

110

:

parameter values, which is also usually an easy step.

111

:

If we step a bit further away and look at what we have, now that we have these parameter

values and some data, well, if you condition on the data, then

112

:

the distribution of these parameter values should just be your posterior.

113

:

So now we managed to essentially create a posterior sample without any model

fitting.

114

:

But then if we want to now compare if our inference is working properly, we actually run

the inference algorithm on this prior predictive sample and we again receive a posterior

115

:

from our MCMC samples or from our variational inference or whatever we use for having this

116

:

posterior approximation.

117

:

And now we can compare this to the original parameters that we drew from the prior.

118

:

And these should be from the same distribution conditional on the data.

119

:

We repeat this many times and essentially do some test for this if it holds that they are

from the same distribution and ah

120

:

What has been handy is just to rank the prior draw among this posterior sample that

corresponds to that particular prior draw and that particular prior

121

:

predictive sample.

122

:

And if our inference is working as we would expect and wanted to work, then this rank

should be uniformly distributed.

123

:

So that's where the uniformity test then comes.

124

:

The one that I was mentioning earlier.
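
To make the procedure Teemu just described concrete, here is a minimal sketch of one SBC loop on a toy normal-mean model. The model, sample sizes, and thinning are made up for illustration; this is not the code of any of the packages discussed later.

```python
import numpy as np
import pymc as pm

# Toy normal-mean model: theta ~ Normal(0, 1), y ~ Normal(theta, 1).
rng = np.random.default_rng(1)
n_sims, n_obs = 100, 30
ranks = []

for _ in range(n_sims):
    theta_true = rng.normal(0.0, 1.0)               # 1. draw theta from the prior
    y_sim = rng.normal(theta_true, 1.0, n_obs)      # 2. generate prior predictive data

    with pm.Model():                                # 3. refit the model to the simulated data
        theta = pm.Normal("theta", 0.0, 1.0)
        pm.Normal("y", theta, 1.0, observed=y_sim)
        idata = pm.sample(draws=200, chains=2, progressbar=False)

    post = idata.posterior["theta"].values.ravel()[::5]   # thin to reduce autocorrelation
    ranks.append(int((post < theta_true).sum()))    # 4. rank the prior draw among posterior draws

# If model implementation and inference work together as expected,
# `ranks` should be (close to) uniformly distributed on {0, ..., len(post)}.
```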

125

:

Yeah, and also...

126

:

Can you hear that?

127

:

I heard, yeah.

128

:

You have a very large bee in your office.

129

:

Yeah, exactly.

130

:

So thankfully that's not that.

131

:

But as Murphy's Law states...

132

:

m Anything that can go wrong will go wrong.

133

:

Well, it's uh exceptional because they are doing some remodeling of my building.

134

:

And today was like super calm.

135

:

Nothing, no noise at all.

136

:

And today, exactly as we started recording.

137

:

they started piercing the wall, like, here, on my balcony.

138

:

So, it's absolutely terrible, I apologize for that, but could not, absolutely could not

control that.

139

:

So what I will do though is I will make my questions as brief as possible, especially when

the noise is here.

140

:

So, thanks in advance for your patience, dude.

141

:

uh

142

:

So uh thanks.

143

:

That's a great uh presentation and summary of what SBC is.

144

:

um To make that clear, maybe two things.

145

:

One, um that can be done on the prior specification of the model, but then also on the

posterior um to see that the model is actually well calibrated.

146

:

And I think you recommend doing both.

147

:

And then you can go into that very new paper that you have out with other authors.

148

:

um

149

:

which is exactly about what posterior SBC is and how to do that.

150

:

And second, how can people do that?

151

:

Like concretely, listeners who just listened to you and were like, yeah, that sounds good.

152

:

That sounds great.

153

:

I definitely want to try that in my models.

154

:

If I'm using Stan or PyMC or any other PPL, how can I do that?

155

:

Prior SBC or posterior SBC or both?

156

:

Both.

157

:

Both.

158

:

Okay, everything.

159

:

oh

160

:

Well, for prior SBC, it's quite simple.

161

:

If you can code this prior generating, prior predictive generation yourself, then you can

just basically code it yourself.

162

:

But also for both PyMC and for Stan, there are already packages.

163

:

For Stan, there's a package called SBC by Angie Moon and

164

:

and Shinyoung Kim and Martin Modrák and myself, which essentially gives you a very good

framework for running SBC and also then assessing the uniformity.

165

:

also it gives you the framework for doing the actual computation, but also then assessing

the results.

166

:

And then for Python, there's a package called simuk by the ArviZ developers,

167

:

which also does the same at least for PyMC models and Bambi, and I think some other models

too, but I'm not super certain about that. But that's also there, ready: if you have a

168

:

PyMC model you can just give it to the package and it will run SBC

for you and give you some calibration assessment plots already.

169

:

This might take a while because you need to refit your model multiple times.

170

:

ah In the R package for Stan, you can do parallel inference very easily.

171

:

So essentially in SBC, you would be generating these prior predictive samples multiple times

to get an overall assessment for uniformity.

172

:

If you have a cluster or something at hand, you can just

173

:

put them there, have these simulation iterations essentially run as parallel as you want.

174

:

For posterior SBC, which I didn't even get to explain, but essentially in our new paper:

this standard way of SBC that I was now describing has a few

175

:

issues uh or drawbacks.

176

:

Essentially if you have very vague or weak priors and some non-linearities in your model,

you might have regions of the parameter space in the prior where the model inference

177

:

is somehow not working.

178

:

Like you have very pathological parts of your prior space which you would maybe not even

see in your real data.

179

:

Maybe you would, maybe you wouldn't, but

180

:

Anyway, so prior SBC might give you essentially a false positive saying that, well, in a

way false positive if you then would go and fit your model to data that would not produce

181

:

a posterior in this problematic area.

182

:

So we have been developing posterior SBC, where in its simplest form, you just replace the

prior

183

:

from prior SBC with the posterior, and you create posterior predictive samples.

184

:

So you essentially focus on the area of the parameter space that is of most interest to

you with your data set.

185

:

uh And if you're worried about um using all of your data and then creating more data, ah

you can only use part of your data and do essentially

186

:

some kind of cross-validation, so using some random partitions of your data.

187

:

But that's, in very short, the idea of posterior SBC.
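
For contrast with the prior SBC sketch above, here is a minimal, hedged sketch of the posterior SBC idea on the same toy model: draw from the posterior instead of the prior, simulate posterior predictive data, and refit on the combined, roughly double-sized dataset. This is an illustration, not the code from the paper.

```python
import numpy as np
import pymc as pm

rng = np.random.default_rng(2)
y_obs = rng.normal(0.7, 1.0, 60)                    # the observed dataset

def fit(y):
    # Same toy normal-mean model as above; returns flattened posterior draws of theta.
    with pm.Model():
        theta = pm.Normal("theta", 0.0, 1.0)
        pm.Normal("y", theta, 1.0, observed=y)
        idata = pm.sample(draws=200, chains=2, progressbar=False)
    return idata.posterior["theta"].values.ravel()

post = fit(y_obs)                                   # the one extra fit compared to prior SBC
ranks = []
for _ in range(100):
    theta_tilde = rng.choice(post)                            # draw from the posterior
    y_tilde = rng.normal(theta_tilde, 1.0, y_obs.size)        # posterior predictive data
    post_aug = fit(np.concatenate([y_obs, y_tilde]))          # refit on observed + simulated data
    ranks.append(int((post_aug[::5] < theta_tilde).sum()))
# As in prior SBC, calibrated inference gives uniformly distributed ranks.
```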

188

:

For posterior SBC, you need to do a little bit of extra coding around these packages.

189

:

I think simuk is not yet supporting posterior SBC, but I have had talks with Osvaldo Martin,

who is actually the...

190

:

developer of simuk, so that we could add a posterior SBC option to simuk also.

191

:

And in the SBC package you can run posterior SBC by defining your data generating

process so that you create the data not from prior predictive draws but from posterior

192

:

predictive draws.

193

:

ah

194

:

It's a bit of hand coding, but at least you are given quite clear pieces on where to make the

changes.

195

:

And in our paper I also have a github repository with examples on how to do this.

196

:

Yeah, and I put these two packages in the show notes, both for R and Python.

197

:

So if I understand correctly, posterior SBC is a bit more computationally intensive than

prior SBC.

198

:

Is that right?

199

:

Yes, but only by one model fit.

200

:

Yeah, because you need to have the posterior and then you draw posterior predictive

samples and then refit to that.

201

:

And of course, if you have a very large data set and you have your posterior and then you

create posterior predictive samples and while the posterior updating usually happens so

202

:

that you then just give this essentially double sized data set and take that.

203

:

if you're

204

:

model fitting is scaling badly with the size of your dataset, then it might be

computationally more expensive, or you need to not use your whole observed dataset.

205

:

But yes.

206

:

And so do you recommend always doing both prior and posterior SBC?

207

:

Not necessarily.

208

:

Prior SBC, if it's very hard to reason about your priors, like you have a very complex big

model and setting these priors is not straightforward.

209

:

um For example, you have a very large data set, so your data is anyway going to dominate

in your procedure and your prior is kind of...

210

:

not that impactful, then you could consider just running posterior SBC, and then

essentially focusing your computational time just on the region that you are actually

211

:

interested in.

212

:

Okay.

213

:

Yeah.

214

:

Yeah.

215

:

see.

216

:

That's, that's useful.

217

:

And like these kind of practical, practical advice.

218

:

And I'm also curious, what are some of the challenges you faced in developing posterior

SBC, and even prior SBC, and how did you overcome them?

219

:

Like when you came up with the methods?

220

:

Well,

221

:

I think one of these challenges is then, ah well, possibly the inference time.

222

:

Like, ah well, you are doing multiple model fits.

223

:

So ah sometimes, especially if you are trying to build a model and it's not necessarily

the final model.

224

:

So you're still also in the process of iterating with your models.

225

:

ah

226

:

stopping to run SBC might not be the optimal workflow.

227

:

ah So there you might look at running just very small, short number of simulations just to

get like an idea that okay, at least everything is somewhat okay.

228

:

Like you would detect some gross

229

:

gross problems quite quickly.

230

:

But also what we noticed that when you're only fitting your model once, it doesn't

necessarily impact you so much if the very start of the sampling is slow.

231

:

But then when you're doing 500 or 1000 refits of your model, yeah.

232

:

So for example,

233

:

being smart with how you initialize your MCMC chains might actually come and affect you

quite a lot.

234

:

in the paper for one example, I use Pathfinder to get initial states for the chains and

then only start MCMC.

235

:

So it kind of, requirements for computation ah kind of gave me challenges that I wouldn't

have.

236

:

otherwise considered but they were actually challenges that you would kind of have anyway

if you're fitting your model.
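
As a hedged illustration of that initialization trick (not the exact setup from the paper), this is roughly how it could look with CmdStanPy, assuming a recent release that provides the pathfinder() and create_inits() helpers; the model and data file names are placeholders.

```python
from cmdstanpy import CmdStanModel

# Sketch only: use a short Pathfinder run to pick initial values for the NUTS chains,
# instead of starting every refit from random points. "model.stan" and "data.json"
# are placeholders for your own model and data.
model = CmdStanModel(stan_file="model.stan")

pathfinder_fit = model.pathfinder(data="data.json")    # fast approximate posterior
inits = pathfinder_fit.create_inits(chains=4)          # one set of initial values per chain

fit = model.sample(data="data.json", chains=4, inits=inits)
```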

237

:

Yeah, yeah.

238

:

For sure.

239

:

I don't think there are that many more challenges anyway because of simulation-based

calibration.

240

:

I mean, the only challenge is you need to fit the model multiple times.

241

:

But that's the main bottleneck, right?

242

:

Because otherwise, if you have sampling problems, it's not because of SBC, it's because of

the model.

243

:

um So yeah, like I can guess that, yeah, if you need um some form of variational inference

to sample the model, then you'll need that also for SBC.

244

:

Yeah.

245

:

But here we were just in like kind of running a very short pathfinder chain to come up

with initial values for the MCMC.

246

:

So we were then anyway, in the end, fitting the model with MCMC.

247

:

And how, like, yeah, what's your experience doing that actually?

248

:

Because I know that's what Bob Carpenter intended when he developed um Pathfinder.

249

:

So basically to use Pathfinder as an initialization for Nuts.

250

:

In your experience, does that help a lot?

251

:

And in which circumstances?

252

:

This particular example was an ODE model where the priors, at first look, seem sensible.

253

:

So when you're setting the priors, you have reasons why you put the priors

you have there.

254

:

But what happens is that you quite often generate these um multimodal posteriors where

there's a very large

255

:

like a, by area, very large posterior mode and then a very small, very high peak in the

density.

256

:

That's actually the one that you would find more reasonable. This was an ODE model

for a Lotka-Volterra situation, so a predator-prey model, and the very large

257

:

mode was essentially just a posterior mode for where most of the variation in your data

was just measurement error.

258

:

So not necessarily the mode that you would want to explore so much.

259

:

So Pathfinder was very good at getting the chains to start from the mode that gives you

260

:

gives you a posterior that actually finds the seasonal trends and dynamics between the

species.

261

:

And also Aki Vehtari has another case study, this birthday case study, which is, if

you look at the cover of the Bayesian Data Analysis book, the third edition, there's a

262

:

picture from that case study and there also

263

:

They have a big GP model where they do also Pathfinder to get good initial values for the

chains.

264

:

Nice, yeah, this is super cool.

265

:

Yeah, I can guess how challenging the ODE sampling must have been.

266

:

And is that actually related to why SBC is also interesting in amortized Bayesian

inference?

267

:

um Can you maybe elaborate a bit on that and tell us even how that's important in these

settings?

268

:

Yeah, well, I'm not a big fan of... The reason why we really like SBC there is slightly

different.

269

:

We were lucky to have Marvin Schmitt, who your listeners might already know, visiting

Aalto University as he actually recently defended his thesis.

270

:

um very successfully.

271

:

He was co-supervised by Aki and Paul Bürkner.

272

:

So he was visiting Aalto and he's of course an expert on amortized Bayesian inference.

273

:

Yeah, in amortized Bayesian inference, essentially what SBC is good for there is that

you don't have, for amortized Bayesian inference, these

274

:

these so-called standard convergence checks that we have for MCMC, R-hats

and such.

275

:

how do you know the quality of your posterior for the data set that you then observe and

the...

276

:

When you've gone through all the trouble of training your posterior, for your neural

network to give you a...

277

:

posterior approximation.

278

:

Well, then you can run posterior SBC when you have a new data set.

279

:

When you get an observation for an amortized Bayesian, what do you call it, amortized

Bayesian model, something that you have online already ready to run inference.

280

:

And in amortized Bayesian inference, of course, the inference part is almost instant.

281

:

And also making predictions after you have this in inference is also almost instant.

282

:

So then you just, well, you have maybe 500 times something that's almost instant.

283

:

Still, it doesn't take too long.

284

:

So it's very fast to run SBC or posterior SBC.

285

:

So we kind of don't have the main drawback of SBC in amortized Bayesian inference.

286

:

And we are also missing some standard good checks for the quality of our posterior.

287

:

So here.

288

:

Posterior SBC is actually quite useful for checking if your model is calibrated with the new

data, or if this is perhaps an out-of-distribution observation and we don't have any guarantees

289

:

of the quality of the posterior then.

290

:

Also, prior SBC works with amortized Bayesian inference.

291

:

Before you see any data, you can check how your parameter recovery, for example, is

working.

292

:

Okay, yeah.

293

:

I mean, that makes sense.

294

:

Yeah.

295

:

been a lot.

296

:

No, no.

297

:

All what you're saying makes sense.

298

:

And actually, I think SBC definitely makes a ton of sense.

299

:

ah Sorry, but the noise again.

300

:

ah

301

:

Yeah, SBC makes a ton of sense in the amortization framework for sure, because once

you have trained the neural network, you have the

302

:

ability to just sample from the posterior distribution in a matter of seconds.

303

:

It's just for free so you know why not do SBC?

304

:

Yeah, you've already paid the cost so you can just reap the benefits.

305

:

Right, yeah, exactly.

306

:

um

307

:

So actually, also something I wanted to ask you about, because you said you were doing

that a lot.

308

:

And I certainly am also a visual learner and visual thinker, having a blackboard, a

notepad.

309

:

I always have a notepad with me, blackboard when I can.

310

:

it's really how I think also, when I need to understand some concepts, seeing the code,

seeing the formula is actually extremely helpful.

311

:

to me and so you do a lot of work that I definitely appreciate on visual predictive

checking.

312

:

And you've released recently a set of guidelines for visualizations, which I put in the

show notes.

313

:

I definitely encourage people uh to look into that because that's also my use that all the

time at my work.

314

:

So the work you do and also with Osvaldo, Osvaldo Martin works a lot on that with the new

version of ArviZ.

315

:

And he also has

316

:

He has this online book project about exploratory analysis of Bayesian

models, where he demonstrates all the cool things you can do with ArviZ once you have fit a

317

:

Bayesian model.

318

:

But first, can you tell us why visual checking is so crucial in Bayesian modeling

workflows?

319

:

Because there is, at least in the classical machine learning world, much

more of an emphasis on

320

:

statistical metrics than on the plots, and

321

:

and sometimes I've even encountered maybe a distrust of plots, because they are

visual, whereas metrics seem more objective because they are numbers.

322

:

You are not the one making the interpretation; you just have a number and some threshold.

323

:

Exactly, whereas with a plot... I've often encountered people who can be

shocked by that

324

:

By the fact that in the Bayesian framework we use plots a lot and that seems very

subjective to them.

325

:

Well, like if you're doing kind of by the book the workflow, you should be having a prior

predictive plot even before you look at your data at all.

326

:

kind of what might be very odd for a more kind of frequentist machine learning side.

327

:

Well,

328

:

Visualizations, my view on why they are so important, instead of just

having some kind of numerical assessments: well, a very good example is this

329

:

Anscombe's quartet, where you have essentially very different datasets all giving the same

summary statistics.

330

:

A very classical example where a visualization immediately gives you

331

:

something you were missing with numbers.

332

:

And then in kind of Bayesian modeling, visual predictive checking comes into two stages of

the workflow.

333

:

Like as I said, first with prior predictive checking.

334

:

So before looking at your data at all, you can do prior predictions and see what you are

actually predicting from the model and assessing your priors.

335

:

by through that.

336

:

And then once you have done fitting, then you can of course do posterior predictive checking,

comparing your posterior predictions to your data, which is sometimes frowned upon because

337

:

you're kind of double using the data ah in that you're not predicting out of sample.

338

:

But if that's very much a worry, you can do leave-one-out predictions quite easily.

339

:

These days you have

340

:

PSIS-LOO to give you, with one model fit, good approximations of what you would be

predicting with this leave-one-out predictive distribution.

341

:

And then also, at the same time, PSIS-LOO gives you an estimate of how good this

approximation is.

342

:

that's quite a low hanging fruit for

343

:

for predictive checking, if you are worried about predicting on an observation that you've

used in fitting your model.
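
A minimal sketch of these checking stages in Python, on a toy model; the model itself is invented for illustration, and the calls mirror the standard PyMC and ArviZ workflow.

```python
import arviz as az
import numpy as np
import pymc as pm

# Toy model, invented for illustration.
rng = np.random.default_rng(0)
y_obs = rng.normal(1.0, 2.0, size=200)

with pm.Model() as model:
    mu = pm.Normal("mu", 0.0, 5.0)
    sigma = pm.HalfNormal("sigma", 5.0)
    pm.Normal("y", mu, sigma, observed=y_obs)

    prior_pred = pm.sample_prior_predictive()            # before looking at the data
    idata = pm.sample()
    pm.compute_log_likelihood(idata)                     # needed for LOO / LOO-PIT
    idata.extend(pm.sample_posterior_predictive(idata))
    idata.extend(prior_pred)

az.plot_ppc(idata, group="prior")       # prior predictive check
az.plot_ppc(idata, group="posterior")   # posterior predictive check
az.plot_loo_pit(idata, y="y")           # approximate leave-one-out calibration (LOO-PIT)
az.loo(idata)                           # PSIS-LOO, with Pareto-k diagnostics of the approximation
```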

344

:

I think I don't remember the original question anymore, but yeah.

345

:

Those were at least the thoughts that the question was raising in me.

346

:

Yeah, yeah, yeah.

347

:

No, that's great.

348

:

that's, I think you...

349

:

You answered my question, which was basically, know, can you explain why visual checks are

important?

350

:

And basically your point is, well, they are important because they don't substitute for

metrics, but they complement them.

351

:

So, if you only look at metrics, you will miss things that

you won't miss with plots and vice versa.

352

:

Especially assessing your prior predictive distribution with just

353

:

metrics could be, well I've never tried that but that could be quite a challenging task

whereas visual predictive checking is quite...

354

:

Well, it's quite visual.

355

:

You see what you're doing.

356

:

Yeah, yeah, So I think it's less of a problem.

357

:

Like, once you're doing Bayesian stats, you have to do prior checks.

358

:

But I think it's less of a problem for new people in the Bayesian world because they don't

have any anchor.

359

:

Because by definition, they didn't have priors before.

360

:

So it's not that they come and they are used to doing something with the prior samples.

361

:

Where the switch in thinking might need to happen is when you have the posterior samples,

because then you have a whole zoo of metrics in the classic machine learning world,

362

:

which are like, it's a very rich field of the literature.

363

:

And so here people are more anchored and you might not always have, you know, one-to-one

uh

364

:

comparison of the metrics.

365

:

For instance, the calibration of a Bayesian model, to me, is much easier,

366

:

much more intuitive than the calibration of a frequentist model, because you don't have

the binning to do and so on.

367

:

And actually now we have the new calibration plot in ArviZ, in the new version of ArviZ

1.0.

368

:

In arviz-plots, there is this new calibration plot that I use all the time.

369

:

this is super useful.

370

:

And you can, since it's based on the Bayesian uh

371

:

ETIs or HDIs, well then you can actually interpret it as, well, 90 % of the data was in

the 90 % interval.

372

:

So this is well calibrated.
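
As a toy illustration of that reading (not the ArviZ implementation itself), the coverage statement can be checked directly from posterior predictive draws:

```python
import numpy as np

# y_rep: posterior predictive draws with shape (n_draws, n_obs); y: the observations.
# Fake numbers here, purely to show the computation.
rng = np.random.default_rng(0)
y = rng.normal(size=100)
y_rep = rng.normal(size=(2000, 100))

lower, upper = np.quantile(y_rep, [0.05, 0.95], axis=0)   # 90% central predictive interval
coverage = np.mean((y >= lower) & (y <= upper))
print(f"empirical coverage of the 90% interval: {coverage:.2f}")
# For a well-calibrated model this should be close to 0.90.
```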

373

:

So yeah, in a frequentist model, if I remember correctly, that's not really the

interpretation you can make of it.

374

:

So there is some stuff like that where you need to be careful how you explain it to

stakeholders.

375

:

Yeah.

376

:

And in those cases, maybe that's also kind of the

377

:

You mentioned the mistrust of visualizations.

378

:

If that's the case that your visualization is showing a 95 % interval and then it's not

actually what it's supposed to show or the interpretation is not the first one that you

379

:

would think of.

380

:

That kind of gives you, might make you mistrust visualizations a bit more.

381

:

That's true.

382

:

Yeah, that's a point.

383

:

Yeah, I like that.

384

:

um And actually, could you summarize your main recommendations on choosing visualization

types based on data characteristics and that?

385

:

Like that question is directly basically making you summarize the blog post of yours that

I've...

386

:

Well, it's not even a blog post.

387

:

I think it's a paper you've submitted uh that's still in review, uh to the best of my

knowledge.

388

:

So that's in the show notes.

389

:

uh

390

:

But yeah, feel free to tell us the summary of that.

391

:

You can also share your screen and share the paper if you want for people watching on

YouTube.

392

:

But yeah, basically, can you give us the rundown of that?

393

:

Yeah, yeah.

394

:

I don't have the paper right now, so I'm trying my best.

395

:

I think the summary in short is that we look at...

396

:

what people usually use for visualizations and where we find the most common kinds

of pitfalls or chances for issues.

397

:

And as you said, the recommendations are kind of based on what your data characteristics

would be.

398

:

ah If you're looking at continuous data, very often you would use kernel density, just

density plots, or maybe a histogram, which might be very fine.

399

:

ah

400

:

But if your data is bounded, usually the default density plot implementations might give

you a little bit of a...

401

:

They don't do very well with bounds, like strict bounds in the most common implementation.

402

:

There are packages; on the R side, there's ggdist, which automatically tries to detect if

your data is bounded and adjusts your KDE to actually do this

403

:

uh boundary correction.

404

:

But if you don't make a boundary correction and you are unlucky enough to use an

implementation that doesn't do that, your visualization is...

405

:

Well basically what you see is not what you get or what you have.

406

:

So your visualization is misleading you a bit.

407

:

The model you're fitting...

408

:

Or the model...

409

:

If you think of your visualization as a model of the data, to summarize what you're seeing,

410

:

what you're having.

411

:

That model is biased or miscalibrated in some aspects.
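
A small, made-up example of that kind of miscalibration: for strictly positive data, a default Gaussian-kernel KDE puts visible probability mass below the bound, and you can quantify it directly.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
y = rng.exponential(scale=1.0, size=500)        # strictly positive data

kde = stats.gaussian_kde(y)                     # default Gaussian-kernel KDE, no boundary correction
mass_below_zero = kde.integrate_box_1d(-np.inf, 0.0)
print(f"KDE probability mass below the bound: {mass_below_zero:.3f}")
# Any non-trivial mass below 0 means the plotted density is a miscalibrated model
# of this bounded data; a boundary-corrected KDE or a histogram with an edge at 0
# would represent it more truthfully.
```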

412

:

And then, so kind of, my most important recommendation would just be to think of your data a

bit and perhaps use, for example, two different visualization methods and

413

:

see if the conclusion you draw would be different.

414

:

Because, yeah.

415

:

uh thinking a bit more.

416

:

Then if you go to discrete data, discrete is a bit challenging.

417

:

You have rootograms which would be a good visualization for count data, especially if you

have a large range of counts.

418

:

If you have a very large range of counts, very often you can almost just use a

continuous visualization to give you a summary of the data.

419

:

But then if you have discrete data with small number of individual states, then most

visualization packages, especially for predictive checking, em because if you're just

420

:

looking at the data, then a bar graph is usually what you would use for just kind of a

summary of the discrete data if you're using just the 1D.

421

:

visualization.

422

:

But then once you're doing predictive checking the bar graph is not anymore ah very

useful.

423

:

The only information you gain is essentially whether your model is doing as well as or worse

than an intercept-only model,

424

:

which we saw in the paper ah also as an example.
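
For readers who haven't met rootograms, here is a rough sketch of a hanging rootogram for count data, with a simple Poisson fit standing in for the model's expected counts (illustrative only, not the paper's code):

```python
import matplotlib.pyplot as plt
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
y = rng.poisson(3.0, size=500)                          # observed counts
ks = np.arange(0, y.max() + 1)
observed = np.bincount(y, minlength=ks.size)
expected = stats.poisson.pmf(ks, mu=y.mean()) * y.size  # stand-in for model expectations

fig, ax = plt.subplots()
ax.bar(ks, -np.sqrt(observed), bottom=np.sqrt(expected), width=0.9)  # bars hang from sqrt(expected)
ax.plot(ks, np.sqrt(expected), color="C1", lw=2)
ax.axhline(0, color="k", lw=1)        # bars that miss the zero line show misfit at that count
ax.set_xlabel("count")
ax.set_ylabel("sqrt(frequency)")
plt.show()
```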

425

:

Yeah, this is a very good and practical paper.

426

:

uh Really the kind of paper I really love.

427

:

So thank you for doing that.

428

:

um That's super helpful.

429

:

And I definitely encourage people to take a look at it because...

430

:

It's hard to do it justice here on the podcast.

431

:

Yeah, it does actually look like a blog post, like you said, a bit.

432

:

Because it's done in Quarto, it's an HTML page.

433

:

It's for this Journal of Visualization and Interaction, which has a totally open review

process.

434

:

The review of the paper is a GitHub review.

435

:

Everything is happening in GitHub through issues.

436

:

So we thought that this is an excellent thing to pursue, this open review, and

kind of not to be chained to PDFs essentially, because especially for

437

:

visualizations and if you would have any interactions ah these days

438

:

It's quite rare to anyway print your paper or actually read a journal, like a paper

journal.

439

:

So why not use something a bit more feature rich.

440

:

Really, people don't read journals anymore.

441

:

That's weird.

442

:

but and actually something I recommend people to do is like printing because the paper is

organized um around different types of data.

443

:

Like, every type of data gets a new section.

444

:

So something I recommend people to do that I'm going to do at work actually, because it's

the kind of paper you want to have in your favorite tabs.

445

:

So, but even better, you can like print in A3 each section, you know, so that you have the

example of the plots.

446

:

And so that way you can have that on the walls of your office.

447

:

then, each time you work with, you know, ordinal data, boom, you have the poster right here

and you can use that.

448

:

Or normal data, boom, you have it here.

449

:

So, and I think this is like, at least for me who

450

:

really learn like that, that will be super helpful.

451

:

So I recommend people to do that.

452

:

I'm definitely going to do that.

453

:

So anyway, it's going to end up in a physical form.

454

:

Yeah, exactly.

455

:

Yeah.

456

:

Yeah.

457

:

And so like for your future papers, think about that.

458

:

Like, how will people consume the paper?

459

:

Yeah, I'll try to have a poster format also.

460

:

Yeah, exactly.

461

:

Actually, I think I remember you mentioning, em not here, but I think I've seen you maybe

write in this paper that visualizations are kind of like models themselves.

462

:

uh Can you explain this perspective and how does it help improve how you think about

predictive checking?

463

:

Yeah, yeah.

464

:

Well,

465

:

This is most clear when you're looking at the density plot because I would be very

surprised if it wouldn't be the case, but when you're making a density plot, you're

466

:

essentially fitting a KDE, a kernel density estimate with Gaussian kernels, running some

heuristic for deciding the bandwidth of the kernel, and then plotting the density of that

467

:

density estimate.

468

:

So you are actually

469

:

literally fitting a model to your data.

470

:

ah But also if you think of a histogram, you could think of this as a step function to

approximate your density.

471

:

Also, your data density is just a step function.

472

:

ah So in that sense, once you think of visualizations as models fit on your data, then you

have goodness-of-fit tests and you can actually assess whether this visualization is

473

:

representing my data truthfully, or whether there is some bias or something that's

missing.

474

:

For example in the case of boundedness or maybe you have a data set that's otherwise

continuous but you have some point masses or something like this and then KDE would not do

475

:

well with steps or point masses in your data set.

476

:

So then this is more, we give a recommendation in the paper that's more for kind of people

developing the packages for visualizations because you have quite lightweight checks for

477

:

goodness of fit.

478

:

So you could have when implementing a visualization, you could just have under the hood a

goodness of fit test and give the user a warning if there's something very bad.

479

:

Let them know that, hey,

480

:

We saw that you're trying to visualize something that it might be the case that your data

is actually bounded or discrete and you're using a continuous visualization.

481

:

So take this into account.

482

:

of proceed with caution.

483

:

Right.

484

:

Yeah.

485

:

Yeah.

486

:

I saw as well that it did that, for instance, in the new ArviZ, when you do some plots and

these are binary data, for instance.

487

:

I don't remember exactly which plot, but then it will give you a warning if it sees its

binary outcome data.

488

:

It will output a warning and tell you, maybe you want to...

489

:

We see you have binary outcomes.

490

:

Maybe you want to use the calibration plot instead of that one.

491

:

I think it's actually the calibration plot.

492

:

I don't remember, but this warning is there. Most likely it is.

493

:

Yeah.

494

:

We were lucky to have Osvaldo join the Bayesian workflow group in January.

495

:

So we've had discussions on ArviZ and bayesplot, which I'm developing also.

496

:

I've been contributing to bayesplot quite a lot recently.

497

:

Yeah, these are kind of being developed somewhat in parallel, but also

having discussion with each other.

498

:

Yeah, and what we've seen as kind of the most common mistake, so in Aalto we have this

annual Bayesian data analysis course for I think roughly three and a half hundred, like

499

:

350 students every year start the course.

500

:

So it's quite a large course and then we have a project work for the course where the

students go through Bayesian workflow.

501

:

and give a presentation of their data analysis essentially.

502

:

And then the default posterior predictive check that you currently get, which is

partly my fault because I need to change it in bayesplot, but what you get for example from brms

503

:

is a KDE plot where you have overlaid your data as a KDE and then your

504

:

posterior predictive samples, a couple of those KDEs.

505

:

And then a lot of these projects have binary response variables.

506

:

So you just have, you would have just zeros and ones and you have a KDE for that.

507

:

So you are not getting, you're getting a very odd choice of visualization.

508

:

And on top of that, you're not getting any additional information aside from just, are you

doing better than an intercept-only model?

509

:

And are you doing worse than an intercept-only model, actually?

510

:

Yeah, so in that case, we also plan to have for bayesplot a warning that, hey, this might

not be what you want to do.

511

:

Yeah, I think that's cool.

512

:

And I think in the future, a warning to try and improve would be the one on the Pareto k

shape parameter issue in the compare / LOO-CV function.

513

:

Because it's very.

514

:

It's very technical, I think when users see that, they don't really know what to do as the

alternative.

515

:

It's like, but I don't even know what that Pareto k shape means.

516

:

Yeah, that is...

517

:

Yeah, and this is one thing that we also want to do is have warnings that then...

518

:

The warning itself would be quite short, but it would have a link, but hey.

519

:

For more information, here's a vignette.

520

:

Go look at the documentation page, and this is where we explain it.

521

:

Give an example that this is what's happening.

522

:

Yeah, for sure.

523

:

That would be helpful.

524

:

In general, I'm curious, how do you approach uncertainty visualization in your projects?

525

:

And why do you think, and do you think it's overlooked by practitioners or not?

526

:

uh What is the state of visualization around uncertainty so far in your eyes?

527

:

ah That's a very good question.

528

:

It's of course very central for Bayesian modeling, but it's also for, especially if you

have a user that's coming from

529

:

from, like, is not trained in Bayesian statistics or doesn't have a lot of experience with

probabilistic modeling.

530

:

So then this might be something that kind of comes as a surprise or as something that's

hard to interpret.

531

:

that's when thinking of visualizations, it's...

532

:

in general when thinking of visualizations, it's very important to think of your target

audience.

533

:

Like for example in the paper we don't talk of ECDFs as visualizations for your data, uh

but these are quite commonly used in some fields and then for that audience that would be

534

:

a good visualization.

535

:

So in that sense uh

536

:

thinking of visualizations in general is very much thinking of your audience and what you

want to convey and then uncertainty visualizations.

537

:

Yeah, there are some kind of basic or not very basic actually, some mistakes that you do

very easily.

538

:

Like for example, you have a predictive model and you show them

539

:

for continuous predictions, you show the posterior mean and then some central interval

around it.

540

:

And then there was this very recent example of a hurricane in the States and you do that.

541

:

And it looks like this massive cone that's going to go over the land.

542

:

The visualization is not conveying that actually what the model is trying to say is that

we have multiple possible

543

:

paths of these predictions and it's going to be one of these.

544

:

So instead of this kind of natural default, you should possibly just be

giving a collection of lines, individual lines, and showing that it could be

545

:

any of these.
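
A toy sketch of that recommendation, with simulated trajectories standing in for posterior predictive draws (spaghetti lines instead of a mean-plus-interval cone):

```python
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0, 10, 50)
# Fake "posterior predictive" trajectories; in practice these would come from the model.
paths = np.cumsum(rng.normal(0.1, 0.3, size=(200, t.size)), axis=1)

fig, ax = plt.subplots()
for path in paths[:30]:                         # a small subset keeps the plot readable
    ax.plot(t, path, color="C0", alpha=0.2)
ax.set_xlabel("time")
ax.set_ylabel("predicted path")
ax.set_title("It will be one of these paths, not the whole cone at once")
plt.show()
```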

546

:

So the topic of uncertainty visualization is not easy.

547

:

I think Matthew Kay and Jessica Hullman do excellent work on this topic.

548

:

So, yeah, and I believe at least Jessica Hullman, I think, you've had as a guest also.

549

:

Yes, yeah, I did.

550

:

And I will put her episode in the show notes.

551

:

uh

552

:

Matthew Kay I may have had too, I think at least we were in contact. I have to check if I

already had him on the show.

553

:

Yeah, he's a very busy person. Hey, yeah, we were at least in contact.

554

:

That's what I can tell you. And actually, so I'll start playing us

out here because I know it's late for you, but I want to pick your brain on

555

:

the trends that you see shaping the near future of probabilistic modeling and also where

would you like your research to go next?

556

:

Yeah, if only I could see the future.

557

:

I think amortized inference is definitely going to become popular, even more popular than

it is now.

558

:

Now, for example, for Bayesian experimental design, that's a...

559

:

you can essentially adjust your experiment on the fly when you're getting data, which

would not be possible with MCMC.

560

:

So there's what we have done with SBC, and hopefully other people also come up with good

diagnostics for validating these

561

:

posterior approximations in amortized Bayesian inference.

562

:

What other cases do you see for amortized Bayesian inference?

563

:

Like here, that would be online learning and adaptive experimental design, if I

understood correctly.

564

:

Do you foresee any other cases where it will be particularly helpful?

565

:

Well...

566

:

It's not my expertise, but I would say that, well, essentially, when you need to have a

very fast posterior inference, this, well, which essentially is this online learning or

567

:

something like quick decision making, perhaps some autonomous, I don't know, like robots,

like something where you need to be very fast.

568

:

And it's essentially

569

:

um worth the...

570

:

it's better to pay the cost of computation in advance and then...

571

:

um For me, I would...

572

:

in my mind that sounds like a very very good feature to have in your pocket as a modeler.

573

:

But if needed, if this is my use case, I would have at least the

574

:

some knowledge or some basic ability to also do this, like have a bit amortized model in

my back pocket for that need.

575

:

Yeah.

576

:

Yeah.

577

:

So cases where it's worth paying the cost of inference upfront.

578

:

And then once you have trained the neural network,

579

:

you will get uh posterior uh samples for free.

580

:

You still have to pay the cost of training the neural network, which can be more costly

than running MCMC for a lot of models.

581

:

Here, you have to see if that works in your case, folks, because sometimes it will take even

longer to use amortized inference than MCMC.

582

:

It has to fit your requirements, basically.

583

:

And on the MCMC side also, like with nutpie for example now, advances on the

MCMC side are also not to be discounted.

584

:

And also like there's this very recent, there's a lot of interesting stuff happening with

running a lot of very short chains in parallel.

585

:

which also sounds quite promising and interesting.

586

:

Like, usually the default is running four chains, but what if you run 400?

587

:

Yeah, for sure.

588

:

That's interesting.

589

:

And actually now you can do...

590

:

em

591

:

adaptation of the chains with normalizing flows in nutpie, and then nutpie will use that as

the initialization for MCMC.

592

:

So these can be somewhat similar, like, you know, in the idea to amortized inference.

593

:

It's not exactly the same, but yeah, like basically you would need to train normalizing

flows and then use that for MCMC.

594

:

nutpie does that out of the box for you,

595

:

and under the hood.

596

:

For both Stan and PyMC, again!

597

:

it needs to be a model that has a particularly complicated posterior geometry because you

need to train a neural network first.

598

:

If that's not a complicated enough model for MCMC,

599

:

it will still be much faster to run MCMC than to train the neural network and then run MCMC.

600

:

it's not going to always be useful, but for some cases, it's going to be a game changer.

601

:

Yeah, knowing which tool to use for which case.

602

:

Now essentially the toolbox is getting more and more tools in it.

603

:

Yeah, that's true.

604

:

uh

605

:

Yeah, and actually I already had Matthew Kay on the show, can confirm.

606

:

There's this episode 66, that's in the show notes, folks, so if you wanna listen to that.

607

:

And Jessica Hullman was also on the show, episode 73, and that's also in the show notes.

608

:

yeah, show notes are big for these episodes, so that's awesome, I'm happy about that.

609

:

That's a good sign.

610

:

em

611

:

Awesome.

612

:

Teemu, anything to add before I ask you the last two questions?

613

:

The traditional two questions?

614

:

those ones.

615

:

Yeah.

616

:

No, I think you've had very, very good questions.

617

:

Thank you for those.

618

:

uh

619

:

Thank you.

620

:

I tried, but you know, I've had five years of training, so that's a good sign that I made

progress on that front.

621

:

See you, podcast host.

622

:

Exactly.

623

:

Awesome.

624

:

Well, Teemu, that was great.

625

:

But of course, before letting you go, I'm going to ask you the last two questions I ask

every guest at the end of the show.

626

:

So first one, if you had unlimited time and resources, which problem would you try to

solve?

627

:

Yeah, I've been because I've been listening to the podcast for quite a while now.

628

:

And I've been thinking that if I would be put on this spot.

629

:

I would probably come up with a very boring answer of world peace or something.

630

:

But then now, actually today, ah earlier today, I was walking my dog and this came to my

mind that it's not necessarily such a boring answer to say that I would like to, we have a

631

:

lot of problems at the moment in the world, but I would like to solve communication

between people.

632

:

And I think this is a very m

633

:

in a way also a very Bayesian problem, because the receiver has a latent model of

the world, their understanding, and you as a communicator

634

:

should be kind of able to assess what's that model and then fit your communication to that

also. So perhaps, yeah, communication, misunderstandings, understanding other people.

635

:

um I wouldn't shoot anything lower than that with unlimited resources.

636

:

For sure.

637

:

Yeah, No, I love that.

638

:

Love that.

639

:

Yeah.

640

:

And for sure it's related to priors and...

641

:

how to elicit priors from people, yeah, all of that related to basically...

642

:

updating the beliefs of the recipient.

643

:

Yeah, related to the Socratic methods in a way for sure.

644

:

Street epistemology and all that good stuff.

645

:

um

646

:

Love that, love that, Teemu.

647

:

uh And second question, if you could have dinner with any great scientific mind, dead,

alive or fictional, who would it be?

648

:

ah It would be dead.

649

:

ah So, I have a background in mathematics and especially in the 19th century and before

that, mathematicians had this bad habit of dying very young.

650

:

So...

651

:

So I would pick, I went with this in mind and I found Gotthold Eisenstein who was a

mathematician in the 19th century, a German mathematician who worked on analysis and

652

:

number theory and actually kind of what he managed to do before meeting his untimely death

at the age of 29 was he solved issues that then allowed Gauss.

653

:

to further his research.

654

:

He was also a very interesting person, spent time in prison for some political

opinions and things like this in Germany.

655

:

then having a dinner and trying to maybe obtain some knowledge from a person who would

probably have had a lot more to give to the world.

656

:

I love that, yeah, yeah.

657

:

Great.

658

:

I love that.

659

:

And you're the first one to answer that.

660

:

So congrats, Teemu.

661

:

Thank you.

662

:

Yeah, I also thought that it can't be some very obvious scientific mind.

663

:

Must be someone slightly niche.

664

:

It can be, can be, that's fine.

665

:

is judging you.

666

:

Awesome, well, thank you so much Teemu.

667

:

I'm gonna let you go to sleep because it's late for you in Finland.

668

:

You've been kind enough to stay up for me to accommodate my American schedule.

669

:

thank you so much.

670

:

No problem, it was a pleasure.

671

:

Sorry again.

672

:

to you and all of you folks for the construction noises that you must have heard from time

to time.

673

:

They seem to have stopped now, but yeah, like you know.

674

:

As uh Epictetus said, this is

675

:

a thing in life I cannot control.

676

:

I tried to keep my calm.

677

:

That was not easy but I kept my calm through the construction noises so I'm happy with

that.

678

:

um And I hope you still could enjoy the episode.

679

:

Thankfully Teemu...

680

:

uh

681

:

was the one with many more things to say than myself, so not too much construction noise

in your ears.

682

:

uh As usual, I'll put resources and links to your website and socials in the show notes.

683

:

Teemu, feel free to add anything in there also if you think I missed some.

684

:

And thanks again for taking the time and being on this show.

685

:

Thank you.

686

:

This has been another episode of Learning Bayesian Statistics.

687

:

Be sure to rate, review, and follow the show on your favorite podcatcher, and visit

learnbayesstats.com for more resources about today's topics, as well as access to more

688

:

episodes to help you reach true Bayesian state of mind.

689

:

That's learnbayesstats.com.

690

:

Our theme music is « Good Bayesian » by Baba Brinkman, feat. MC Lars and Mega Ran.

691

:

Check out his awesome work at bababrinkman.com.

692

:

I'm your host.

693

:

Alex Andorra.

694

:

You can follow me on Twitter at alex_andorra, like the country.

695

:

You can support the show and unlock exclusive benefits by visiting patreon.com/LearnBayesStats.

696

:

Thank you so much for listening and for your support.

697

:

You're truly a good Bayesian.

698

:

Change your predictions after taking information in.

699

:

And if you're thinking I'll be less than amazing Let's adjust those expectations Let me

show you how to be a good Bayesian. Change calculations after taking fresh data in. Those

700

:

predictions that your brain is making Let's get them on a solid foundation
