Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch!
This episode will show you different sides of the tech world. The one where you research and apply algorithms, where you get super excited about image recognition and AI-generated art. And the one where you support social change actors — aka the “AI for Good” movement.
My guest for this episode is, quite naturally, Julien Cornebise. Julien is an Honorary Associate Professor at UCL. He was an early researcher at DeepMind where he designed its early algorithms. He then worked as a Director of Research at ElementAI, where he built and led the London office and “AI for Good” unit.
After his theoretical work on Bayesian methods, he had the privilege to work with the NHS to diagnose eye diseases; with Amnesty International to quantify abuse on Twitter and find destroyed villages in Darfur; with Forensic Architecture to identify teargas canisters used against civilians.
Other than that, Julien is an avid reader, and loves dark humor and picking up his son from school at the “hour of the daddies and the mommies”.
Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work at https://bababrinkman.com/ !
Thank you to my Patrons for making this episode possible!
Yusuke Saito, Avi Bryant, Ero Carrera, Giuliano Cruz, Tim Gasser, James Wade, Tradd Salvo, Adam Bartonicek, William Benton, James Ahloy, Robin Taylor, Thomas Wiecki, Chad Scherrer, Nathaniel Neitzke, Zwelithini Tunyiswa, Elea McDonnell Feit, Bertrand Wilden, James Thompson, Stephen Oates, Gian Luca Di Tanna, Jack Wells, Matthew Maldonado, Ian Costley, Ally Salim, Larry Gill, Joshua Duncan, Ian Moran, Paul Oreto, Colin Caprani, George Ho, Colin Carroll, Nathaniel Burbank, Michael Osthege, Rémi Louf, Clive Edelsten, Henri Wallen, Hugo Botha, Vinh Nguyen, Raul Maldonado, Marcin Elantkowski, Adam C. Smith, Will Kurt, Andrew Moskowitz, Hector Munoz, Marco Gorelli, Simon Kessell, Bradley Rode, Patrick Kelley, Rick Anderson, Casper de Bruin, Philippe Labonde, Michael Hankin, Cameron Smith, Luis Iberico, Tomáš Frýda, Ryan Wesslen, Andreas Netti, Riley King, Aaron Jones, Yoshiyuki Hamajima, Sven De Maeyer, Michael DeCrescenzo, Fergal M, Mason Yahr, Naoya Kanai, Steven Rowland, Aubrey Clayton, Jeannine Sue, Omri Har Shemesh, Scott Anthony Robson, David Haas, Robert Yolken, Or Duek, Pavel Dusek and Paul Cox.
Visit https://www.patreon.com/learnbayesstats to unlock exclusive Bayesian swag ;)
Links from the show:
Abstract
Julien Cornebise goes on a deep dive into deep learning with us in episode 71. He calls himself a “passionate, impact-driven scientist in Machine Learning and Artificial Intelligence”. He holds an Honorary Associate Professor position at UCL, was an early researcher at DeepMind, went on to become Director of Research at ElementAI, and has worked with institutions ranging from the NHS in Great Britain to Amnesty International.
He is a strong advocate for using Artificial Intelligence and computer engineering tools for good, and cautions us to think carefully about who we develop models and tools for, always asking: what could go wrong? How could this be misused? The list of projects where he has used his computing skills for good is long and diverse: with the NHS he developed methods to measure and diagnose eye diseases. For Amnesty International he helped quantify the abuse female journalists receive on Twitter, based on a database of tweets labeled by volunteers.
Beyond these applied projects, Julien and Alex muse about the future of structured models in times of increasingly popular deep learning approaches, and about the fascinating potential of these new methods. He advises anyone interested in these topics to get comfortable experimenting by themselves, and potentially breaking things, in a non-consequential environment.
And don’t be too intimidated by more seasoned professionals, he adds: they probably have imposter syndrome themselves, which is a sign of being aware of one’s own limitations.
Automated Transcript
Please note that the following transcript was generated automatically and may therefore contain errors. Feel free to reach out if you’re willing to correct them.
Julien Cornebise is an Honorary Associate Professor at UCL. He was an early researcher at DeepMind, where he designed its early algorithms. He then worked as a Director of Research at ElementAI, where he built and led the London office and the AI for Good unit. After his theoretical work on Bayesian methods, he had the privilege to work with the NHS to diagnose eye diseases; with Amnesty International to quantify abuse on Twitter and to find destroyed villages in Darfur; and with Forensic Architecture to identify tear gas canisters used against civilians.

Welcome to Learning Bayesian Statistics, a fortnightly podcast on Bayesian inference, the methods, the projects, and the people who make it possible. I'm your host, Alex Andorra. You can follow me on Twitter at alex_andorra, like the country. For any info about the podcast, learnbayesstats.com is the place to be: show notes, becoming a corporate sponsor, supporting LBS on Patreon and unlocking Bayesian merch, everything is there.
Let me show you how to be a good Bayesian / Change your predictions after taking information in / And if you're thinking I'll be less than amazing / Let's adjust those expectations / What's a Bayesian? It's someone who cares about evidence / And doesn't jump to assumptions based on intuitions and prejudice / A Bayesian makes predictions on the best available info / And adjusts the probability 'cause every belief is provisional
Hello, my dear Bayesians! So, very officially, thank you so much, Or Duek, Pavel Dusek and Paul Cox for joining the Full Posterior and Good Bayesian tiers; you made my day. Make sure to send a picture when you get your exclusive LBS merch. Okay, now let's dive into the science of our generation with Julien Cornebise.
Julien Cornebise, welcome to Learning Bayesian Statistics. I can tell we're gonna learn a lot. So let's start at the beginning: how did you come to the math and stats worlds?

Oh boy. Well, let me try and live up to the high bar you've set. Basically, I trained originally as a computer science engineer, a coder. I've been coding since I was 12; I was into assembly x86, removing shareware limitations back in high school.
So I decided I wanted to go that way. I went to an engineering school in France, specialized in computer science, went to algorithm competitions, the ACM ICPC, loved coding, loved formalizing problems, problem solving. But then I realized: hey, this is all really fun, but the world is really noisy and stochastic.
So I went and caught up on the math, a lot of it, and ended up doing computational statistics, sequential Monte Carlo. And that's how I got into the field. So not an entirely straight path, but not a completely winding one either.

Yeah, you're on the side of the people who started coding extremely early.

Not as young as I would've wanted, but it was fun. I like coding, and I keep thinking of math and statistics from a mostly algorithmic viewpoint as a result.
That's interesting. Now, I think you are the second French person to come on the show. The first one was Rémi, and he also started his programming career very early. So I kinda feel like the black sheep here; I started when I was 27, so, you know...

Well, actually, congrats to you. You know, it's much harder to do it that way than when you've just been running in it really young.
So, yeah, well done.

Yeah, actually it was during university that I got involved in a project with a professor there. We were working on instant coffee, on detecting fraud in instant coffee. It was a joint project with Nestlé, the food manufacturer, and they were trying to establish norms to detect when instant coffee has been enriched with, say, extra sugars.
You measure the glucose and the like. So it was really fun, actually, to go and take just a random, usual object, put it through the process, and discover a lot about the world behind it, of trying to grow coffee, of trying to process it, and all that through, you know, a two-dimensional vector, a bunch of Gaussians, and Bayesian analysis.
So you worked at DeepMind for, what, four years?

Yeah, 2012 I joined, actually.

Oh, okay, so the 2010s. Damn, we can already say "the 2010s", they are done already. We're in the roaring twenties.

Or the crashing twenties, depends.

Yeah, exactly, the sneezing twenties if you will. So yeah, basically you worked at DeepMind for four years, and in particular you were focused on health research.
The first two years, actually, I was in the fundamental research team. I was trying to bring some Bayesian flavor to convolutional neural networks and to deep learning, to see if we could bring some uncertainty measurement there and be a little bit more statistically minded in the approach.
And then the Google acquisition went through, and, well, DeepMind's founders and I spoke, and now we had the resources to go into healthcare, which is something that was close to my heart. I had done internships and consulting in vaccine development and in experimental design, sequential experimental design, in parallel with my studies.
So now we had the resources to go into healthcare, and that's where I transitioned a hundred percent to the applied part, to create the DeepMind Health research team, mostly with the Veterans Affairs partnership, the partnership with Moorfields Eye Hospital, and a few others.
When you go into healthcare, you have to be extremely aware of the risks and the probabilities of things going well or going wrong. You need to quantify that. You also need to think in terms of decision making. So some of the work we did was decision support to clinicians in ophthalmology, where there's a whole matrix in which you weigh the different risks and different types of risk.
Whether you're having a false positive versus a false negative, or the different types and degrees of disease, and how much worse one error is than another. And this actually ties back to the very roots of Bayesian analysis, which is deeply rooted in decision making. I love Jim Berger's book, Statistical Decision Theory, and that pervades throughout healthcare.

I can definitely guess that healthcare requires a lot of uncertainty estimation, and also that decision making and cost functions are extremely important, right? Because if the cost is, well, someone can die, of course that's a way higher cost than in most optimization problems you can have.
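To make that concrete, here is a minimal sketch of the kind of expected-loss reasoning described above. The action names and loss values are purely hypothetical stand-ins for a real clinical cost matrix:

```python
# Bayesian decision rule: pick the action minimizing posterior expected loss.
# All numbers are hypothetical; real values would come from clinicians.
LOSS = {
    ("treat", "disease"): 1.0,       # correct, but invasive
    ("treat", "healthy"): 5.0,       # false positive: unnecessary treatment
    ("discharge", "disease"): 100.0, # false negative: missed diagnosis
    ("discharge", "healthy"): 0.0,   # correct discharge
}

def best_action(p_disease: float) -> str:
    """Return the action with the smallest posterior expected loss."""
    def expected_loss(action: str) -> float:
        return (p_disease * LOSS[(action, "disease")]
                + (1 - p_disease) * LOSS[(action, "healthy")])
    return min(("treat", "discharge"), key=expected_loss)

# With these losses, "treat" wins once p(disease) exceeds 5/104 ≈ 0.048:
print(best_action(0.03))  # discharge
print(best_action(0.10))  # treat
```

Note how the asymmetry of the losses, not a naive 0.5 probability threshold, drives the decision; that is exactly the false-positive-versus-false-negative weighing being discussed.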
Hmm. And I like to think... you know, I did my PhD when the war between Bayesians and frequentists was still boiling. I really prefer to think in terms of: hey, it's all about taking a probabilistic view of things. I did my PhD on Bayesian stats, but in a way I was probably the most frequentist Bayesian, in that I was working on Monte Carlo and convergence theorems for Monte Carlo, which is where you can afford to be entirely frequentist, as you're doing central limit theorems on the number of samples you simulate.
So I sometimes feel an imposter syndrome of: oh, am I a real Bayesian? Even though during my PhD I was actually doing frequentist thinking applied to Bayesian algorithms. So yeah, I really think about it in terms of a probabilistic view on modeling, on statistics, on algorithms generally.
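That "frequentist view of a Bayesian algorithm" is easy to see in code: the central limit theorem gives any Monte Carlo estimate a standard error shrinking like 1/√N. A tiny sketch, with a toy distribution chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000

# Toy "posterior" we can sample from; we want E[theta^2].
samples = rng.normal(loc=1.0, scale=2.0, size=N)
values = samples**2

estimate = values.mean()
std_error = values.std(ddof=1) / np.sqrt(N)  # CLT-based Monte Carlo error

# True value is 1^2 + 2^2 = 5; the estimate should land within a few
# standard errors of it, and the error bar halves when N quadruples.
print(f"{estimate:.3f} ± {std_error:.3f}")
```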
I'm curious now: the people you were working with, the clinicians, how was it to collaborate with them?

I'm thinking of Pearse Keane, from University College London, who's a clinician with a deep, deep interest in machine learning. And, you know, he kept asking for more and more details, understanding more and more of the algorithm that we were developing.
We were working very closely together, showing him every step: okay, here's what we're developing, here's what it knows, what it doesn't know. He was giving us constant feedback: oh, that's really exciting; ooh, that part, eh, maybe not that useful; or, ooh, we have this other type of data, would that be helpful to you?
He's actually the one who reached out to DeepMind originally, saying: hey, I've seen your algorithms out there. We have these problems here, and we have this anonymized data here. Could we do some research together? Is there something we could do there? So, yeah, great, great savviness.
And a great sense of the reality that he sees every day in the clinic. So that was on the part of the clinicians. On your part, I'm wondering: during this whole experience and these projects, was there a main difficulty that you encountered, and what did you learn from it?

Let's see. On the positive side, I learned a lot, because there were many opportunities to learn, and difficulties encountered.
We were working on this project early on. We knew that this new imaging technique, which is called optical coherence tomography, OCT for short, which is a 3D scan of your eye from a very cheap imager, it goes for 30,000 pounds compared to the millions of pounds that go into a regular MRI scanner, for example. So we knew this would be hitting the high street, in every optician or glasses seller everywhere in the country, in the UK and in France.
Fast forward: we built algorithms to aid decisions and do early diagnostics and triage based on this extremely rich new modality of data. And we got, you know, a Nature paper for that. Hooray! As a researcher, I can tick that box: Nature paper, check. But the reality is that this didn't make it to a product. All these algorithms that we have, they are not used in the opticians' software suites, so we don't have the impact that we set out to have originally.
And for me, the lesson there is really about thinking: okay, what is the organizational environment where you do your work? Within this organization, and with the different players that are involved, what is the path to actually having your algorithms used by people, who is on that path, and what are the incentives there?
The structures that were created allow you to mobilize, to manage the visibility that makes people, clinicians, come to you and work on this. That's the flip side there. And learning to navigate that has been a really big lesson from these projects, and something I keep learning every day, for sure.

And I mean, that does resonate with my personal experience also.
And something that I was interested in when I read about your profile, and why I felt it would be interesting to have you on the show, is that you had this focus on health, like AI for health, and then afterwards, after DeepMind, you're still focused on AI for good, as we could say.
I could guess that people wonder, like: oh, are stats gonna take the human factor out of everything? You know, things like that, that I'm sure you hear a lot, as I do. So can you tell us what that work looks like, actually, and how it's helpful?

Well, absolutely. I mean, for the term "AI for good": you know, we use that because it's what the United Nations uses for this whole stream of work, but we've got to be very, very careful about it, because when you say AI for good, you can quickly fall into tech-saviour syndrome,
into easy tech solutionism. Yeah. And also that means that if you don't work in that field, you're doing AI for bad! That needs to be tweaked, you know? Yeah. Well, the other thing is that AI, machine learning, any technology, heck, even a stick, is deeply dual-use. You know, a stick, you can use it to lever a rock and help your pal who's stuck under it,
or to hit someone on the head, and then it's not so good anymore. So there's this deep duality. These tools, whether they be statistics, whether they be machine learning, whether they be anything, ought to be used to help the people who know the real problems. And these people, you find them in hospitals, you find them at Amnesty International, you find them in NGOs. You know, they are those who really know what's going on and what needs to be solved.
They do amazing qualitative work. What I provide, me and others working with them, is the quantitative aspect. So one example, probably the project of my career that I'm proudest of, is the Troll Patrol project with Amnesty, which was to quantify the amount of abuse against women on Twitter, especially women journalists and politicians. I worked with human rights experts who had documented and interviewed journalists and politicians about the abuse they were receiving.
From defining terms to designing the study: you know, how do you even sample that? How do you analyze the results from a crowd of volunteers? What are the ways you've got to be careful? What are the results, and how much can you trust them? And adding these numbers behind the story helped characterize the type of abuse they were getting, and who it was targeted at.
There's a really prominent journalist who then emailed the organizers and said: oh, thank you. I mean, sometimes, you know, there are obvious, direct violent threats, but there's this unease on Twitter, and now with the results I understand why: because of this level of problematic, aggressive content that is not abusive in the strictest sense of the word, as in it doesn't violate the definition of abuse in the Twitter terms of service, but we see the barrage there.
The report, released in December, got Twitter stock to drop by 5 billion dollars, a 16% loss in Twitter's stock price. Which, again, I'm not there to deflate Twitter's stock price, you know; that's not what we set out for, and they recovered within a few weeks. But it translated the problem of abuse online into a monetary unit that executives at Twitter and in other companies can really understand and really resonate with.
Another project with numbers: public video surveillance cameras in New York City. Thanks again to the thousands of volunteers who labeled these images, these numbers and the statistical analysis I did with a few others helped produce, you know, the figures that went into visualizations to make people realize how much under surveillance you might be.
But they also went into Amnesty's legal proceedings against the New York Police Department, which Amnesty won in August this year, forcing the New York Police Department to publish more about their surveillance and their surveillance capabilities. So again, it's just a matter of bringing stats to help the activists, in this case, or the doctors.
So, um, I'm always all for more robust and serious statistical work getting percolated into political science topics.

And there's one thing that's really exciting when you do your analysis in the most rigorous possible way in such an application: you also publish the methodology, and you're like, oh...
Usually, you know, if I write an academic paper and there's a flaw in it, mm, the reviewer will bash me, or I look like a fool. Well, here, actually, if there is a flaw in it, the whole campaign can be derailed: if the NYPD or Twitter is able to say, oh look, Amnesty has made fools of themselves,
they overinflated this or that, then everything is at risk.

Actually, now that you've talked a bit about AI topics and things like that: a very recent topic that you told me about before we recorded, and I'd like to talk about it now, because I think it's actually interesting, especially in relation to Bayesian stats. There is this new model called Stable Diffusion that you told me about before we started the show.
So can you introduce listeners to what that is, and why it was such a wow factor for you?

Yeah. I was speechless for three hours this morning. I was going through blog post after blog post, and experimenting with it myself. So, Stable Diffusion...

I guess it's good we didn't record this morning, right?
Exactly, because a speechless guest doesn't make for a great episode. The precision of the drawings, the amount of understanding of the language you use, is just mind-blowing. What's even more important is that... well, actually, yeah, let's dwell a little on the technical process there. These are technologies where even, you know, two years ago, we'd say there's absolutely no way this can ever be done.
When I joined DeepMind originally in 2012, nothing like this was within reach. Now we're actually generating full-resolution images on completely abstract topics. You can be extremely descriptive and say: oh, I want a city in the sky with a beautiful starlit sky above the city's buildings. You can be extremely descriptive like that.
Or, as I experimented with it, you can just ask for something much more abstract, like: imagine the most horrific, fear-inducing picture for a human. And it generated an actually really scary face of some hellish zombie, well, whatever. And all this in realistic, you know, in realistic renderings.
Absolutely any type of description. And I must say, this is not something I would know how to generate with a hierarchical Bayesian model. There is a connection, in that oftentimes in Bayesian studies, you know, we will write generative models. We take a generative approach: describe a model of the mechanism by which the data might arise.
So we can create new observations. This is that, but without such an explicit model, just with gigantic, extremely deep neural networks. We're talking several billions of parameters. So it's not your 20-dimensional, "I'm starting to be in high dimensions" importance-sampling type of study like I used to do.
I looked a bit into that. Like, basically, how is it done? You said it's a deep neural network, something like that?

And this is how we turn the podcast into a 20-hour course. Brace yourself! Well, in a nutshell, in deep learning generally, there is a move away from having an explicit model with explicit unknowns, and more towards: here's a massive stack of operations, matrix operations, and we keep iterating, with, in a few cases, a bit of inductive bias, which is fancy for saying: oh, we inject a little bit of what we know of the real world.
You minimize the L2 loss between your output and the target. That's on the input and the output: you've got the image, and you try to just have this gigantic series of operations, which are parameterized by weights, we're talking billions of weights, and you run an optimizer in this billion-dimensional space of parameters to find one that makes a really good function.
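For readers who want to see the shape of that loop, here is a minimal, purely illustrative sketch: a parameterized stack of operations, an L2 loss, and an optimizer walking through weight space. Real diffusion models differ enormously in scale, architecture and training objective:

```python
import torch
import torch.nn as nn

# A tiny stand-in for the "gigantic series of operations": real models
# use billions of weights; this toy stack has a few thousand.
model = nn.Sequential(
    nn.Linear(64, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 64),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Fake data; in a denoising-diffusion setup these would be
# (noisy image, noise) pairs.
x = torch.randn(32, 64)
target = torch.randn(32, 64)

for step in range(100):
    optimizer.zero_grad()
    prediction = model(x)
    loss = ((prediction - target) ** 2).mean()  # the L2 loss
    loss.backward()    # gradients through the whole stack of operations
    optimizer.step()   # one small move in parameter space
```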
And there are mathematical connections, ones that inspire some of the theory behind it; so there are similar mathematical concepts at play. Now, I don't want to go further, because I haven't read that much of the paper; it was out just recently, a few days ago. But what's really important, when we talk about the impact you can have, is that this generative model, this Stable Diffusion, has been released in its entirety.
The whole trained model has been open-sourced. Anyone can download the trained model, run it for any purpose whatsoever, build a business out of it, run it on their laptop, generate whatever beautiful art, whatever illustration for a magazine, whatever fake image they want. You know, everything is there for the taking.
Unlike previous models, which would take quite a few months to a few years to replicate, the whole thing is there for anyone to download and use. Which has meant that in six days people have already built, you know, Photoshop plugins based on it, and new features in software based on it. Because there is also a way to do image-to-image: do a very crude sketch of what you want, add a description of what you would like it to look like, and boom, there you get it.
It is really stunning. Some people are winning talent competitions, like, you know, an artistic state fair on digital art, without knowing how to draw, thanks to that. But also you've got illustrators who are starting to say: hey, hold on. The Economist or The Atlantic, I believe, just ran a story with a generated image rather than commissioning a graphic artist.
We knew this technology was coming, yes. But, you know... well, now we actually have creativity done by algorithms. And you could still argue: well, you have a human describing, doing the imagination of what they want to see, and guiding the algorithm. So yeah, the centaur hypothesis still holds true, but I don't think many of us were imagining this, to this extent.
And actually it gives us... yeah, sorry, I could ramble for hours about that. Please do stop me!

No, that's interesting. And yeah, so first, thanks for giving that description of the model like that, on the fly. And I think listeners will appreciate hearing more about this distinction between
structured and unstructured models, which we talked about in a previous episode, the "everything is a GP" one. I'll put a link to this episode in the show notes. So thanks for that. And yeah, in general, I'm curious about what you think this changes. You talked a bit about that from a societal standpoint, and I find that super interesting, because with technology there is always, you know, a voice in the back of my head when I hear people making huge predictions about what this will change.
Some prediction will come true, but you don't know which. Mm-hmm. So yeah, you talked about that a bit already. Also, from a statistical perspective, and I'm interested here in your mathematical and statistical background: what do you think this could change from the modeling perspective? Do you think structured models will become less important, and it will be more that kind of very free model, in a way?
Or do you think both will actually work together, or that they answer different needs?

As a good Bayesian, I'll tell you that I know that I don't know; my posterior is an extremely, extremely vague posterior distribution, so I'll be careful about making big predictions. I can speak about what I observe already, which is that deep learning requires, at the moment, a huge amount of data,
billions of observations. So really, really large datasets, which we don't have in other fields, so there is a huge number of fields where we can't apply these methods at the moment. There is a strong advocacy for trying to go towards more data efficiency, and that can be done in several ways, some of which involve
incorporating more structure, known structure about the problem, into the modeling phase. And well, one of the best ways to do that is precisely hierarchical models, or explicit modeling. So in a way, that's where there is ample room to work: when you have much more tailored models and a smaller data regime.
I mean, I think this model took something like $600,000 to train, which is still rather small, surprisingly, compared to other models. But, you know, $600,000 of compute is not something where you can go: oh yeah, let's simply run it 10 times to get a lot of different samples. I mean, you can, but it depends on your bank account, I guess.
There are no convergence theorems on these. You know, I used to be: oh, that's not proper. Well, heck, the thing works. We just can't quantify how well it works. And, you know, if it works, don't diss it; it does things that I would not know how to do. But that means we don't know how to quantify its limits and the risks we are taking with these models.
So there will be some need for very critical thinking around these models. And, I dunno, I'm sure some of your listeners are aware of, for example, the fiasco around Timnit Gebru being fired from Google last year, precisely for criticizing and studying closely the limits of large language models, of which Stable Diffusion is a kind of offshoot.
What it's used for, and who uses it: that's where it gets really, really important. And I believe that as scientists, as statisticians, we have a responsibility for our tools, for how they're used. And we have a duty to think about: hey, what could go wrong if this is used? And of course, what could go wrong with what we build.
For those who follow the links in the show notes: a funny thing, actually, from a technical point of view, is that we're seeing the rise of prompt engineering. There's a whole field now about in which words you formulate your request so that the model gives you the prettiest output, which is something I would never have imagined. It's purely: what text do you give it?
Yeah, sounds like choosing your priors. Yeah, I mean, yes, but with even less math in it. So, okay, I'm like: why did I go through all these years of learning math to get here? But you know what, it works, so don't knock it. It's quite remarkable. At the same time, it feels weird, because we make models that we then have to learn how to use.
Yeah, I love that. And that's something we are really curious about and think a lot about at PyMC Labs and in the PyMC community in general, because when you teach people, and also when you work with clients, eliciting the priors is always something that can be complicated and intimidating for beginners.
There's this function now in PyMC where you say: I want a distribution with most of its mass between this and that value. And then PyMC tells you: okay, so you want a Gamma with alpha equals blah and beta equals another thing. But then, like, the next step would be that you don't even have to say "I want a Gamma".

And then you're gonna put all these Bayesian statisticians out of a job! Don't say so.

Well, you'll still need to parameterize the model afterwards.
So, you know, the model structure, you still need it; that's the hard part. But if you could parameterize the priors like that, that'd be awesome, because then you don't need to know you want a Gamma. And in a way, you don't really need to know you want a Gamma, right? You just need a function with the right constraints.
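The PyMC helper being described here appears to be pm.find_constrained_prior. A short sketch of how it is used, with made-up numbers:

```python
import pymc as pm

# "I want a Gamma with 95% of its mass between 1 and 10":
# the helper solves for the distribution's parameters.
params = pm.find_constrained_prior(
    pm.Gamma,
    lower=1.0,
    upper=10.0,
    mass=0.95,
    init_guess={"alpha": 2.0, "beta": 1.0},
)
print(params)  # a dict like {"alpha": ..., "beta": ...}

# The result plugs straight into a model:
with pm.Model():
    sigma = pm.Gamma("sigma", **params)
```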
And sensitivity analysis, too: I mean, couldn't we have this done automatically, and see how much we depend on the shape of our priors and on the prior information that we have put there? And of course, how do you do that properly without double-dipping in your data, but in an iterative fashion? Yeah. The tools for Bayesian inference have also benefited massively from the growth of, you know, neural nets, for example.
That has led to the development of toolboxes for tensor calculus that are now widely used in PyMC and elsewhere, and make it easier than ever to be a statistician-modeler: you don't have to code your own sampler anymore. You're actually much better off using one of the existing ones, which are getting extremely, extremely efficient.
Something you're also very passionate about is teaching.

So, regrettably, I'm not teaching much these days anymore. My honorary professorship at UCL is more on the research side. I do a lot of mentoring, though, of young researchers, working with them through different projects. And, well, it's mostly a matter of paying it forward.
I just like to transmit the passion. I like to, you know, see the eyes light up, the "oh yeah, that is a really nice trick", you know, the realization. That's something I really enjoy.

Yeah, and I definitely agree: having passionate professors is one of the best things that can happen to you. You know, having that passion in front of you really, really helps people
get more passionate themselves when they see it. Again, as I always say, science is done by people, and it's inherently human, contrary to what a lot of people tell you and think. So yeah, that's awesome. And so, I was asking you: what are the main skills that you're trying to instill in your students?

The ability to go and fetch information from multiple different sources, from multiple different angles, and really quickly get to the core of it.
And, you know, not being intimidated by this or that paper, and not being over-focused on one single paper either: go cross-reference the documents, get the information from different angles, and that is how you really learn by yourself. Then, you know, as a teacher, I can show you that some things exist,
I can introduce you to a field, but, you know, it's up to you to run into it and go completely wild exploring it. And if you know how to quickly scan for information, then you'll run all the faster and explore all the deeper.

For sure. That would've impressed me too, like going through a paper in 30 seconds.
Yeah, I was blown away. But by then, you know, you know enough to realize: okay, is this good quality? Does it have the markings of someone who knows what they're doing, before digging into it? And when she told me how she did that, I remember that professor, it's just a skill that you acquire really quickly during a PhD, or whenever you just go and dive and scan. Learn to scan: see enough papers, and it comes pretty quickly.
The field is very different now than when you entered it, though.

My advice might be somewhat old school in that sense. I think, generally, I find it extremely good for people to experiment by themselves a lot. It's something that you can do from any computational angle: you know, you try it, you see if it crashes. That is much harder to do in, say, theorem proving, where...
well, actually, now there are some ways to verify proofs automatically, but they're not really within easy reach, so you don't see your algorithm crashing live. So: a lot of experimenting by yourself, or with friends, but actually coding it. There's a lot of serendipity in this field. Another thing is that with statistics and with machine learning, there are so many different places to apply them that you will be bound to find some, if you let the serendipity happen.
How did I get to work with Amnesty? Someone from their crowdsourcing effort was presenting what they do, and I realized: oh, you've got data. You have an expert who has trained a crowd to do a task. Maybe the crowd can train a neural net. Maybe I can help; I've got time and skills. I just went to chat with her at the end. Serendipity plays a huge role. Now, I say that, but I realize that I'm extremely privileged: well, I live in a big city,
I'm a white male in my thirties, I had time at that moment. You know, all conditions that make my advice possibly not as general as I wish it could be. But yeah, of course: try these methods by yourself, don't hesitate to go and make them crash, and get information, you know, search for information everywhere.
And don't hesitate to ask. I mean, if people seem intimidating, let me put it this way: having imposter syndrome, for me, is a marker of someone who actually knows what they're doing, because that also means they know that there's a lot they don't know. And so, yeah, realize that the person you're seeing speaking, they're speaking on their topic of choice, and they're a precise expert, but when they hear you speaking about what you do, they're probably equally lost, or they were when they were at your stage.
So don't hesitate to go and speak and find the people. Some will brush you off; some will actually really want to share their knowledge and will remember being in your shoes. And yeah, go for that.

I completely agree. That really resonates with me, and it's actually advice that I give all the time to people.
That's what I did, and I think it worked. So, know that you're in training; pick the places where you crash, and do that on the simulator!

Okay. And so, yeah, actually I'm curious, this is a topic I really love talking about: science communication, and how to help the general public understand more about scientific methods. It seems like you do that a lot.
What do you think are the best ways to communicate science?

You look at David Spiegelhalter, a big Bayesian himself, well, he's Professor of the Public Understanding of Risk, and he's amazing, one of the best scientific communicators I've ever met. So I don't know that I have the best way. What I do have is a few things I've seen work when I presented them, and the first is the passion; it's really about that.
So that they can see that I'm passionate. Well, actually, these days my slides are mostly pictures at this stage, because I'm trying to get the audience's attention, to make them want to follow what is being said, to feel committed to what I'm trying to explain. When you think about it, the measure of uncertainty, the measure of risks, these are deeply fascinating topics, and things that everyone, you know, deals with: you cross the street...
Well, if it's a busy street, I'm gonna check twice before crossing. If it's pretty quiet, I'm like, I'm not gonna check, you know. So this is something people can really relate to; we all do that. If I present it in the way of math... I need the math to make sure of what I'm doing,
but the math is the scientific tool I go through, not the story itself. You know the feeling: you went through the talk with a brilliant lecturer, and you're like, yeah, I followed every step. Then after you leave, you realize: well, I think he skipped a few steps that I really can't figure out, but in the moment it was really clear. So, when it's for a general audience, don't talk down to people, because people know if you're BSing them, if you're not carrying the real intent. Share the passion and share the story. Storytelling is extremely important there, and that's what you want to get through. And if someone wants the mathematical details and is curious to know more, you know, open the door at the end; say: yeah, I'm always happy to discuss the more technical details and point you in the right directions.
Something I talked about already, so I'm not gonna reiterate, but yeah: episode 50 was with David Spiegelhalter, so I put that in the show notes if you wanna listen or re-listen to it. Also, episode 67 was with David Kipping; he has a fantastic YouTube channel about astrophysics and cosmology and the probability of life in the universe, things like that.
And the episode was really awesome, so I put that in the show notes also, because we talked about exactly that kind of thing: basically, telling a story about science, and making sure people know that it's a human story, not something that comes out of nowhere with just dry equations.
From where you stand, what do you think are the biggest hurdles in the Bayesian workflow, something that could be done differently and probably better?

I guess it depends where you apply it. You know, if you apply it to, say, large language models, well, there's the challenge of how you make Bayes work at such scales, which is, you know, pretty tricky. And again, there's a very strong field of Bayesian deep learning there, and it has many different ideas.
And some of these ideas... I mean, there was a really great paper by Balaji Lakshminarayanan, I'm sorry, Balaji, I'm butchering your last name, for example, looking at different ways to do ensembling and getting a measure of uncertainty without being Bayesian at all: by purely, you know, training your neural nets several times, under certain assumptions on the loss function.
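The deep-ensembles recipe being referenced is simple enough to sketch. A toy version, with an illustrative architecture and fake data rather than anything from the paper:

```python
import torch
import torch.nn as nn

def make_net():
    return nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))

# Toy regression data: y = sin(x) + noise.
x = torch.linspace(-3, 3, 200).unsqueeze(1)
y = torch.sin(x) + 0.1 * torch.randn_like(x)

# Train the same network several times from different random inits.
ensemble = []
for seed in range(5):
    torch.manual_seed(seed)
    net = make_net()
    opt = torch.optim.Adam(net.parameters(), lr=1e-2)
    for _ in range(500):
        opt.zero_grad()
        loss = ((net(x) - y) ** 2).mean()
        loss.backward()
        opt.step()
    ensemble.append(net)

# Predictive mean = ensemble average; uncertainty = member disagreement.
x_new = torch.tensor([[2.5]])
with torch.no_grad():
    preds = torch.stack([net(x_new) for net in ensemble])
print(f"mean {preds.mean().item():.3f}, std {preds.std().item():.3f}")
```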
But it's still an open area, and not done routinely. So I think that is a big hurdle there, to be honest. Then, obviously, there is the challenge that the community is also somewhat different in deep learning. There's a huge community that came from computer science. Many statistics departments have, okay, I'm not gonna make friends here, but have missed the boat on machine learning: oh no, we're more econometrics, or we're more, you know, classical statistics.
With the nature of deep learning, it's very easy to go and experiment and be empirical. It's harder to get the theory that goes with it, because we don't even know how to analyze these behaviors. You know, MCMC, okay: you pick up your copy of the Meyn and Tweedie book on the behavior of Markov chains, and then you pick up the book by Robert and Casella, which applies this
to MCMC, and okay, you've got a solid understanding. Not so much in deep learning. That's where there is a gap, which is a challenge for applying the Bayesian workflow in deep learning at the moment.

Okay, yeah, definitely interesting. And I can refer people to episode 68 with Kevin Murphy.
We talked about his two volumes there. Before we close, as usual, I'm gonna ask you the two questions I ask every guest at the end of the show. So, if you had unlimited time and resources, which problem would you try to solve?

Full spoiler: I'm really glad you sent me the questions yesterday night, so I could think about them. Well, I used to work at DeepMind, you know, where the idea was: solve intelligence, and then once you solve that, you can solve everything else.
I would try to solve how to get more empathy into humans. Because once we have that, we change the human forces, and then, okay, all tech can really, really help us rather than destroy us.

And second question: if you could have dinner with any great scientific mind, dead, alive, or fictional, who would it be?

I did not quite think about the fiction part at first.
I don't know whether I would make it a dinner, or just being able to follow them unseen while they work, to see how they work and observe them. You know, kinda like a stalker. Well, yeah, that's the question. I'll say a ghost. A ghost!

That's creepy, for sure. Yeah, let's go haunt the big scientists.

No, I mean, a really personal childhood hero:
Leonardo da Vinci, to have that freedom and to explore that much. I really love to work on multiple things: statistics, machine learning, you know, because they can be useful in so many different things. And the startup I'm advising, Shift Lab, we're taking that approach too. We're hiring, by the way, so hit me up! But the real thing here is that I'm lucky right now to have built a job for myself where
I can work on these many, many different fields. I want to see more of that. And I'd love to, you know, speak with a scientist like da Vinci, who's been in and exploring so many domains.

Great choice. And you would probably have the dinner, or the stalking, happening in Florence or Rome. Yeah, that does sound super cool.
Yeah, that'd be nice. There we go, problem solved. By the way, you'll see on Twitter, I just posted an image: I asked Stable Diffusion for the prompt "Alex and Julien and Iron Man recording a podcast in space".

So, you Twitterers, I put the tweet in the show notes, because I think it's worth seeing. And so, yeah, as usual, I put resources and a link to your website in the show notes for those who wanna dig deeper.
This has been another episode of Learning Bayesian Statistics. Visit learnbayesstats.com for the show notes and more episodes. Our theme music is "Good Bayesian" by Baba Brinkman, featuring MC Lars and Mega Ran; check out his awesome work at bababrinkman.com. I'm your host, Alex Andorra. You can follow me on Twitter at alex_andorra, like the country. You can support the show and unlock exclusive benefits by visiting patreon.com/learnbayesstats.
Those predictions that your brain is making, let's get them on a solid foundation.