Catherine Yeo: Fairness in AI and Algorithms
Episode 5 • 22nd September 2020 • Machine Learning Engineered • Charlie You
Duration: 01:03:27


Show Notes

Catherine Yeo is a Harvard undergraduate studying computer science. She has previously worked at Apple, IBM, and MIT CSAIL in AI research and engineering roles. She writes about machine learning on Towards Data Science and in her new publication, Fair Bytes.

Learn more about Catherine: http://catherineyeo.tech/

Read Fair Bytes: http://fairbytes.org/

Want to level-up your skills in machine learning and software engineering? Subscribe to our newsletter: https://mlengineered.ck.page/943aa3fd46


Take the Giving What We Can Pledge: https://www.givingwhatwecan.org/

Subscribe to ML Engineered: https://mlengineered.com/listen

Follow Charlie on Twitter: https://twitter.com/CharlieYouAI


Timestamps:

(02:48) How she was first exposed to CS and ML

(07:06) Teaching a high school class on AI fairness

(10:12) Definition of AI fairness

(16:14) Adverse outcomes if AI bias is never addressed

(22:50) How do "de-biasing" algorithms work?

(27:42) Bias in Natural Language Generation

(36:46) State of AI fairness research

(38:22) Are interventions needed?

(43:18) What can individuals do to reduce model bias?

(45:28) Publishing Fair Bytes

(52:42) Rapid Fire Questions


Links:

Defining and Evaluating Fair Natural Language Generation

Man is to Computer Programmer as Woman is to Homemaker?

Gender Shades

GPT-3 Paper: Language Models are Few-Shot Learners

How Biased is GPT-3?

Reading List for Fairness in AI Topics

Machine Learning’s Obsession with Kids’ TV Show Characters
