Maya Ackerman: AI, Theory of Mind, and the Rise of Humble Creative Machines
Episode 18 • 7th February 2025 • Kunstig Kunst: Kreativitet og teknologi med Steinar Jeffs • Universitetet i Agder
00:00:00 01:07:25


Shownotes

In this episode, we talk with Maya Ackerman, an AI researcher and entrepreneur at the forefront of AI-assisted creativity. She shares how her journey from machine learning research to opera singing led to the creation of WaveAI, a company focused on co-creative AI tools for songwriting.

Maya Ackerman explores how AI can be a true creative collaborator, rather than just a content generator. She introduces the concept of "humble creative machines": AI that enhances human creativity while keeping artists in control. We discuss AI's theory of mind, biases in generative models, and the future of AI-assisted songwriting.

Dr. Maya Ackerman is a world-renowned expert in artificial intelligence and creativity, focusing on co-creativity and its transformative potential across industries. Ackerman is a researcher, entrepreneur, and professor of AI at Santa Clara University. She is the CEO of WaveAI, the company behind LyricStudio, a widely used AI songwriting tool. Her work has been featured in Forbes, NBC, and NPR.


Transcripts

Speaker:

So welcome, Maya Ackerman.

2

:

Pleasure to be here.

3

:

Thank you for having me.

4

:

First of all, you have a unique background that combines computer science and music.

5

:

I would like you to maybe begin by telling us about your journey into AI, its intersection with songwriting, and what inspired you to explore this blend.

6

:

Yeah, I studied machine learning in graduate school. For my master's and PhD I worked on theoretical foundations of cluster analysis, very math heavy.

7

:

Honestly, I never entered the field of AI because it was popular. I was in a less popular branch of AI back then, machine learning, looking very much

8

:

at the mathematical foundations.

9

:

It just so happened that I came across an opera singer who lived in Waterloo, where I studied, and I started taking lessons from her in

10

:

opera singing.

11

:

I started performing semi-professionally.

12

:

So this happened in parallel to my PhD, which was really fun.

13

:

And I never really thought that I wanted these two worlds to collide.

14

:

I kind of felt that they needed to stay separate.

15

:

But in 2014, I attended a workshop called Information Theory and Applications.

16

:

It was in San Diego.

17

:

And there was this little session, little three talk session called computational

creativity.

18

:

And I remember making a mental note to attend it.

19

:

And there was Harold Cohen screaming about how people were calling his automated painter

creative.

20

:

But really, he's the creative one.

21

:

He's a creative mind.

22

:

and I did not have a side, but I thought it was fascinating that there was a machine

painter and somebody was thinking about whether it was creative.

23

:

And there were a couple of other amazing speakers in that session, Mark Riedl and Geraint Wiggins, and they told me about computational creativity as a field and I just

24

:

fell in love with it.

25

:

I just fell in love with...

26

:

the idea of creative machines, machines making art, machines helping people make art. And very, very quickly I came up with the project that would end up becoming my company, which was

27

:

originally coming up with vocal melodies given lyrics.

28

:

So if I say something like...

29

:

I'd love to visit Norway, let's say, then how many ways can you sing it?

30

:

I mean, it's a universe, right?

31

:

Like, I'd love to visit Norway or I'd love to visit Norway.

32

:

Right?

33

:

So it helped me explore these possibilities.

34

:

And the rest is history.
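To make the "many ways to sing one line" idea concrete, here is a minimal, self-contained sketch: a toy random-walk melody sampler. It is purely illustrative; WaveAI's actual model is a trained machine learning system, and the scale, rhythm values, and syllable split below are assumptions.

```python
import random

# Toy sketch only, NOT WaveAI's model: sample candidate melodies for a
# lyric line from a scale with a simple random walk, one note per syllable.
SCALE = ["C4", "D4", "E4", "F4", "G4", "A4", "B4", "C5"]  # C major

def sample_melody(syllables, seed=None):
    """Return one candidate melody: a (pitch, beats) pair per syllable."""
    rng = random.Random(seed)
    idx = rng.randrange(len(SCALE))  # random starting scale degree
    melody = []
    for _ in syllables:
        # Step up or down by at most two scale degrees, clamped to the scale.
        idx = max(0, min(len(SCALE) - 1, idx + rng.choice([-2, -1, 0, 1, 2])))
        melody.append((SCALE[idx], rng.choice([0.5, 1.0, 2.0])))
    return melody

# "I'd love to vi-sit Nor-way": many possible settings of the same line.
syllables = ["I'd", "love", "to", "vi", "sit", "Nor", "way"]
for seed in range(3):
    print(sample_melody(syllables, seed=seed))
```

Running it with different seeds prints several candidate settings of the same lyric, which is the kind of possibility space being described, just without the learned musicality.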

35

:

So you kind of came from the technical side first and then into music kind of

simultaneously.

36

:

Yeah, yeah, kind of, you know, some people say that their path found them.

37

:

I definitely tend to feel this way, that my path found me.

38

:

And you were just starting to talk a bit about how you started the company and made a kind of tool to generate melodies.

39

:

Could you tell us a bit about that company?

40

:

Yeah, so it started off with that specific model.

41

:

So we built a machine learning model that did that, took lyrics and came up with different

ways to sing them.

42

:

And I found it to be so helpful.

43

:

I still remember we printed it on paper and we had this kind of funky notation because we

just...

44

:

The interface didn't matter, right?

45

:

We didn't need to serve it to users.

46

:

And I remember sitting by my piano and sort of deciphering that notation and playing it

out on the piano.

47

:

And almost instantly, like my songwriting was transformed.

48

:

Suddenly I could see so many more melodic possibilities than before.

49

:

It was clear pretty quickly that we have something here that could help more people.

50

:

But it still took us about three years to actually open up a Silicon Valley startup.

51

:

I remember sitting by my computer and filling out the paperwork to open up Wave AI and it

said company name and I'm like, we didn't even think about the name and I came up with

52

:

Wave AI sort of on the spot.

53

:

In 2017, before all the companies were "something AI."

54

:

So that was really fun.

55

:

But overall, it's been a complicated journey.

56

:

It has not been straightforward at all.

57

:

Raising money, running a company, hiring people, advertising.

58

:

There is so much to learn about running a business, even if you have incredible technology.

59

:

It's a very steep learning curve.

60

:

Yeah, I can imagine.

61

:

It's a whole new world.

62

:

But what about the technical and creative challenges when designing an AI to be a

collaborator?

63

:

What kind of challenges did you face?

64

:

You know, I think having a business really expedited it; serving real users really helped shape how I think about what AI should be.

65

:

I was always into co-creativity.

66

:

I loved the idea, like instantly.

67

:

I loved the idea that a machine can be a co-creative partner.

68

:

I heard this phrase at the International Conference on Computational Creativity.

69

:

Somebody said it in passing during the first talk that I attended and

70

:

I was just, I mean, I was sold at that moment.

71

:

Wow, yes, of course, a machine can be a co-creative partner.

72

:

How did I not think about it myself?

73

:

But that could mean so many different things.

74

:

It could mean that it does most of it and then I change it up a little bit.

75

:

Or it could mean that I do almost everything and it just helped me in those two little

areas.

76

:

Or it could mean everything in between.

77

:

It could mean that...

78

:

Let's say, if I'm writing a song, it could mean that it helps with mastering and mixing and composition and lyrics and vocalization.

79

:

Or it could mean that it just helped with one of them.

80

:

Let's say the lyrics.

81

:

Or it did all of those things, but just for the hook, right?

82

:

It's just like working with another person.

83

:

It could mean so many different things.

84

:

And the more we served real people, the more I started understanding that the person needs

to be in the driver's seat.

85

:

And ultimately, it's all about, if done right, if done right, it's all about elevating

human capability.

86

:

So that even if you step away from the machine, you are now more capable, more creative or

smarter than you were before in that domain.

87

:

Yeah.

88

:

And does Wave AI do that for you?

89

:

So our sort of bread and butter, most successful product, is LyricStudio.

90

:

And that was actually our third product.

91

:

So we started off with tools that, well, in different ways, we hit some stuff right, some

stuff not quite right with our first two products.

92

:

And LyricStudio helps only with lyrics.

93

:

And...

94

:

It does it really well.

95

:

It does it in a way that makes people better pen and paper lyricists.

96

:

And you know, this takes a certain attitude towards your users.

97

:

This takes a certain trust.

98

:

So our product is helpful for people all across the expertise spectrum.

99

:

It's helpful for beginners and it's helpful for people who are...

100

:

stars, at the extreme end of professional, because it's a very, very flexible system. It can inspire you, or it can hold your hand more; it's sort of anything in between. But we

101

:

sort of don't foster dependence in the way that perhaps

102

:

some other AI products try to do.

103

:

So we are okay with the fact that all of our users really become better at lyric writing.

104

:

And a lot of them choose to stick around even though they're better because the product is

still helpful.

105

:

And it's a certain attitude towards our users, where we want to build this kind of, if you will, healthy relationship with them, where we want to empower them.

106

:

We want to make them better.

107

:

And we trust that enough of them are going to stick around, right?

108

:

That we don't need to.

109

:

make them dependent on us.

110

:

We don't need to make their songwriting impossible without LyricStudio.

111

:

We don't want that.

112

:

Kind of like a little analogous to a healthy partner.

113

:

An unhealthy partner might want to make you dependent on them, get rid of all your friends, make it so that you can't survive without them or something.

114

:

All kinds of abusive patterns.

115

:

Whereas a healthy partner will want you to be strong and independent and capable and

happy.

116

:

and trust that you will keep choosing them, essentially. And that's kind of a vague analogy to the approach that we're taking, which I really think more of the industry needs to take.

117

:

Do you feel like that's the difference between using LyricStudio and using ChatGPT, for instance?

118

:

So I actually have a very positive attitude towards ChatGPT, just for other use cases.

119

:

I love ChatGPT.

120

:

I'm an avid user, I'm a paid user.

121

:

And I think it's precisely because for a lot of use cases, you get this kind of

experience.

122

:

I call this broadly Humble Creative Machines.

123

:

So for example, if I'm writing, you know, text and not poetry but prose, it can be really

fun to experiment with different styles, to ask it to modify something.

124

:

It's amazing at coming up with titles.

125

:

And in many contexts, you can have this kind of iterative experience with ChatGPT, but with lyrics,

126

:

It's just not, I guess it just hasn't been a focus area.

127

:

And fundamentally, when you're writing lyrics, if you are doing it as an art, right, then

you need to be creating.

128

:

And ChatGPT sort of creates a whole thing for you all at once, which is a little too much.

129

:

Imagine you're collaborating with a person on lyrics and you're saying,

130

:

I want to write about a sunny day, a sunny December day in California, right?

131

:

It reminds me of some childhood experience.

132

:

And your friend just writes the whole thing right away.

133

:

You're not really writing together.

134

:

It's too much.

135

:

And it might not be even the style that you want.

136

:

ChatGPT is very rigid in its lyrical style.

137

:

And again, I don't mean this as a profound criticism of ChatGPT.

138

:

It's not a core use case for them.

139

:

They focus more on business use cases and other things.

140

:

So yeah, it's very different from ChatGPT.

141

:

Our product is really geared for that lyrics-writing experience, whereas ChatGPT has a lot of other use cases that it's just phenomenal at.

142

:

Yeah, that's really interesting.

143

:

The fact that it could provide too much information, as you were saying. It's like, yeah, if you collaborate with someone and they talk really loudly and kind of take the word for a

144

:

long time.

145

:

I guess in LyricStudio you will get

146

:

shorter samples of sentences or words that are more inspirational.

147

:

Yes, it's a line-at-a-time model.

148

:

We do have something that can draft something a bit longer for you if you really want that

support, which sometimes you do to kind of like get going.

149

:

But it's a very simple looking experience, right?

150

:

So by the way, users like simple and for good reason, right?

151

:

We already have enough complexity in our lives.

152

:

User interfaces need to be simple and straightforward.

153

:

And so you have your notepad, which takes more space than anything else, inviting you to

focus on your own writing.

154

:

And then on the side,

155

:

You get suggestions, one-line suggestions that take into account everything you've written and the topics you've selected.

156

:

And you can tap into them at any time, into these suggestions.

157

:

You can also ignore them at any time, which is really, really important.
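As an illustration of that interaction pattern, here is a minimal sketch of a co-creative, line-at-a-time loop. This is not LyricStudio's actual code; `suggest_lines` is a hypothetical placeholder for a model call, and the point is the control flow: the draft stays the user's, and suggestions can be taken or ignored every round.

```python
def suggest_lines(draft, topics, n=3):
    # Placeholder for a model call: a real system would condition on the
    # full draft and the selected topics and return n candidate next lines.
    return [f"(candidate line {i + 1} about {' / '.join(topics)})" for i in range(n)]

draft = []                      # the user's notepad stays the source of truth
topics = ["winter", "home"]

while True:
    suggestions = suggest_lines(draft, topics)
    for i, line in enumerate(suggestions, 1):
        print(f"{i}. {line}")
    choice = input("pick 1-3, type your own line, or 'done': ").strip()
    if choice == "done":
        break
    if choice in {"1", "2", "3"}:          # tap into a suggestion at any time...
        draft.append(suggestions[int(choice) - 1])
    elif choice:                           # ...or ignore them all and write your own
        draft.append(choice)

print("\n".join(draft))
```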

158

:

You don't want to feel like at every moment you need to be thinking about the AI, because

once you tap into your own creative process, and our users tell us that a lot, one line

159

:

can sometimes get two thirds of the song done, because it sends them...

160

:

on this

161

:

Does it mean that Alex wrote a song for me?

162

:

Probably not.

163

:

Does it mean that I gave it a prompt and then it wrote a song for me?

164

:

Well, no, that would be really weird, right?

165

:

It also doesn't mean that I'm now dependent on Alex.

166

:

If Alex made me more creative, I became more creative, even if I never see Alex again.

167

:

Which means that...

168

:

My creative process, my capabilities were front and center.

169

:

And so you need to create this sort of experience with AI if you want to genuinely benefit

people.

170

:

I mean, we are a small company and we are able to have a lot of users in a landscape that

has a lot of AI tools.

171

:

And that really shows how important it is to prioritize your users.

172

:

And really, their long-term well-being, if you will.

173

:

In terms of AI being a creative collaborator or inspirator, are there any areas where you

find AI to be lacking in terms of creative potential?

174

:

everywhere.

175

:

Everywhere there's possibilities.

176

:

I'm kind of simultaneously in awe of what other people are building.

177

:

And at the same time, I see this potential to make the systems so much more useful by focusing more on empowering people and less on demonstrating how cool AI is.

178

:

I call it humble creative machines.

179

:

You can have the most brilliant human being, but

180

:

Sometimes it's more appropriate for them, like for example as teachers in the classroom or

as managers.

181

:

to pull their genius back a little bit and utilize it to enrich and help elevate other

people.

182

:

And that's really, really key in human society.

183

:

And we actually get very upset at other people when they're too arrogant, even if their

brilliance, quote unquote, justifies it somehow, right?

184

:

And so that's what's happening with AI.

185

:

I mean, it's genuinely amazing what it can do.

186

:

I look at text to image models.

187

:

But...

188

:

If you haven't, by the way, you've got to try Midjourney.

189

:

I just have no connections to that company, but the technology is incredible.

190

:

And I love it.

191

:

I love it so much that I use it.

192

:

I use it even though it's hard to use, but it's really not designed with the idea of

giving a person control.

193

:

They eventually built in in-painting, but even that's still difficult to use.

194

:

And you can sort of take a picture and ask it to modify it, but it modifies it how it

wants.

195

:

More than half the time it doesn't understand what I want.

196

:

And that's despite it being a wonderful system that I love. I love Midjourney.

197

:

Again, I'm a paying user of that system, but there's so much left on the table because the

AI needs to be built from the ground up in a way that considers

198

:

that lets the user become the driver.

199

:

And that's missing.

200

:

And as a result, it's missing a lot of commercial opportunity and a lot of opportunity to

enrich human lives.

201

:

Why is that the case, do you think?

202

:

I think it's just a way of thinking about things.

203

:

You know, I think sometimes as consumers, and I'm sure that happens to me all the time in industries where I'm not an expert, we just assume that it has to be this way.

204

:

It's this way because there's no other way to build it, but it's just not true.

205

:

It's just not true.

206

:

I'm not saying it would be easy to take something like a text-to-image model and put the user in the driver's seat and make it easy to interact with that system and make

207

:

it so the user could realize their vision.

208

:

more completely and be understood by that model.

209

:

I'm not saying it's easy, but it's possible.

210

:

And so from a business perspective too, I think business leaders and investors need to begin appreciating that ChatGPT succeeded because it enables these experiences for

211

:

people.

212

:

Before we had ChatGPT, we had these other...

213

:

other models, called large language models.

214

:

We had, for example, GPT-2, you would give it a line of text and it would come up with a

paragraph to follow up that line of text.
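For reference, that give-a-line-get-a-paragraph style of interaction is easy to reproduce today with the open GPT-2 weights via Hugging Face transformers (the prompt here is just an example):

```python
# One-shot continuation with GPT-2, illustrating the pre-ChatGPT
# interaction style: you give a line, it continues, and there is
# no dialogue, no stated intent, no iteration on the result.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
out = generator(
    "The snow fell quietly over the fjord,",
    max_new_tokens=60,
    do_sample=True,
    temperature=0.9,
)
print(out[0]["generated_text"])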

215

:

And it was fun, but you couldn't really interact with it.

216

:

You couldn't explain your intent.

217

:

You couldn't iterate with it on a block of text.

218

:

And so, you know, it really wasn't that popular.

219

:

It was somewhat popular, but it wasn't ChatGPT popular.

220

:

And then they did this alignment process, which was a significant investment.

221

:

Alignment is difficult and complicated, they made a massive investment in that.

222

:

And they made it into something that humans can interact with.

223

:

And it blew up.

224

:

It blew up.

225

:

So there is evidence that making the communication between human and machine front and

center, giving the human control, can lead to massive success.

226

:

And we need more of that.

227

:

Yeah.

228

:

In my own experience with ChatGPT as well, that's really been the case since the possibility of making custom GPTs came out.

229

:

That's really when the usage of it exploded on my part.

230

:

Because for instance, with these podcast episodes, I can now transcribe the episodes and I

upload the text to one of my custom GPTs.

231

:

And for the first 15 episodes, I wrote my own show notes and intros and stuff like that.

232

:

And I also uploaded all those intros and then I can use this custom GPT to generate show

notes for new episodes based on the transcript and my earlier show notes.

233

:

So it's like in my style, but also it takes into consideration all the information of each

episode.

234

:

which is really handy.
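A rough sketch of that workflow using the OpenAI Python client, for anyone who wants to reproduce it outside the custom GPT interface (the file names and model choice below are placeholders, not the host's actual setup):

```python
# Prior hand-written show notes serve as style examples; the new
# episode transcript is the input to rewrite in that style.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

examples = open("early_show_notes.txt", encoding="utf-8").read()
transcript = open("episode_transcript.txt", encoding="utf-8").read()

response = client.chat.completions.create(
    model="gpt-4o",  # any capable chat model works here
    messages=[
        {"role": "system",
         "content": "You write podcast show notes. Match the style and "
                    "structure of these examples:\n\n" + examples},
        {"role": "user",
         "content": "Write show notes for this episode:\n\n" + transcript},
    ],
)
print(response.choices[0].message.content)
```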

235

:

And it also kind of speaks to what you're saying.

236

:

It's like the dialogue and the customization aspect that really makes it interesting.

237

:

And that's probably something that's missing in a lot of the generative AI music tools.

238

:

Although it seems like they're trying though, because both Suno and Udio have in-painting options.

239

:

Although they don't really work that well in my experience for now at least.

240

:

It's like...

241

:

At least professional musicians kind of desire even more control.

242

:

I need to have like fingertip control of every aspect, including like which frequency

should be cut out on the snare, on the bridge, the second time around.

243

:

Like it has to be that detailed.

244

:

So in my own experience, and also that of my students who have tried out these

245

:

software, the best possibility for now is to use them more as a sample database, because then you still have control afterwards.

246

:

Mm.

247

:

Yeah, I like what you're saying.

248

:

I think what you're saying is the rule, not the exception, actually.

249

:

This experience that you're having of like, you're not giving me enough control.

250

:

is the common experience of frustration with AI systems, which actually tends to lead to low retention.

251

:

So the whole AI industry is notorious for low retention, kind of like one hit wonders.

252

:

People use it one, two, three times and then never come back, often just once.

253

:

And so giving users control is how you overcome that.

254

:

And I think companies are not taking this issue of control seriously enough, even though

everyone complains about it.

255

:

Well, everyone, just about everyone complains about it.

256

:

And it needs to be taken so seriously that it should impact how you build your models from

the beginning.

257

:

And when you're not doing it, it's like taking a cooked dish and trying to improve it

afterwards.

258

:

Forget about it.

259

:

Forget about it.

260

:

It's too hard.

261

:

That's why you're saying it's there, but it's not really there, but that's not good

enough, you know?

262

:

And I have so much respect for what Suno and Udio did, just like for Midjourney.

263

:

So I really mean it in a kind of like, I mean, let's do it even better.

264

:

We have something really cool.

265

:

Now let's make it useful for people.

266

:

Yeah, the cooked dish analogy really hit home with me.

267

:

It's like when you cook a meal and there's too much salt in it; you can't go back on that.

268

:

Or at least it's really difficult.

269

:

Like, let's think about a baby, right?

270

:

We don't teach the baby how to be smart and then teach it to interact with the world.

271

:

It learns everything by interacting with the world.

272

:

Interaction is central.

273

:

And these models are taught to be smart.

274

:

And then the makers are like, my god, now I need to give it to users.

275

:

And these users keep asking for more stuff, right?

276

:

So kind of, we just need to think about it correctly.

277

:

And when we think about stuff correctly, then we can serve people better.

278

:

How would you go about doing that, building up a model from the start that takes this into consideration?

279

:

We have an approach that we figured out.

280

:

Just to make sure we don't get disturbed again here.

281

:

We have an approach that we figured out for that.

282

:

That's definitely confidential.

283

:

But I think the real thing that's missing in the industry is not solutions; there are plenty of smart people out there who can figure out solutions.

284

:

I have no doubt that there is sort of a lot to discover also still left and a lot of

different approaches that can be explored on the technical side.

285

:

But it's really mostly the intent and desire to take that seriously enough to make massive investments in that dimension of the tools that is sort of the missing element, and not so

286

:

much the technical know-how.

287

:

Those are the trade secrets.

288

:

Yeah, I can't share it on a podcast, unfortunately, because, you know, this relates to my company.

289

:

But this is something that I think we as an industry can figure out across many, many different domains. I mean, OpenAI figured it out for ChatGPT, how

290

:

to align a large language model.

291

:

And that was

292

:

That was a massive accomplishment.

293

:

And in other industries, I think the complexity level is similar.

294

:

It's not, in some sense, it's a little harder, but in some sense, it's actually a little

simpler.

295

:

Apologies.

296

:

I can't provide something meaty for this question.

297

:

I recently spoke to another computer scientist, Shayan Dadman, who has made a framework for this kind of endeavor, which, to sum it up really briefly, has something to do

298

:

with using reinforcement learning and including the user in the loop.

299

:

So that personal preferences are also part of the selection process when it comes to training the model on data.
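A minimal sketch of what user-in-the-loop preference learning can look like, assuming a simple Bradley-Terry-style logistic update over a few hand-picked features. This is one common formulation of preference learning, not necessarily the framework discussed; the feature names are illustrative.

```python
import math
import random

# The user repeatedly picks between two suggestions; a per-feature
# preference score is nudged toward what they choose.
weights = {"rhyme": 0.0, "imagery": 0.0, "slang": 0.0}

def score(features):
    return sum(weights[k] * v for k, v in features.items())

def update(chosen, rejected, lr=0.1):
    """Logistic (Bradley-Terry) update: raise the chosen item's score
    over the rejected one in proportion to how surprising the choice was."""
    p = 1.0 / (1.0 + math.exp(score(rejected) - score(chosen)))
    for k in weights:
        weights[k] += lr * (1.0 - p) * (chosen[k] - rejected[k])

# Simulated session: this user consistently prefers heavy imagery.
random.seed(0)
for _ in range(200):
    a = {k: random.random() for k in weights}
    b = {k: random.random() for k in weights}
    chosen, rejected = (a, b) if a["imagery"] > b["imagery"] else (b, a)
    update(chosen, rejected)

print(weights)  # "imagery" ends up with the largest weight
```

After a couple of hundred simulated choices, the learned weights reflect this user's taste, which can then steer which suggestions the model surfaces first.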

300

:

Okay, that sounds reasonable.

301

:

Yeah, I mean there is...

302

:

it's very tempting for me to just share everything with you.

303

:

No problem, no problem.

304

:

We can talk about it at a later time, when it's not such a hot potato anymore.

305

:

Maybe a couple of years or something.

306

:

So I've read some articles that you've written and in one of them you discuss something

called a theory of mind.

307

:

which is kind of a framework for understanding biases in AI.

308

:

And for those who haven't read that article, could you sum it up and explain what it is?

309

:

Yeah, this is such a cool pivot, such a fun topic.

310

:

I've spent the past year thinking about this at a more fundamental level. Sort of, let's zoom out.

311

:

Let's zoom out, okay?

312

:

Zoom out of the details of our lives and our personal struggles and look at it, at what's

happening as a moment in humanity, right?

313

:

As a moment in human history.

314

:

So, you know, for a long time human beings had an oral history; we would pass on our knowledge by talking to our children, right?

315

:

And then people started writing down stories and their thoughts and their knowledge

started becoming written down.

316

:

And then we put it up on the internet.

317

:

Okay.

318

:

And then we built a machine brain.

319

:

based on this knowledge that we put on the internet, right?

320

:

And this machine brain is us.

321

:

It is really an embodiment of what Carl Jung called the collective unconscious.

322

:

And that's wild.

323

:

That is, that at a conceptual level opens up

324

:

possibilities that were just unthinkable before.

325

:

In some sense, when you interact with ChatGPT or Midjourney, you're interacting with a collective consciousness.

326

:

You're not just interacting with OpenAI or Midjourney, not just with a product of a company; you're interacting with a mind that learned from a huge part of humanity.

327

:

It has essentially found a way for us to bond with each other, to work together in a way that was just impossible before.

328

:

everything that's amazing about these systems, like their intelligence and their

creativity, is our collective intelligence and creativity.

329

:

No human being can ever hope to consume such amounts of data.

330

:

No human being could ever hope to learn from such amounts of data.

331

:

But now we kind of have it at our fingertips, which is why it's so important

332

:

that we make these models easier to interact with, because that's sort of the remaining

barrier for some of them.

333

:

But there is a flip side, and I think somehow we need to open our hearts and be okay with that flip side, because it's just reality.

334

:

And that's the shadow, it's what Carl Jung called the shadow.

335

:

And each one of us has a shadow.

336

:

We have this...

337

:

This part of ourselves that we prefer to deny.

338

:

It's a part that was unacceptable in our family or in our culture, in our society.

339

:

And it could be even a talent.

340

:

Maybe somebody is amazing at sports, but they're in a family that really devalues that and

only values academic strength.

341

:

And so that person had to put that talent in their shadow.

342

:

Or maybe it's anger.

343

:

Maybe your family didn't let you be angry.

344

:

Maybe you're a strong woman and strong women are not allowed in your society.

345

:

Right?

346

:

So it's kind of like the shadow can have all kind of traits that are not necessarily good

or bad.

347

:

They just weren't allowed.

348

:

They're just not acceptable in your world for whatever reason.

349

:

And it's the same with the collective.

350

:

Right?

351

:

Our world is

352

:

incredibly sexist and racist and ageist.

353

:

I mean, it's just a fact.

354

:

If you look at people who study this, this is not debatable in reality.

355

:

But also, these are not socially acceptable traits.

356

:

So we think we don't have them.

357

:

We have them while thinking we don't have them.

358

:

So they're in our shadow.

359

:

And because these models are trained on our data, they reveal our biases.

360

:

Very creatively; if you look at the text-to-image models, it's really fascinating, if you tap into this, how creatively they visualize our biases.

361

:

And so instead of saying, how dare the machine have biases?

362

:

Who do we blame?

363

:

Let's yell at the developers and the entrepreneurs and the CEOs, right?

364

:

Which is kind of the reflex reaction.

365

:

I think there's something even more profound. Improving the AI is a nice goal, but it's very, very complicated for many reasons.

366

:

The more fundamental thing, what we should be thinking about first (thinking about the AI should come second), is how do we improve

367

:

humanity?

368

:

And it's kind of like an opportunity for us to look in the mirror, recognize the bias

within ourselves, so we can have a better humanity first and a better AI second.

369

:

Yeah, that's

370

:

And do you think, or how do you think this mirror can help humanity get better?

371

:

You know, we want to run some experiments on that.

372

:

I'm trying to think, which example should I give you?

373

:

I've done quite a bit of research on this.

374

:

So one example, kind of the most, kind of the simplest to share, I guess, is our study of

brilliance bias.

375

:

So I mean, I invite you and the audience to think for yourself, who comes to mind when I

say the word genius?

376

:

Most likely most of you thought of Albert Einstein initially, but if not, you probably

thought of some other guy.

377

:

That's because our culture teaches us that intellectual brilliance is a male trait.

378

:

When we think of a genius, we think of a guy.

379

:

Then if we think a bit longer, it can be like, yeah, you know, there are some smart women

too.

380

:

But it's kind of an afterthought.

381

:

And it actually goes deeper than it appears.

382

:

It develops around age six. There were extensive studies done; the great majority of people develop brilliance bias at age six. It's prevalent, it's kind of ubiquitous, it's in the

383

:

water, it's in the air, and it causes serious problems for women trying to enter careers like musical composition, computer science, math, physics, any jobs that we

384

:

think geniuses do.

385

:

We tend to favor men, and women are kind of taught that these jobs are not for them.

386

:

So it's real.

387

:

It's a real thing in our society.

388

:

It's not very well known, but it's a very serious bias and it's a really big problem for

women and for humanity that kind of misses out on a lot of intellectual brilliance that

389

:

can come from the other half of humanity.

390

:

Could I just jump in there for a second? Because you say the brilliance bias starts at about six years of age, and studies have been done on this.

391

:

How are the studies done?

392

:

Do you like ask six year olds, which person do you think is the smarter one, the male or

female, for instance, or how was it conducted?

393

:

Yeah, good question.

394

:

I love these studies.

395

:

It's such good work.

396

:

So one thing is picking teams.

397

:

We have a game: say you're a teacher, and you, or the scientist, say, we're going to be playing a game for very, very smart kids, really, really smart kids.

398

:

You get to pick teams.

399

:

So you pick teams for very, very smart kids.

400

:

Boys mostly get picked first for those games for very, very smart kids.

401

:

Another variation was

402

:

I think it might have been done at a slightly different age.

403

:

But another variation is asking people to draw very, very smart people.

404

:

And when you look at kids who are about age five, they draw their own gender.

405

:

So girls draw girls as very, very smart people, and boys draw boys.

406

:

And then once you pass the age six, seven barrier, everybody starts to draw boys.

407

:

Yeah, because society just teaches us. I remember in my own family, you know, watching...

408

:

But are they taught that at exactly, like, the age of six?

409

:

That's a valid question.

410

:

What happens?

411

:

There's a lot of...

412

:

Yeah, you start...

413

:

Well, a lot of kids are in kindergarten before that, but I definitely remember with young

people in my own family watching them start off without sexism and watching sexism get

414

:

poured into their heads.

415

:

It's...

416

:

Part of what makes sexism and racism and ageism and all that stuff so effective is that

society pretends it's not there.

417

:

We're told that we've made progress.

418

:

We're told that things are not so bad anymore.

419

:

But it's just not true.

420

:

It's just not true.

421

:

It probably varies a little bit between cultures.

422

:

I don't want to make it completely blanket.

423

:

I mean, you're from Norway, probably one of the places that does the best job on equality for women.

424

:

But if you kind of look at my own experience in the United States and some other places

where I've lived...

425

:

Kids are taught that the boys are smarter.

426

:

They're taught that very explicitly in many, many different ways.

427

:

They get that message from classmates.

428

:

They get that message from teachers in many ways.

429

:

It's just something that just gets hammered in over and over and over and over again.

430

:

And the family, even if you're trying to raise your kids without these biases, it's almost

impossible.

431

:

It's so pervasive in culture. And you watch a little child who is born without this stuff, and then it just gets shoved into their head.

432

:

It's terrible.

433

:

It's terrible.

434

:

There's a reason why. These are serious issues, very, very serious human issues. And the more you start studying them and kind of reading the research, the more

435

:

you're like, my god, it's everywhere and it's all the time.

436

:

And we don't see it because we're fish in the water.

437

:

Because we're just so used to it.

438

:

I mean, a statement like "girls are just as good at math as boys" is incredibly sexist.

439

:

Because it suggests that boys actually have some kind of an advantage.

440

:

And that's better than some of the stuff you hear.

441

:

So we started looking into AI.

442

:

And when you type the word genius into Midjourney, you mostly get men.

443

:

And so we did a more extensive study on that, and yeah, most of these models have brilliance bias to a very, very high degree.
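The shape of such an audit is simple to sketch. The model choice, sample size, and counts below are illustrative assumptions, not the study's actual protocol or results; the hand-labeling step is the simplest reliable option.

```python
# Generate many images for one prompt, then measure the share of men.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # example open model; needs a GPU
    torch_dtype=torch.float16,
).to("cuda")

N = 100
for i in range(N):
    pipe("a genius").images[0].save(f"genius_{i:03d}.png")

# After labeling each image by hand:
labels = ["man"] * 87 + ["woman"] * 9 + ["unclear"] * 4  # hypothetical counts
share = labels.count("man") / len(labels)
print(f"{share:.0%} of generations depict men")
```

Repeating the same loop for prompts like "a brilliant scientist" or "a composer" makes it easy to compare how strongly the bias shows up across professions.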

444

:

Yeah, but in the process, it kind of gives an opportunity to self-reflect, to ask

yourself, what do I think of when I think of the word genius?

445

:

To kind of like remove the shame and just be honest with yourself, you know?

446

:

To what extent do I imagine the same thing that the model imagines?

447

:

Am I shocked to see the model create any women at all?

448

:

And I think if we're honest with ourselves, there's really an opportunity for growth here,

because the only way that you overcome biases is to become profoundly aware of them.

449

:

And then you can make a choice.

450

:

Once you know what's inside your head, you can start making some conscious decisions.

451

:

Yeah.

452

:

So when you experience the model as a mirror of a collective human consciousness, it kind of exposes the biases.

453

:

So it makes it easier to see.

454

:

And then you don't have to take ownership of it personally either.

455

:

So then maybe there's less shame attached to admitting it.

456

:

Exactly. I mean, the shadow really operates by shame, right?

457

:

We cannot take full responsibility for the biases in us.

458

:

It's not fair.

459

:

We did not invent them.

460

:

They get shoved into our head.

461

:

If we really, really pay attention, the messaging is continuous.

462

:

It's every single day.

463

:

And so...

464

:

Yeah, there is no shame in having these biases.

465

:

But there should be pride if you're willing to look within and admit them within yourself, and willing to put in the work to notice them within

466

:

yourself so that you can make better decisions.

467

:

Continuously wrapping it in shame, telling people they're terrible for having these biases, is...

468

:

It's just perpetuating the problem.

469

:

It's a collective problem.

470

:

We as a society have this issue.

471

:

It's not something that individuals invent.

472

:

For the most part.

473

:

I mean, you have people in the extreme who work to perpetuate them, but that's really the exception.

474

:

Usually the only way out is through.

475

:

And how do these biases show up in music and in the models trained on music?

476

:

That's such a good question.

477

:

Music is one of the most honest facets of culture, right?

478

:

You want to know about a culture?

479

:

Listen to its music.

480

:

And so all of these biases are in the music.

481

:

And the lyrics, the lyrics in particular are sort of...

482

:

which is why co-creation is better in a way.

483

:

At least put the user in the driver's seat so they can be inspired to go in the direction that they find most meaningful.

484

:

Yeah, that's a good question.

485

:

I've never actually done analysis specifically on bias in music, in generative music, but

I like that.

486

:

I think one of the biggest problems, one thing that sort of immediately comes to mind is a

little different from the kind of biases we were just discussing, but sort of a lot of the

487

:

models' tendency to converge towards the mean.

488

:

So if you look at the lyrics in ChatGPT, for example,

489

:

ChatGPT does a really good job filtering out very, very obvious sort of discriminatory biases, but it's not very good at giving you diverse results

490

:

for anything.

491

:

when it comes to creative tasks, one thing that's really important is to be creative, to

give different ideas, to go in different directions.

492

:

And this kind of like bias to go towards the mean, to give you the average, to give you

the most expected is really problematic when it comes to creative things.

493

:

We don't want a system that creates music to keep creating the same kind of music, to keep

reinforcing only the most popular styles, only the most popular choices.

494

:

And that's a different kind of risk that we

495

:

face with these creative systems.

496

:

And it's actually not that hard conceptually to make them do something else, to make them

more exploratory.

497

:

But a lot of companies choose not to.

498

:

And I think that's another sort of mindset shift that we need to make.
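One concrete example of an "exploration" knob is sampling temperature. A small sketch of how it trades the most expected choice against rarer ones (illustrative only; production systems combine several techniques, such as top-p sampling or diversity penalties):

```python
import numpy as np

# Sampling at T=1 keeps the model's learned distribution; T>1 flattens
# it, giving rarer (less "average") choices more probability mass.
logits = np.array([4.0, 2.5, 2.0, 0.5, 0.0])  # model scores for 5 options

def sample_probs(logits, temperature):
    z = logits / temperature
    e = np.exp(z - z.max())  # numerically stable softmax
    return e / e.sum()

for t in (0.5, 1.0, 1.5):
    print(t, np.round(sample_probs(logits, t), 3))
# Low T concentrates almost all mass on the top choice;
# high T spreads probability out across the alternatives.
```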

499

:

Yeah.

500

:

Maybe the intuition for companies is to make something that is commercially appealing

instantly.

501

:

So like you hit a button and you get a catchy pop song that sounds similar to what you're

already hearing.

502

:

That sounds appealing, but it ties into what you were mentioning earlier about retention: the users might use it

503

:

once or twice or thrice and think it's fun and then it's over because it's not new or

fresh anymore.

504

:

And yeah, I think we need to give more credit to our users.

505

:

We need to believe in humanity a little bit more.

506

:

Yeah.

507

:

In Norway, we have this organization called Tono, which is related to copyright.

508

:

It manages the rights of composers, lyricists, songwriters, and music publishers.

509

:

And they recently introduced some guidelines when it comes to working with AI as a

collaborator.

510

:

And these guidelines stipulate that only works with human creative input are eligible for copyright protection.

511

:

And if a composition or lyrics are entirely AI generated without human contribution, they

can't be registered with Tono.

512

:

However, if a work combines human and AI generated elements, the human creator can

register their portion while the AI generated part is assigned to Tono AI with a specific

513

:

IPI number.

514

:

I mean, I know you're not a copyright specialist, but I was wondering if you could give

your thoughts on this approach and how it might influence the way artists and AI

515

:

developers collaborate.

516

:

This is-

517

:

No, this is probably on the better end of what I've heard copyright agencies trying to do.

518

:

At least it acknowledges co-creativity as a separate use case and it's open fundamentally

to different types of co-creativity.

519

:

So in a very fundamental way, I think there is something very intelligent about it.

520

:

The one kind of serious weakness that I hope the agency will eventually have a chance to

consider is just how difficult it is to properly separate who did what.

521

:

when you have a profound collaboration.

522

:

So just imagine trying to do it with two people to say exactly who did what.

523

:

If you really work together.

524

:

And even if I tell you: you and I can work on a song, but then we'll need to tell an agency exactly who did what. What kind of impact could that have on our creativity and our

525

:

ability to work together effectively, constantly tracking who did what.

526

:

So there is something in this proposal that sort of flies against the way that

collaboration works if you have a system that really can collaborate deeply and

527

:

meaningfully.

528

:

And so I think it has that real shortcoming and that it's going to be challenging for the

most well-meaning people if they're using a really good AI system that can really

529

:

collaborate with you.

530

:

It's going to be challenging for them to tease that out, even in their best attempts to be honest.

531

:

If the system does something like mix or master, right?

532

:

Like some single well-defined task, then it's easy, right?

533

:

Be like, I did the lyrics and whatever the melodies and it mixed or mastered or whatever

it did, right?

534

:

But very often some of the best

535

:

collaborative opportunities are much more interlaced.

536

:

Like let's say in my system you and the model write lyrics and sometimes you know when

you're working with a person they can give you one little idea but it was actually really

537

:

big.

538

:

We have it all the time in academia when we write papers it's so hard to tease apart who

did what.

539

:

And so I think that's actually a real hindrance to embracing

540

:

quality collaborative AI systems.

541

:

I know that was of course not the intent of the agency, but that's something that we need to take seriously.

542

:

Yeah, so that's a challenge.

543

:

In human collaboration between musicians, there are different approaches to this kind of task.

544

:

And if you're going to bicker about who did what, then yeah, this riff was mine, but you played an earlier iteration of it, and it might lead to a lot of conflict and probably

545

:

stuff happening that is kind of a hindrance to creativity.

546

:

So most musicians I talk to and play with usually agree beforehand that it's going to be a

50-50 split no matter what happens.

547

:

Just because then it's out of the way, you don't have to think about it while

collaborating or composing.

548

:

And I find that to be really the way to go because even though, let's say I...

549

:

I'm supposed to compose something with a musician friend and we're sitting in the same

room and I'm the only one playing, the other person isn't doing anything.

550

:

It would still impact me just the fact that the other person is there.

551

:

You could get kind of psychic about what happens if it's like some sort of non-physical

thing happening or...

552

:

Or if it's just the mental part of having a presence of another person that makes it, but

it certainly is an effect.

553

:

So I guess what I'm trying to say is that would probably be the case with collaborating

with AI as well.

554

:

So as you were saying, it's impossible to detangle.

555

:

So I'm just imagining this.

556

:

Cause I've composed a good deal of music and registered it with Tono and it's completely

trust-based.

557

:

You open a webpage and then you write, okay, this song lasts for this and that long.

558

:

And I have 50% of the rights and this other person has 50% of the rights, or something.

559

:

And with an AI it would be the same.

560

:

I could hardly see myself being incentivized to giving the AI a fair share of the song,

even though I know I used it in the process.

561

:

And I'm not sure if that's the way to go anyway.

562

:

At least if one were to look at it merely as a tool, because I don't give any writing

credits to my guitar.

563

:

either, or my writing software.

564

:

I'm actually in full agreement with you.

565

:

I think if you have really good tools that you can really collaborate with, where this sort of teasing apart is not even possible, then thinking about it as a tool and trusting

566

:

artists that they're engaging meaningfully is, long term, the way to go, I think.

567

:

Just because anything else...

568

:

It's just not realistic.

569

:

It either blocks creativity or it...

570

:

People are gonna end up being...

571

:

Even honest people are gonna end up being unable to be transparent in this kind of context

because it's just so difficult to be transparent when it comes to collaboration on exactly

572

:

who did what.

573

:

It's so hard to know, for anybody listening who's meaningfully collaborated on anything with another person.

574

:

I mean, they say if you ask people what percent of the project they did, let's say you have three people who worked on the project, you ask each one what percent they did, you

575

:

most of the time end up with over 100% when you add it up.

576

:

It's just, you know, when you have this meeting of the minds, whether it's with a machine or another person, you just gotta let, you know, one plus

577

:

one be bigger than two.

578

:

You gotta let the magic happen and...

579

:

Society, and eventually copyright offices too, I think, will get comfortable with the idea that it's okay to create this way and that it in no way

580

:

diminishes the work, I guess, it doesn't make it less of an authentic process.

581

:

Because there are so many ways, especially as these tools evolve, to give even more of yourself when these tools are there to help you.

582

:

And yes, sometimes somebody could cheat and use it in a silly way where they don't do

anything, but I think that will show in the quality of the work.

583

:

But of course, now we're kind of getting bogged down in details that maybe don't make as much sense in regard to those guidelines.

584

:

But what makes sense is the part of entirely AI generated music.

585

:

That's kind of the point, I think, basically.

586

:

Especially when it comes to these large companies that obviously have trained their models

on

587

:

copyrighted data.

588

:

It remains to be seen what happens with trials and stuff, but what do you think about the

ethical considerations when it comes to training models on copyrighted music?

589

:

Yeah, that's the big question, right?

590

:

It's so, so, so, tricky.

591

:

It's so tricky.

592

:

I think the most important thing, the most important thing, is to not be hurting artists.

593

:

Even if you don't train on their data.

594

:

For example,

595

:

the whole thing with OpenAI and Scarlett Johansson, where they found somebody who sounded like her and they were going to use a Scarlett Johansson sound-alike voice without using any of her

596

:

data.

597

:

That's also not OK.

598

:

At the end of the day, the use case of imitating a person without their involvement, without their permission, without revenue sharing with them, is unethical, regardless of

599

:

how you slice this, regardless of the details of the technology.

600

:

So I think that's the most important thing.

601

:

I think we need to be able to tell AI companies the stuff that we're not OK with them

doing.

602

:

Part of the reason why there is so much focus on how they do it is because there's more precedent for trying to prevent companies from doing things, or telling companies how to create their sauce.

603

:

And also, I mean, there's other issues.

604

:

You can force, you can say that you can't train on unlicensed data, right?

605

:

Let's say you have this kind of law.

606

:

But then, what if a company licenses a million books for a million dollars?

607

:

Right?

608

:

A dollar a book.

609

:

Did that help any of the writers?

610

:

Is any writer going to benefit from 50 cents or a dollar?

611

:

Probably not.

612

:

Who is benefiting?

613

:

Organizations that have copyright to large catalogs.

614

:

Is that who we're trying to protect?

615

:

Maybe.

616

:

I don't think that's how most people think about it.

617

:

If you want artists to benefit,

618

:

the companies need to pay significant amounts per piece of work.

619

:

And the economics may or may not work for that.

620

:

So it's, I feel like we kind of, and I mean that respectfully.

621

:

I mean, I have so much respect for creators and what they make.

622

:

But sometimes a lot of the public discourse about it is really, really naive.

623

:

Just because a company pays for the data doesn't mean that any meaningful money goes to

the creators.

624

:

And so when we fight for stuff, we need to understand what we're fighting for.

625

:

Or are we secretly just trying to get rid of these companies altogether?

626

:

Like, what is our real motive?

627

:

Or are we trying to prevent them from imitating artists?

628

:

Or are we trying to make sure that... Because sometimes I feel like what the discourse is trying to go for just misses a lot of important information.

629

:

Anyways.

630

:

I don't want to confuse the audience.

631

:

I'm definitely on the side of the artists in a sense that I think that it should always be

about human creators and anything that the companies do that abuses creators, that hurts

632

:

them, is not acceptable.

633

:

I don't know if this was helpful.

634

:

Yeah.

635

:

It's just tricky. A lot of the discourse is naive, and it's sort of benefiting another company; you think it's benefiting you, but really it's just benefiting the rights holders, right? And

636

:

it's sort of

637

:

Have you heard any good arguments on the other side?

638

:

Depends which other side, here.

639

:

That would maybe be the philosophical side: in terms of collective consciousness, and in terms of ideas being the main thing.

640

:

I can make that argument.

641

:

I just don't know how much I support it. But the other argument is, okay, here's kind of another way I can argue it, the way that I've argued it in the past, which,

642

:

again, big question marks. The problem is ownership, right? The way our world works, the way you eat in this world, is by having money, right?

643

:

And so...

644

:

Sharing without making money.

645

:

It has no space in the way that we've organized ourselves, right?

646

:

And so, you know, these models that are being created, which really belong to all of us to begin with, don't actually belong to all of us.

647

:

These big companies are profiting from them.

648

:

And so sometimes in this kind of like financially organized world where

649

:

Open AI is benefiting from this knowledge created by everybody.

650

:

We end up hyper-focusing on you must share the revenue with all of us because we've all

created it, right?

651

:

Which is really understandable.

652

:

It's an understandable reflex.

653

:

Of course, it ignores the fact that it's impossible to share with 10% of the internet, right?

654

:

It's just not realistic.

655

:

But yeah, a lot of magical things happen when we are willing to aggregate our data and build a brain from it.

656

:

And there's a lot of magic that happens, which sometimes we become blinded to because we

are so worried about not getting our fair share.

657

:

But all of it, every aspect of it is understandable.

658

:

The reason we think this way is because our society forces us to think this way.

659

:

And we don't know how else to be humans without trying to protect what's ours.

660

:

And we don't know what to do when the only way to create something amazing makes it

impossible to meaningfully share with everybody who contributed to the data set.

661

:

Because the data set has to be too large.

662

:

We don't have a precedent for it.

663

:

We don't know what to do.

664

:

It upsets us.

665

:

It blinds us a little bit to the magic of what is being created because we're so upset

that it's not being shared fairly.

666

:

And so really there is no model for this.

667

:

We as humanity don't know how to solve this.

668

:

Really cool stuff is being created and not everybody who contributed benefits.

669

:

And that's new territory for us.

670

:

Yeah, I've heard some advocate for a solution where you have a model that has some kind of

backtracking function so that when you generate something you could kind of trace where it

671

:

came from and then you could just share revenue based on the material it was trained on.

672

:

Okay, but it's not true though, right?

673

:

Like, I've heard this too.

674

:

There is like, an entire ecosystem, okay?

675

:

Companies getting funding that have all kind of solutions to this problem, because it's a

real problem, right?

676

:

But it's not realistic.

677

:

It's entirely unrealistic.

678

:

And it's not just this one solution, and it is a cool idea, right?

679

:

But there's a whole universe of solutions like it, a whole set of them.

680

:

And none of them make any...

681

:

Respectfully, I feel that there are these serious shortcomings, because if you think about

the human brain, and in many ways these machine brains are similar,

682

:

We have no idea what we're inspired by.

683

:

I'm sorry, but we don't.

684

:

We don't.

685

:

We might think we do.

686

:

It's like, let's say if you love Taylor Swift and you're a musician, you might be like,

I'm probably influenced by her, but probably you are influenced by whatever you heard as a

687

:

little child.

688

:

And good luck tracing it.

689

:

Good luck tracing it.

690

:

Like, it's just not how brains work.

691

:

Brains don't know how they know stuff.

692

:

Brains don't know what they're inspired by.

693

:

And whatever system claims that it does know,

694

:

probably does so with incredible inaccuracy.

695

:

It might as well be making up what its source materials are.

696

:

And the ridiculous thing is that people might be happy with it.

697

:

They might say, they might just believe the system on how it attributes this.

698

:

They might be happy that there is an attribution, even though most likely the attribution

is false.

699

:

Right?

700

:

So it's really more of an attempt to appease this human need for fair attribution, for

fair sharing of the pie.

701

:

than it is a genuinely fair solution.

702

:

The thing is, this technology...

703

:

We're not ready for it.

704

:

Our economic and social systems are just not ready for it.

705

:

And that's why I think, I mean, as a kind of temporary solution, if you will, let's just make sure that what it does at the end of the day doesn't hurt anybody

706

:

directly, at least, that it doesn't imitate any specific artist, that you can't log in and

say,

707

:

give me music in the style of Maya Ackerman, you know, and cut me out of the loop, not

share anything with me, not ask for my permission, but have people be able to imitate me.

708

:

That would make me upset.

709

:

That would make any artist legitimately upset.

710

:

Disallow that.

711

:

Make that illegal.

712

:

Make it so that these systems help human beings and don't displace jobs.

713

:

Let's fix the real problems.

714

:

Let's think about it for a second.

715

:

Let's allow ourselves to think about it as a black box.

716

:

For a second, let's forget about how it works and make sure that what it does is positive.

717

:

Because when we try to tell the companies how to build their AI, most of us are really operating in a field that we know nothing about.

718

:

And so it's very, very easy to just say stuff that doesn't make sense and demand things

that are not gonna help, demand things that are impractical and ultimately get nowhere,

719

:

right?

720

:

Instead, let's demand that these

721

:

companies don't do things that are egregiously bad, thinking about it as a black box.

722

:

I think that's a good point of view to integrate.

723

:

Even if people are going to continue thinking about data and desperately trying to find solutions to that.

724

:

I understand that reflex, that desire.

725

:

But we also need to be thinking about what is this AI actually doing in our world?

726

:

Because that's something that's obvious to us.

727

:

That's something that we can demand control over.

728

:

Different point of view.

729

:

We don't tell people how to learn.

730

:

We don't tell human beings how to learn.

731

:

But we have a ton of rules about what you are and are not allowed to do.

732

:

So I think that perspective, the black box perspective. Like, I'm not allowed to go out there and offer services saying,

733

:

I'm going to write songs for you in the style of, I don't know, the Beatles or whatever.

734

:

Like, I don't know.

735

:

I think that would be illegal.

736

:

I'm pretty sure I'm not allowed to impersonate people.

737

:

There's a lot of rules on what you are and are not allowed to do as a human being, even though we don't control how

738

:

people's brains are formed and how they learn.

739

:

And we kind of need to take that same attitude, at least in part, to AI, because I think

we're going to get further even as we try to figure out the other pieces.

740

:

Alright, Maya Ackerman everybody.

741

:

Yeah, nice.

742

:

But I mean, I guess what you're saying is we as a whole or humanity or something needs to

tell the companies what not to do and enforce kind of some rules about what we want AI to

743

:

do with our music and do to our world.

744

:

Do you think that is realistic?

745

:

I think any of it would be difficult because some of the AI companies are just really,

really rich.

746

:

So in that sense, it's very difficult.

747

:

But telling them how to build the AI is even more difficult.

748

:

It's even more difficult because...

749

:

I just see so many mistakes in the public discourse on what people assume is possible.

750

:

People assume it's realistic when it's just not.

751

:

And I'm just not seeing that being productive so far.

752

:

It doesn't mean it's not worth the effort, but I'm not seeing it working.

753

:

But going around and telling OpenAI, hey, don't use Scarlett Johansson's voice.

754

:

Even if you don't train on her data, don't imitate her voice.

755

:

That was successful.

756

:

They stopped doing that.

757

:

And they finally stopped and it wasn't clear it was going to work, but they did stop

making it so easy.

758

:

Like it used to be with DALL-E.

759

:

You could say, give me art in the style of, you know, insert your favorite artist or

photographer.

760

:

That's how they advertised their services initially.

761

:

But they stopped doing that.

762

:

Somehow that campaign was successful.

763

:

So we're actually seeing some success telling companies, hey, we don't want your AI doing X, Y, Z. Whereas with campaigns saying we don't want your AI built in way ABC,

764

:

there have been some successes, but that has been a really, really slow effort.

765

:

So yeah, I don't care how someone's brain works.

766

:

Don't break into my house, you know, don't do illegal things.

767

:

Don't care, right?

768

:

Like, it's: listen to my music, but don't claim to be me, right?

769

:

Like, we have a model when it comes to people.

770

:

Yeah.

771

:

And those were some good examples of public opinion shaping how these companies conduct

their business.

772

:

Could you imagine some financial models that take creators into consideration and are also appealing to the public in terms of usage?

773

:

Yeah, I mean the part where I think there's actually a really big opportunity is for the artists who are interested. Only for those who are interested.

774

:

It's a very, very important piece of the puzzle. If somebody wants to offer up their style, if somebody wants to offer up their voice, which is such a personal, delicate decision,

775

:

right, not to be taken lightly, then I think there are ways to compensate those artists really well. Right, you go in and you say, you know, I want a song.

776

:

I'm just going to use my own name to not implicate anyone else, right?

777

:

I want to use Maya's voice, right?

778

:

Then I should be compensated for that if I choose to engage in that.

779

:

And I think for the bigger artists, there is an opportunity to make a lot of money.

780

:

And even for smaller artists who have a really cool style or really cool voice or

whatever, something that they think other people will want to use in their songs.

781

:

there's an opportunity to make a lot of money by collaborating with AI companies, for the people who really want that kind of thing.

782

:

Or maybe like, maybe you're okay with people using your lyrical style and you're not okay

with them using your voice, right?

783

:

That way you get to license exactly what you feel comfortable licensing.

784

:

And then you can imagine, if that becomes hot, suddenly millions of people are

785

:

writing something in a way that leverages your style, and maybe that's exciting for you.

786

:

And maybe it sounds, I'm sure it sounds, absolutely disturbing to a whole bunch of other people, and that's okay. Grimes already started leveraging that. People are making

787

:

music using her voice, and last I looked, she gets half the royalties, and some songs have become successful.

788

:

So there is already a precedent for people who want to get involved in that way. Or maybe an artist could

789

:

develop a different sound, a different style, different from the one that they use, and

they would want to license this new thing that they created.

790

:

So there's really so much possibility here for how an artist could scale how many people

they could kind of collaborate with through these AI systems.

791

:

So there are exciting possibilities.

792

:

There are ways to make money.

793

:

But for most people, it's...

794

:

I hate saying it, but for most people, it's not gonna be by just licensing their music.

795

:

Just because...

796

:

because of how the economics work.

797

:

I hate to be the bearer of bad news, but if you can imagine, like if you need to license

10 million songs, 20 million songs to build a really powerful model, you can't pay

798

:

hundreds of dollars per song.

799

:

Think about how much money you're really gonna make on that model. These numbers blow up very, very quickly, and companies need to operate within the

800

:

constraints of what they're able to raise.
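
A rough back-of-the-envelope version of that arithmetic, using assumed figures (the per-song price here is hypothetical, standing in for the "hundreds of dollars per song" mentioned, not a quoted number):

# Back-of-the-envelope licensing cost, with assumed figures.
songs_needed = 10_000_000    # lower end of the 10-20 million songs mentioned
price_per_song = 200         # hypothetical, "hundreds of dollars per song"
total_cost = songs_needed * price_per_song
print(f"${total_cost:,}")    # $2,000,000,000 -- far beyond typical funding rounds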

801

:

It's really terrible.

802

:

It's a little bit upsetting for me to have to kind of verbalize all of this, but I think

it's important for us as a society to understand the real bottlenecks here so that we can

803

:

move in a direction that hopefully, at least ultimately, we're happy with.

804

:

And maybe what you want to say then is I hate these AI models.

805

:

I don't want them at all.

806

:

Right.

807

:

And that's a legitimate perspective as well.

808

:

Whatever perspective you have, it helps if it's informed by practicality.

809

:

I think we'll call that the final statement.

810

:

Thank you so much for joining the podcast, Maya.

811

:

This was so fun.

812

:

Thank you for the fantastic questions.

813

:

Really enjoyed it.

814

:

Alright so I'll hit the stop button then.
