Music of the Future: Tech Innovations and Inventions
Episode 6 • 11th July 2023 • So Curious! • The Franklin Institute

Shownotes

In the future, how will we make and experience music?

In today's episode, Bey and Kirsten speak with some of the folks inventing that future. First, they sit down with Dr. Jason Freeman to learn about (and hear samples from!) Georgia Tech's annual Guthman Musical Instrument Competition, an international contest showcasing newly invented instruments. Then, it's time for a field trip! So Curious goes on the road to visit Dr. Youngmoo Kim and his colleagues at Drexel University's Music and Entertainment Technology Lab. From vests that allow the deaf to experience music through their skin, to AI that can create music from noise, check out some of the incredible tech innovations being developed right here in Philadelphia!

Links for today's episode:

Transcripts

Speaker:

What's up? My name is the Bul Bey.

Speaker:

And I'm Kirsten Michelle Cills.

Speaker:

We're your hosts, and this is So Curious!

Speaker:

A podcast presented by the Franklin

Speaker:

Institute, and today, we have a really exciting episode for you.

Speaker:

Because we're looking into the future!

Speaker:

Oh, damn, I didn't know we could do that.

Speaker:

This whole season is on the science of music, and today we're exploring a bunch

Speaker:

of new, innovative technologies in the world of music.

Speaker:

First, we'll be sitting down with Dr.

Speaker:

Jason Freeman from the Guthman Musical

Speaker:

Instrument Contest, a long-running competition for new musical inventions.

Speaker:

And then we're going to take this show on

Speaker:

the road for our first ever field trip for the So Curious!

Speaker:

Podcast. And we're going to get a tour of Drexel

Speaker:

University's Music and Entertainment Technology Lab with the director, Dr.

Speaker:

Youngmoo Kim.

Speaker:

Bey, I'm curious - ding, ding, ding! -

Speaker:

About your personal experience with music tech.

Speaker:

So, what is the first audio format that you remember listening to?

Speaker:

Cassette. Cassette tape, ummmm...

Speaker:

Okay. Cassette tape, CDs, obviously.

Speaker:

Torrents!

Speaker:

Yeah, right!

Speaker:

I was not in the cassette era, but I did have HitClips, if you remember those.

Speaker:

That was like - Throwback!

Speaker:

-the little baby thing, and you put it in

Speaker:

and you just get, like, 90 seconds of the song.

Speaker:

And for some reason, we were like, "that's all I need.

Speaker:

It's perfect!" Okay, that's enough

Speaker:

nostalgia, because we're going to actually talk about the future of music today.

Speaker:

Enough with the past.

Speaker:

We are talking to Dr.

Speaker:

Jason Freeman. So.

Speaker:

Welcome to the So Curious! Podcast, Dr. Freeman.

Speaker:

Thanks so much for having me! Absolutely!

Speaker:

Awesome, real quick, let's jump into it.

Speaker:

Introduce yourself, and let us know what it is you do.

Speaker:

So, I am a professor of music at Georgia

Speaker:

Tech here in Atlanta, and I also direct the Guthman Musical Instrument

Speaker:

Competition, which is an event that we hold on campus every year, identifying the

Speaker:

newest, most innovative, and kind of creative musical instruments in the world.

Speaker:

So let's get into this musical instrument competition.

Speaker:

What is the history behind it?

Speaker:

How did it get started?

Speaker:

What's the mission behind it?

Speaker:

So the competition actually started in the 1990s as a piano competition.

Speaker:

There was an alum of Georgia Tech, Richard Guthman.

Speaker:

He and his wife are huge fans of the

Speaker:

piano, and they really wanted to start a piano competition at Georgia Tech.

Speaker:

And we did that for many years, and had a wonderful piano competition here, jazz and

Speaker:

classical, and people from all over the Southeast came to compete.

Speaker:

But Georgia Tech was really changing, and we now have Bachelor's, Master's, Doctoral

Speaker:

programs in music technology, where students are really learning how to create

Speaker:

new products and services that change the music industry.

Speaker:

And so we talked to the Guthmans about how we might re-envision the competition.

Speaker:

And so we arrived at this idea of doing a

Speaker:

musical instrument competition, where the competitors don't play a musical

Speaker:

instrument, but they invent musical instruments.

Speaker:

And so we have an international call that comes out every year.

Speaker:

People from all around the world submit their instruments that they've created.

Speaker:

We have a panel that reviews and selects a group of finalists, who we invite to

Speaker:

campus for two days to share their work with our community here and with judges

Speaker:

that we bring in who help us to pick the winners each year.

Speaker:

And this all culminates in a big public

Speaker:

concert that we do at the Performing Arts Center on campus, where we match up each

Speaker:

of these finalists with their instruments, with a musician from the Atlanta area, and

Speaker:

they present a performance together that's showcasing what that instrument can do.

Speaker:

Wow. Yeah this is so cool.

Speaker:

And that must be hard to judge, too,

Speaker:

because it's truly like apples and oranges, right?

Speaker:

Everything is so different.

Speaker:

It is very hard to judge!

Speaker:

I was a judge once in one of the early years.

Speaker:

But every year there is kind of a debate about - what are the criteria that matter,

Speaker:

how do we compare these things to each other?

Speaker:

And over time, we've arrived at three kind of broad based criteria that really matter

Speaker:

to us in the competition that we carry from one year to the next.

Speaker:

One is musicality.

Speaker:

So what kind of music can the instrument make?

Speaker:

How expressive is it?

Speaker:

Is it something that rewards practice?

Speaker:

Is it something where you can develop virtuosity over time?

Speaker:

What is that potential to make new and

Speaker:

different kinds of music and to be expressive with it?

Speaker:

The second thing is engineering.

Speaker:

So, what are the ideas that went into building this thing?

Speaker:

Are there new kinds of sensors, or new kinds of software or artificial

Speaker:

intelligence that are helping it make new kinds of sounds, or create new kinds of

Speaker:

tactile interfaces, or things that haven't been done before?

Speaker:

And then the third thing is design.

Speaker:

What does it look like?

Speaker:

What is the form factor?

Speaker:

Is it something that's beautiful to look at?

Speaker:

One of our judges a few years ago was the chief curator of the musical instrument

Speaker:

collection at the Metropolitan Museum of Art.

Speaker:

So he brought this incredible perspective

Speaker:

to our panel, looking at this history that goes back thousands of years.

Speaker:

If you think about the collection of instruments in the Met, and the study of

Speaker:

musical instruments historically, and thinking about how these new instruments

Speaker:

really offer different perspectives and different designs, that iterate on and

Speaker:

move beyond the instruments that we've been using for centuries.

Speaker:

One of the things that surprised me most with the competition, we expected that all

Speaker:

the instruments would be kind of high tech electronic musical instruments, and the

Speaker:

majority of the ones in the competition certainly are.

Speaker:

But a lot of instruments have actually been acoustic instruments.

Speaker:

Many of our winners have been acoustic musical instruments.

Speaker:

In 2022, the winner was this instrument called the Glissotar, which looks kind of

Speaker:

like a clarinet or a soprano saxophone, but it has no keys on it.

Speaker:

It has a ribbon that you move your fingers

Speaker:

up and down on, giving continuous control over pitch.

Speaker:

You put a soprano saxophone mouthpiece on it.

Speaker:

It was an incredible instrument that made

Speaker:

sounds I had never heard before in my life, but completely acoustic.

Speaker:

(Glissotar sample plays)

Speaker:

Can you walk us through the different categories of instruments?

Speaker:

I mean, you have string, you have wind, and are there some other ones that we may

Speaker:

overlook or just not pay much attention to?

Speaker:

So, if you were studying orchestration or

Speaker:

something like that and looking at instruments in a traditional way, you'd

Speaker:

look at wind instruments, you'd look at brass instruments, you'd look at

Speaker:

percussion instruments, string instruments, and so on.

Speaker:

And so they'd be classified by, kind of the mechanism by which sound is produced.

Speaker:

But when we look at the competition and we

Speaker:

try to classify them, those categories don't help all that much.

Speaker:

So we've seen many guitars over the years, for example.

Speaker:

So, these are guitars that in some way

Speaker:

extend what it means to be a guitar or to play a guitar.

Speaker:

I'll give you a couple of quick examples of this category.

Speaker:

So, one of our winners from a couple of years ago was the Lego microtonal guitar.

Speaker:

So this enabled musicians to play all kinds of microtonal scales that are common

Speaker:

in different musical practices around the world.

Speaker:

By having a 3D printed fretboard where you could actually put Legos on in different

Speaker:

spots, to be able to retune the frets kind of very quickly and on the fly.

Speaker:

(Lego microtonal guitar sample plays)

Speaker:

One of our winners this past year, the Hitar, was actually just a regular guitar

Speaker:

that sent its signal through a computer running machine learning.

Speaker:

When you hit the body of the guitar and

Speaker:

make sounds with it in a particular style of playing, it's able to transform that

Speaker:

to all kinds of different sounds, so it could sound metallic, or could sound like

Speaker:

you were in the middle of a tunnel, or all kinds of different things.

Speaker:

(Hitar sample plays)

Speaker:

So both of those instruments, very

Speaker:

different approaches, but they're both extensions of a guitar in some way.

Speaker:

They're trying to take this traditional instrument and enable you to do more with it.

Speaker:

And so a lot of instruments that we see in the competition, whether they're inspired

Speaker:

by a guitar, or a piano, or a trumpet, are extensions of some traditional instrument.

Speaker:

And someone going to play them can use a lot of the technique that they've learned

Speaker:

through years and years of practice, learning the original instrument, and

Speaker:

bring that immediately into playing the new one.

Speaker:

So is there any criteria around -

Speaker:

does it have to be able to play, like, a traditional twelve note scale?

Speaker:

It can really be anything.

Speaker:

What we're really looking for from our competitors is a compelling story about

Speaker:

why they created this instrument and what they're trying to do with it, what kind of

Speaker:

music, what kind of musicianship they're trying to enable through the instrument.

Speaker:

What we see is that people are driven by

Speaker:

all kinds of different reasons to make new musical instruments.

Speaker:

Sometimes there's a natural phenomenon like the movement of water that they want

Speaker:

to explore and use that as an inspiration for their instrument.

Speaker:

One of our winners this year in the

Speaker:

competition was the Abacusynth, which was inspired by a traditional

Speaker:

abacus, you know, the mathematical calculation device.

Speaker:

And they really wanted to see how they could translate that into sound.

Speaker:

(Abacusynth sample plays)

Speaker:

So there's all kinds of, kind of

Speaker:

conceptual reasons that might motivate somebody.

Speaker:

There's also things that they might want to be able to do live in performance that

Speaker:

they can't do with a traditional instrument.

Speaker:

To be able to control pitches beyond

Speaker:

those twelve notes in the traditional, kind of, Western equal tempered scale.

Speaker:

They might want to be able to make all kinds of different sounds and timbres that

Speaker:

aren't possible with traditional instruments.

Speaker:

They might want to be able to create new kinds of collaborations where multiple

Speaker:

people can play an instrument at the same time, or they might want a

Speaker:

dancer to be able to activate a musical instrument through their movements.

Speaker:

Wow.

Speaker:

So there's all kinds of different things that drive people.

Speaker:

And when you say it's an apples to oranges comparison, it's not just about the music

Speaker:

that the instruments make or how you engage with them, but it's really about

Speaker:

the reasons that people are making these instruments in the first place.

Speaker:

So we're going to play a little game with you.

Speaker:

I want to hear - of course, I imagine everyone asks these kinds of things -

Speaker:

like, some of your Hall of Fames that you can think of off the top of your head.

Speaker:

I know you're on the spot.

Speaker:

So first I'm going to ask you, what are

Speaker:

one or two that were, like, the most technically innovative?

Speaker:

The most technically innovative?

Speaker:

I think the ROLI Seaboard.

Speaker:

(ROLI Seaboard sample plays)

Speaker:

So it looks like a piano keyboard or

Speaker:

synthesizer, but it's made with these rubber kind of keys on the top, so they

Speaker:

don't move up and down like a traditional piano keyboard does.

Speaker:

But you can push them and you can mush

Speaker:

them, almost like Silly Putty or something.

Speaker:

Not quite that much, but they're very

Speaker:

flexible as you move around and they have an incredible amount of sensing.

Speaker:

And of course, it's a digital instrument, so you can take that movement on a key

Speaker:

left and right, or the pressure you're pushing up and down or forward and

Speaker:

backward and really map that onto anything that you want in the sound to change.

Speaker:

Awesome. That's dope!

Speaker:

And what has been your favorite sounding instrument?

Speaker:

Probably the Segulharpa, which was our winner in 2021.

Speaker:

This was an incredible instrument.

Speaker:

It's probably not quite the size of me.

Speaker:

I'm a fairly small guy,

Speaker:

but it would probably go up to my chest if I were standing next to it.

Speaker:

And it was a hybrid acoustic-electronic instrument.

Speaker:

So, you play it with these sensors, maybe a little bit of a piano technique, but

Speaker:

they're much more sensitive than just traditional piano keys.

Speaker:

Then it activates electromagnets inside of

Speaker:

the instrument, that then play the strings.

Speaker:

And so, the sound that comes out of this is really, kind of, otherworldly, alien

Speaker:

kind of sounds, unlike anything I've ever heard before.

Speaker:

It's incredibly beautiful.

Speaker:

Bjork has performed with it on tour. It's been used widely by lots of different

Speaker:

artists that are really looking for a unique sound.

Speaker:

It's the sound that really sticks with me through all the years of the competition.

Speaker:

(Segulharpa sample plays)

Speaker:

Oh, that's wonderful.

Speaker:

All right, so the last one just the most fun.

Speaker:

I'm sure there's plenty of novelty and

Speaker:

whimsical instruments out there, but what have you seen as the most wacky wonky?

Speaker:

These don't have to be the winners, but yeah, we're curious.

Speaker:

Most wacky wonky instruments?

Speaker:

We had some pretty crazy ones in the early years of the competition.

Speaker:

There was something that was focused on spinning plates.

Speaker:

This must have been more than ten years ago.

Speaker:

The details are a little foggy in my mind, but I very distinctly remember that this

Speaker:

person was spinning the plates around and there were sensors embedded in this that

Speaker:

were somehow, like the speed of the plate spinning, and the height and all that kind

Speaker:

of stuff was mapping onto the music that we were hearing.

Speaker:

That's awesome. Wow.

Speaker:

Yeah, there's like some showmanship in that too!

Speaker:

And how does marrying music and technology help foster collaboration?

Speaker:

What's special about combining technology and music?

Speaker:

I think technology opens up new pathways

Speaker:

for us to collaborate with each other, ways that we can collaborate at a distance

Speaker:

with each other, and also ways that we can collaborate with the technology.

Speaker:

Right?

Speaker:

So it can enhance human to human collaboration.

Speaker:

But there's also so many possibilities in kind of the human-computer relationship.

Speaker:

So a machine musician can take on

Speaker:

intelligence, it can analyze what someone's playing.

Speaker:

It can become a really incredible partner that can spur new ideas and push you in

Speaker:

directions that you might not think of otherwise.

Speaker:

Last question for you is, what advice would you have for anyone who might be

Speaker:

interested in inventing their own musical instruments or music technology?

Speaker:

What advice would you leave people with? Just do it!

Speaker:

It's so easy right now.

Speaker:

We live in a time where you can download free and open source software to your

Speaker:

laptop and start programming code to make music.

Speaker:

You can start building your own virtual synthesizers.

Speaker:

You can get a few sensors and start putting something together.

Speaker:

There's so many tutorials and opportunities to learn online or to join

Speaker:

communities of people that are doing similar things.

Speaker:

And so I think we're at an incredible

Speaker:

point right now where anyone, whether you're an elementary school kid or you're

Speaker:

retired, or anywhere in between, has the tools and kind of the ease of use at your

Speaker:

disposal to begin experimenting and innovating.

Speaker:

And so just start and look at the examples.

Speaker:

At the Guthman Musical Instrument Competition website, we've got videos of

Speaker:

all of our past finalists and winners so you can get some inspiration from them, and

Speaker:

then just come up with an idea, and start hacking on your own.

Speaker:

Awesome. Well, thank you so much, Dr.

Speaker:

Jason Freeman. It was so wonderful to talk to you.

Speaker:

Thanks again. Thank you!

Speaker:

Awesome. Thanks.

Speaker:

I really enjoyed it!

Speaker:

All right, that was amazing.

Speaker:

Thank you so much, Dr. Freeman.

Speaker:

We're going to have to start planning our

Speaker:

inventions for next year's competition.

Speaker:

Yeah. Okay.

Speaker:

Well, then what do you think?

Speaker:

I mean, I guess let's just workshop now.

Speaker:

A bow, something like stringy?

Speaker:

I don't know?

Speaker:

But that's probably not original, is it?

Speaker:

Well, he said you can use anything.

Speaker:

So, what if - OOH.

Speaker:

Okay, hear me out.

Speaker:

We make an instrument that's made out of some sort of food, so then the big

Speaker:

showmanship at the end, like, we play something beautiful.

Speaker:

Everyone's crying, everyone's like, that

Speaker:

was the most beautiful music you've ever heard.

Speaker:

And then right at the end, when everyone's clapping, we eat it.

Speaker:

And then they're like, oh, my God.

Speaker:

Is this, like, ASMR stuff? Yeah.

Speaker:

I don't know what the music's going to be.

Speaker:

I haven't figured out the instrument part, but the theatricality of, like, if we make

Speaker:

something out of food and then we eat it, like, how dope would that be?

Speaker:

I'm right there with you, we're going to win this.

Speaker:

Yeah.

Speaker:

You're telling me we're not going to win if we eat the instrument?

Speaker:

Come on. It's going to be cheesesteak based...

Speaker:

Yes, Go Birds! Yes! All right. Well, okay.

Speaker:

Let's stop talking about hypothetical inventions, no matter how good they would

Speaker:

be, because next we're going to be learning about some current musical

Speaker:

innovations that are taking place right here in Philly.

Speaker:

Next time you hear our voices, we'll be

Speaker:

reporting on the ground, outside of the studio.

Speaker:

This isn't awkward at all. So weird.

Speaker:

We don't have headphones on.

Speaker:

We don't have anything.

Speaker:

And today, we are joined by Dr.

Speaker:

Youngmoo Kim.

Speaker:

Thank you so much for joining us.

Speaker:

Hey, it's my pleasure. Thanks for visiting!

Speaker:

Absolutely. We are excited to have this conversation.

Speaker:

First, introduce yourself and tell us everything that you do.

Speaker:

Everything? Oh man, how much time do you have?

Speaker:

No, I'm Youngmoo Kim, I am the founding

Speaker:

director of where we're standing, what's called the ExCITE Center at Drexel

Speaker:

University - Expressive and Creative Interaction Technologies.

Speaker:

This is a research institute about the

Speaker:

intersections of technology and creative expression.

Speaker:

So technology and the arts, but so much more.

Speaker:

I am also Vice Provost of University and

Speaker:

Community Partnerships at Drexel University, and a faculty member of the

Speaker:

Electrical and Computer Engineering department here at Drexel.

Speaker:

Could you explain the space that we're standing in?

Speaker:

It looks like a play pen.

Speaker:

Yeah, it is kind of a play pen!

Speaker:

So this is our collaboration space.

Speaker:

We have meetings here, we do work here, we

Speaker:

have workshops and events - sometimes musical performances as well!

Speaker:

And a lot of K-12 outreach activities as well.

Speaker:

We host a lot of camps and after school programs here.

Speaker:

But why don't we head over to the piano

Speaker:

since that is sort of the centerpiece of this space.

Speaker:

Yeah yeah, let's do it. Hell yeah.

Speaker:

I guess, as we go over there, can you

Speaker:

explain the mission behind the Music and Entertainment Technology lab?

Speaker:

Yeah, so my lab is one of three labs that are based here at the ExCITE Center.

Speaker:

Our mission is really to explore that

Speaker:

future of music and media through technologies.

Speaker:

So this is one of our examples.

Speaker:

This is what we call the Magnetic Resonator Piano.

Speaker:

It's a standard grand piano - right? (Plays piano)

Speaker:

But it's also augmented with electromagnets.

Speaker:

Those electromagnets don't touch the

Speaker:

string, they're about a quarter inch away from the string.

Speaker:

But by varying that magnetic field using electricity and electronics, we can

Speaker:

vibrate these strings in very unusual ways.

Speaker:

So let me fire it up...

Speaker:

And, Kirsten, can you explain to the listeners what this looks like right now?

Speaker:

Yeah, it looks like a grand piano that has gone through like a science fair.

Speaker:

You know, in movies when somebody has

Speaker:

to, like, do the wire thing so that something doesn't explode?

Speaker:

Oh, yeah. Kind of looks like that as well.

Speaker:

Yeah.

Speaker:

First, I will say that nothing here is dangerous!

Speaker:

Second, nothing is destructive to the piano.

Speaker:

We can actually lift all of this off and

Speaker:

pack it up in cases and install it on other pianos, which we have done.

Speaker:

This system has been used in concerts and recitals on multiple continents.

Speaker:

It is on the soundtrack to a Disney movie, Christopher Robin.

Speaker:

Yeah, the composer for that soundtrack, Jon

Speaker:

Brion, loved the sound of this and did a recording with my former student,

Speaker:

Andrew McPherson, who did a lot of the development work around this.

Speaker:

So I'm going to invite my friend and

Speaker:

colleague, Daniel Belquer, our artist in residence here.

Speaker:

Daniel is an amazing musician and composer

Speaker:

and technologist, to demonstrate a few things on the magnetic resonator piano.

Speaker:

(Daniel plays Magnetic Resonator Piano)

Speaker:

So, it's a piano that can do vibrato.

Speaker:

(Daniel plays Magnetic Resonator Piano)

Speaker:

Oh, my God.

Speaker:

See, hearing that come out of, like, a

Speaker:

traditionally acoustic instrument is so insane.

Speaker:

It was amazing!

Speaker:

It had the - I mean, I noticed a little bit of what you were doing, but it

Speaker:

had the vibrato on the keys and you were just moving side to side.

Speaker:

Yeah.

Speaker:

Are you able to talk about that a little bit?

Speaker:

Because I know, like, normally when you strike a piano key, it's just - ding!

Speaker:

Absolutely, it's game over,

Speaker:

right? With the piano, it's just about how hard you press the key.

Speaker:

But with this, because we are using the

Speaker:

electromagnet to continuously vibrate the strings, right.

Speaker:

And all of this is acoustic, by the way.

Speaker:

People always ask, is there a speaker in there?

Speaker:

No, this is purely acoustic.

Speaker:

This is string vibration.

Speaker:

But because we can control that

Speaker:

continuously, there's a special sensor over the keyboard that doesn't get in the

Speaker:

way, but it's controlled right from the keyboard.

Speaker:

It sends information to a computer, which

Speaker:

then generates the electromagnetic signals, which go through an amplifier here.

Speaker:

So we can control that sound very, very precisely.
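(For the technically curious: a rough sketch of how that control chain could look in software. This is an illustration only, not the MRP's actual code; the function name drive_waveform and the simple linear amplitude mapping are assumptions.)

import numpy as np

SAMPLE_RATE = 44_100  # audio samples per second

def drive_waveform(note_freq_hz, key_depth, duration_s=0.5):
    """Turn a continuous key-sensor reading (0.0 to 1.0) into a sine drive
    signal at the string's frequency; deeper presses drive the string harder."""
    t = np.arange(int(SAMPLE_RATE * duration_s)) / SAMPLE_RATE
    amplitude = np.clip(key_depth, 0.0, 1.0)
    return amplitude * np.sin(2 * np.pi * note_freq_hz * t)

# Example: middle C (about 261.6 Hz) held halfway down.
signal = drive_waveform(261.6, key_depth=0.5)
print(signal.shape)  # (22050,)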

Speaker:

But maybe I'll ask Daniel to comment since

Speaker:

he has played this instrument multiple times and what that feels like.

Speaker:

Please. No, it's amazing because it completely

Speaker:

opens up other dimensions of the instrument, right?

Speaker:

So, you have to play using the regular piano technique, but also other ideas.

Speaker:

So, as you were saying, like how to

Speaker:

sustain a note after it has been hit, which is not feasible on a regular piano.

Speaker:

Yeah, I've taken the basics of piano lessons, but nowhere in the basics did

Speaker:

they say, "hey, wiggle your finger at the end of it.

Speaker:

and you'll get more sound." It's always just, strike it, that's it.

Speaker:

Well and yeah, if you go to a classical

Speaker:

concert, there are these virtuoso pianists who do that, right?

Speaker:

Who want that ability to kind of continuously affect the sound, like you

Speaker:

would on a violin or with a voice or with a wind instrument.

Speaker:

You can't do that on a traditional piano,

Speaker:

but on ours you can! That gesture becomes meaningful.

Speaker:

And how much control would you say you have over that wiggling?

Speaker:

The software is completely configurable,

Speaker:

so it can actually change the waveforms that drive the piano.

Speaker:

So you can have things that go really fast

Speaker:

and stop, or things that drive the note really slowly.

Speaker:

So if I just - (Daniel holds note on Magnetic Resonator piano)

Speaker:

- hold the note slowly, this sound will stay here.

Speaker:

So as you press the note, you start hearing -

Speaker:

(harmonics excited)

Speaker:

- you see, "bo be be bep"

Speaker:

It's just a single note with the harmonics being excited.

Speaker:

After all, we're at the ExCITE Center.

Speaker:

Haha, absolutely!

Speaker:

So harmonics are a different mode of vibration that you, traditionally, you can

Speaker:

get on stringed instruments like guitars and violins.

Speaker:

You can have the open note, or if you lightly press your finger on a string

Speaker:

without actually pressing it all the way down, you get double the frequency and you

Speaker:

can actually get triple, or quadruple, as you go up higher.

Speaker:

That's sort of the basis of vibration and of music.

Speaker:

But normally you can't do that on a piano.

Speaker:

(note played)

Speaker:

That's the harmonic series for this note.

Speaker:

Normally, all those notes, all those pitches happen when you press one note.

Speaker:

Right.

Speaker:

But here we can separate them out and you can do it on the whole chord.
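(To put numbers on that: each harmonic is just an integer multiple of the string's fundamental frequency. A tiny Python illustration; the 110 Hz fundamental is only an example value.)

fundamental_hz = 110.0  # example: the A two octaves below A440

for n in range(1, 7):
    print(f"harmonic {n}: {n * fundamental_hz:.1f} Hz")

# Prints 110.0, 220.0 (an octave up), 330.0, 440.0, 550.0 and 660.0 Hz --
# the pitches you hear as the individual harmonics of one string are excited.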

Speaker:

(Daniel plays Pachelbel's Canon using harmonics on the Magnetic Resonator Piano)

Speaker:

This is so cool.

Speaker:

And Youngmoo, is that specific equipment

Speaker:

that's on that piano, is that the only one or are there duplicates of it now?

Speaker:

There are a couple now, it's under five. Oh, okay.

Speaker:

Wow. So, yeah, very few.

Speaker:

Yes.

Speaker:

And we do get requests from musicians and composers who want to use the system.

Speaker:

Okay. What room are we stepping into now?

Speaker:

This is the workshop, and this is where the magic really happens.

Speaker:

There's a ton of different projects here.

Speaker:

This is where a lot of our work and our research happens.

Speaker:

I'm going to hand it over to Daniel.

Speaker:

Do you want to tell them?

Speaker:

So the name of this project is Music: Not Impossible.

Speaker:

It started off in 2014 as a way for the

Speaker:

deaf to experience music through vibrations on the skin.

Speaker:

And this project has been receiving many awards and celebrity endorsements.

Speaker:

Lady Gaga's Dive Bar Tour launch used this technology in Nashville.

Speaker:

Oh, can I put this on? Oh, my gosh.

Speaker:

Let's get you all suited up.

Speaker:

Thank you, I would love one.

Speaker:

I totally wasn't expecting this.

Speaker:

All right, so, yeah, for again, the listeners, this is a vest.

Speaker:

It kind of takes on a laser tag-like structure, and it vibrates.

Speaker:

Okay, so it seems like we are fully suited.

Speaker:

So Bey and I are here in the hottest of 2023's newest fashion.

Speaker:

We are wearing vests that look, like I said, like laser tag, or like VR maybe,

Speaker:

but we are about to hear music with them on.

Speaker:

Well, the vests each have a bunch of vibration motors.

Speaker:

So the goal of these is for someone who

Speaker:

doesn't have normal hearing, or is deaf, can still experience music not through

Speaker:

actual sound, but through vibration through the skin.

Speaker:

So the system is designed to kind of

Speaker:

transmit that feeling of music just through the vibration.

Speaker:

And folks who are differently abled and

Speaker:

may not have full hearing, have you had any personal accounts from them?

Speaker:

I can only imagine the experience...

Speaker:

Well, Daniel will tell you.

Speaker:

Well, one, Music: Not Impossible has a

Speaker:

great website, and there are some wonderful videos and testimonials there.

Speaker:

But two, literally bringing people to tears.

Speaker:

For someone who hasn't been hearing,

Speaker:

experiencing some connection to music is a very moving experience.

Speaker:

Also, even for people who aren't deaf, but being in an environment where you have

Speaker:

hearing audiences and non-hearing audiences, dancing together.

Speaker:

Mmmmm...

Speaker:

As we expanded, we saw it was not just for the deaf.

Speaker:

It was appealing for everyone as an augmented experience.

Speaker:

So now you're in a venue, you can't tell who is deaf and who is not.

Speaker:

And this is super special.

Speaker:

This is really cool, too, because this

Speaker:

kind of, like, calls back to the So Curious!

Speaker:

Season one, where we were talking about human enhancement and wearable technology.

Speaker:

You said about crying, I'm thinking about - I've seen those videos of people

Speaker:

who are colorblind putting on the glasses and seeing color for the first time.

Speaker:

And those always make me cry.

Speaker:

I can't even imagine what this is like.

Speaker:

Okay, we're going to start with a piece created by my friends from LA.

Speaker:

It's called Beacon Street Studios.

Speaker:

So let's get started.

Speaker:

Let's do it. (Music starts playing)

Speaker:

Oh, yeah.

Speaker:

Oh, man.

Speaker:

So again, we have this vest.

Speaker:

It's wrapped around our torso.

Speaker:

I'm trying to make notes of what

Speaker:

instruments are playing, but as each instrument plays, it vibrates at

Speaker:

different parts of the vest and the torso and the ankles and wrists.

Speaker:

And it's like bringing dynamics into the area.

Speaker:

Every little side piece is like, being

Speaker:

activated right when it should - like, they weren't all going at once.

Speaker:

And it feels like how a piano sometimes

Speaker:

you're only using, like, two keys and sometimes you're using, like, six of them.

Speaker:

And I'm feeling it in my back now. Yeah.

Speaker:

It's really difficult to describe, but the best I can say is, like, the instruments

Speaker:

are individualized in different areas of the vest.

Speaker:

Oh, that was crazy! Yeah.

Speaker:

It can move in waves... Yeah.

Speaker:

And it travels from one part to the other.

Speaker:

Definitely the neck and shoulder.

Speaker:

I know we're supposed to be describing,

Speaker:

because we're professional podcast hosts, but this is insane.

Speaker:

I am doing my best to not dance. I know.

Speaker:

Which is really awesome because I'm like,

Speaker:

the vibrations, really kind of like, get you going.

Speaker:

And we can make them sound too, when we want, because part of the experience for

Speaker:

the hearing is actually to produce the sounds on the devices.

Speaker:

So depending on the frequency, we can make them sound.

Speaker:

So as you see, now...

Speaker:

There's intensity, now there's intensity.

Speaker:

So there's light vibrations, and then there's like, wow, that was a strong one!

Speaker:

Oh, my God.

Speaker:

So there's like a strum of a guitar that I feel in my arms.

Speaker:

Yeah, there's no sound now it's just you guys.

Speaker:

Yeah. That's just what's on your motors.

Speaker:

So now imagine 100 people or something.

Speaker:

It's just that strong. Yeah.

Speaker:

Wow. Yeah.

Speaker:

It feels like an entire band.

Speaker:

How did you program the vibrations to go

Speaker:

along with the specific music that you choose?

Speaker:

So we have a few ways of doing that.

Speaker:

The initial goal of the project was to

Speaker:

broadcast live music to the audience at a concert.

Speaker:

Right.

Speaker:

So we would get several channels of instruments from the mixer on the stage,

Speaker:

like the drums, the guitar, the vocals, the bass.

Speaker:

And then we would just transmit it wirelessly to the audience.

Speaker:

A few years back, as we were working with

Speaker:

Mandy Harvey and I don't know if you're aware of her, she was one of the finalists

Speaker:

for The Voice, and she's deaf since she was 18.

Speaker:

She's a friend of ours and a very close collaborator.

Speaker:

So she said, "Daniel, I really would like to feel my recorded songs, because I had

Speaker:

to produce it and record in studio, but I cannot sing to a recorded track.

Speaker:

I have to have the band, because I need to

Speaker:

touch the piano, I need to feel the vibrations through my bare feet."

Speaker:

So I started working on a system for us to play the vibrations in sync with the song,

Speaker:

and now it evolved into a whole full composition platform.

Speaker:

So now we can design and we can also combine both.

Speaker:

We can have the live music with prerecorded stuff as well.

Speaker:

So creating all kinds of crazy effects, you can control -

Speaker:

Wow...

Speaker:

You have 24 points on your body, and they are individually addressable.

Speaker:

So think about LED lights, right?

Speaker:

If you want to control each LED, and to do a different effect, you can do it.

Speaker:

It's the same thing with vibrations.

Speaker:

We can make movements.

Speaker:

We can create all kinds of textures

Speaker:

individually controllable in a composition.
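(As a rough illustration of that individually addressable idea: each frame of a composition can assign an intensity to each of the 24 motors, much like setting individual LEDs. The motor layout and instrument grouping below are invented for this sketch, not Music: Not Impossible's actual mapping.)

NUM_MOTORS = 24

# Hypothetical grouping of motors by instrument stem.
MOTOR_GROUPS = {
    "drums": range(0, 8),     # e.g. wrists and ankles
    "bass": range(8, 16),     # e.g. lower back
    "vocals": range(16, 24),  # e.g. shoulders and chest
}

def frame(levels):
    """Turn per-instrument levels (0.0 to 1.0) into one intensity per motor."""
    out = [0.0] * NUM_MOTORS
    for instrument, level in levels.items():
        for motor in MOTOR_GROUPS.get(instrument, []):
            out[motor] = max(0.0, min(1.0, level))
    return out

print(frame({"drums": 0.9, "vocals": 0.3}))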

Speaker:

One might say, like, having these buzzing pads on us is a little, like, disruptive.

Speaker:

But for me, the experience is it's instigating movement.

Speaker:

It's like telling us where to dance, how to dance.

Speaker:

It feels like the Dance Dance Revolution game.

Speaker:

It feels like it's showing you how to dance.

Speaker:

100%, it's physical sensation.

Speaker:

I mean, the moment that track hit, I started jumping.

Speaker:

It was like we couldn't not! Yeah yeah, no, it was really cool.

Speaker:

That's amazing. Wow.

Speaker:

We do have one more thing to show you! Yeah, let's do it!

Speaker:

Okay, so we have one more thing. So we have to lose the vests, all right.

Speaker:

So I guess the theme was about the future of music, right? So we showed you kind of

Speaker:

the future of music being new instruments, accessibility, but it's obviously also

Speaker:

going to be around computing and AI technology as well.

Speaker:

So, this is Charis, Charis Cochran.

Speaker:

She's a PhD student in electrical engineering here at Drexel.

Speaker:

She is using AI to automatically learn the sounds of musical instruments, alright.

Speaker:

So, how do you do that? Yeah, yeah!

Speaker:

That was my first question!

Speaker:

First of all, how?

Speaker:

How do you train an AI to do anything? How do you do that?

Speaker:

Yeah, so basically, we've gotten together a bunch of data.

Speaker:

So it's about 5 hours of music where we

Speaker:

have labeled the predominant instrument in each of these examples.

Speaker:

So there could be other instruments in the background, but say, for one example, you

Speaker:

have flute and then, like, cello and things going along with it.

Speaker:

With that, what I've taken and done is trained a diffusion model, which is

Speaker:

similar to a lot of the architectures you see in the text to image space.

Speaker:

But now we're taking these musical

Speaker:

samples, adding a lot of noise until you can't recognize it anymore, and then

Speaker:

training this model that will take all of the noise out in steps and produce music.

Speaker:

And then we condition it on these instrument labels so that we can control

Speaker:

which instrument we're trying to produce here.
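(In pseudocode terms, the generation loop being described might look roughly like this. denoise_step stands in for the trained neural network, and the step count and clip length are made-up numbers, not details of the actual Drexel model.)

import numpy as np

NUM_STEPS = 50                # number of denoising steps (illustrative)
CLIP_SAMPLES = 3 * 16_000     # roughly a three-second clip at 16 kHz

def denoise_step(audio, step, instrument):
    """Stand-in for the trained network's 'slightly cleaner audio' prediction."""
    return audio * 0.95  # a real model would estimate and remove noise here

def generate(instrument):
    audio = np.random.randn(CLIP_SAMPLES)    # start from pure noise
    for step in reversed(range(NUM_STEPS)):  # peel the noise away step by step
        audio = denoise_step(audio, step, instrument)
    return audio

clip = generate("flute")  # condition on the instrument label
print(clip.shape)         # (48000,)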

Speaker:

There you go. And that's how you train it.

Speaker:

Oh, my God, it's that easy, Bey. Why didn't we think of that?

Speaker:

So this is using deep neural networks, which is the technology behind all of AI.

Speaker:

It's just that it's being applied specifically to music data.

Speaker:

It's not just, hey, here's my entire music library.

Speaker:

It needs to know, okay, this recording is primarily cello.

Speaker:

This recording is primarily piano, right?

Speaker:

So there is a data set that Charis has

Speaker:

been using that is labeled that way to train this.

Speaker:

And what we're going to demonstrate is a

Speaker:

couple of things that it's learned about what a particular instrument sounds like.

Speaker:

And so it would eventually be able to be

Speaker:

like, let's say, I'm a pianist and I am listening to a song that I love and it's a

Speaker:

full band, and I want to just isolate the piano part to hear it.

Speaker:

Like, it would be able to isolate just certain parts.

Speaker:

Yeah, that definitely is one application of AI.

Speaker:

So these models can be used for source separation.

Speaker:

Right now, a lot of them are being trained just for generating new pieces.

Speaker:

So you give it noise, and you don't tell it anything about the context of the

Speaker:

music, it'll make up whatever kind of piano it wants to.

Speaker:

Wow. So we have a couple of examples here.

Speaker:

And this is still early days with training

Speaker:

this model, but this is noise turned into flute.

Speaker:

(AI-created flute sample plays)

Speaker:

And just when she says noise, what it started with was, "PSHHHHHHH".

Speaker:

Oh, my God.

Speaker:

We say, hey, give us a flute, it turns it into a flute.

Speaker:

Nothing musical was given to it, like no notes or anything, it's just that.

Speaker:

Yep. It's just that that is quite amazing!

Speaker:

Damn! And then we have saxophone.

Speaker:

(AI-created saxophone sample plays)

Speaker:

Yeah. And they're short right now because it

Speaker:

takes a lot of processing power to generate these right now.

Speaker:

So we just create, are they three second, five second clips?

Speaker:

Yeah, three second clips here. Okay.

Speaker:

So here's trumpet.

Speaker:

(AI-created trumpet sample plays)

Speaker:

Wow. So there's a little bit of variation, but

Speaker:

yeah, no musical information was given to it.

Speaker:

And there's a lot of like, the conversation around AI is very, very fresh

Speaker:

and new and it's being thrown around a lot.

Speaker:

Hot topic. Yeah.

Speaker:

What's a misconception that people aren't getting or are just completely missing?

Speaker:

I mean, the reason things like ChatGPT are so good, or scarily good, is because

Speaker:

they've been trained on all the text on the internet, right.

Speaker:

And there's a lot. Which is enormous.

Speaker:

And there's a lot and that's all written by humans, so it sounds very human.

Speaker:

Like doing it with music data, even though

Speaker:

there's tons of music out there, it is very hard to get that kind of specificity.

Speaker:

Right, you can go listen to whatever you want, but if you try to say, I only want,

Speaker:

like, saxophone solos, that's actually still a hard thing to find.

Speaker:

So it's about labeled data.

Speaker:

It's also, sadly, a bit about copyright.

Speaker:

In the image space, in the text space,

Speaker:

it's just much easier to get lots and lots of data.

Speaker:

On the music side, it's been harder because most of that stuff is copyrighted.

Speaker:

And you can try to train models using public data, but then you're not using

Speaker:

necessarily the latest or the most popular types of music.

Speaker:

So that's a bit of a challenge.

Speaker:

So more and more places are trying to create agreements with companies or record

Speaker:

labels to get more and more of that data for training.

Speaker:

But that's the frontier right now. Wow.

Speaker:

So if I'm understanding what you're

Speaker:

saying, it's basically that AI can listen to a saxophone all day, but until someone

Speaker:

says that's a saxophone, it doesn't know what it is.

Speaker:

Is that accurate? Yeah, exactly.

Speaker:

Wow. So this is for both of you then.

Speaker:

Why do you think it's important to push for innovation within the world of

Speaker:

entertainment and entertainment technology?

Speaker:

Because there is a lot of pushback.

Speaker:

The entire history of music and entertainment has been a symbiotic

Speaker:

relationship between creativity and technology.

Speaker:

You don't get one without the other.

Speaker:

We wouldn't have had advances in

Speaker:

filmmaking, in stagecraft, in operatic performance, or certainly in electronic

Speaker:

media as well, without technology infusing some very, very creative people

Speaker:

with new ideas about how to express themselves.

Speaker:

So it's a symbiotic relationship.

Speaker:

If you take one half of that away, you're going to lose all that.

Speaker:

That technological advancement is

Speaker:

absolutely necessary for the creativity of the future.

Speaker:

And what future development, even if it's just a concept, we saw a lot of technology

Speaker:

here, but what future development in entertainment tech are you excited about?

Speaker:

Yeah. What do we have to look forward to?

Speaker:

So I can say a little bit more about the types of models that I'm working on.

Speaker:

I'm looking at models that you could say, "okay, generate a full song" or "generate

Speaker:

a full album", and then you could then have tracks for each of these instruments.

Speaker:

And instead of a blank slate to work from, now you've got a little bit of creative

Speaker:

freedom and, like, a starting point to go off of.

Speaker:

Wow...

Speaker:

Some people have compared this to, like, discovering fire.

Speaker:

Is that like, how do you feel about that?

Speaker:

We won't know for another couple of

Speaker:

decades, or maybe centuries, particularly with AI.

Speaker:

A lot of us do think this is an inflection point, right.

Speaker:

AI is capable of doing things that we thought were impossible when I was in

Speaker:

graduate school, 25 years ago, it's advanced that rapidly.

Speaker:

It's doing things that are, in some ways, very scary, and I acknowledge that.

Speaker:

And there are plenty of ways that AI could be used for evil, for ill.

Speaker:

I would say that there are also plenty of ways AI can be used for good.

Speaker:

It's the sounds that we haven't even conceived of, right?

Speaker:

The instruments, the sounds, the

Speaker:

combinations that you just cannot practically experiment with in real life.

Speaker:

Opening that up digitally will only enable

Speaker:

more creative and artistic possibilities in the future, I think.

Speaker:

Hell, yeah. What a great place to close.

Speaker:

Awesome. Thank you so much for having us.

Speaker:

Thank you for visiting!

Speaker:

We're used to having people in the studio, but we're here in your actual space.

Speaker:

This is amazing, we should do this more often!

Speaker:

Come back again! We'll have more stuff! Oh, yeah, we will!

Speaker:

Yeah, we'll come back in a couple of years, and I'm sure it'll be like

Speaker:

something crazy new that Charis came up with!

Speaker:

Thanks again, Dr.

Speaker:

Kim, for having us.

Speaker:

That was honestly amazing, I'm mind blown!

Speaker:

There are just so many amazing innovations in this world of, like, music tech.

Speaker:

It's crazy that we're going to get to see

Speaker:

how they're all going to be integrated into the music world in the coming years,

Speaker:

but I cannot even figure out how they're going to do it.

Speaker:

Yeah, and listeners better be sure to tune

Speaker:

into next week's episode because we're going to look at even more innovations,

Speaker:

this time, how music can affect our physical performance and health.

Speaker:

And so there are really interesting things where doctors are even considering

Speaker:

prescribing arts experiences for general health and wellness.

Speaker:

So listeners, please, please subscribe.

Speaker:

Why have you not already subscribed?

Speaker:

Bey, they haven't subscribed yet?

Speaker:

Yeah, and they have to give us five star reviews, for real.

Speaker:

You have to give us a five star review.

Speaker:

If you haven't subscribed yet, I mean, I appreciate that every time you want to

Speaker:

listen, you just have to type us in and search us.

Speaker:

But man, if you subscribe, you're going to

Speaker:

get a notification every Tuesday this summer when we release a new episode.

Speaker:

So, go ahead and do it wherever you listen, and we will see you next week!

Speaker:

This podcast is made in partnership with

Speaker:

RADIOKISMET, Philadelphia's premier podcast production studio.

Speaker:

This podcast is produced by Amy Carson.

Speaker:

The Franklin Institute's director of digital editorial is Joy Montefusco.

Speaker:

Dr. Jayatri Das is the Franklin Institute's Chief Bioscientist.

Speaker:

And Erin Armstrong runs marketing, communications and digital media.

Speaker:

Head of operations is Christopher Plant.

Speaker:

Our mixing engineer is Justin Berger.

Speaker:

And our audio editor is Lauren DeLuca.

Speaker:

Our graphic designer is Emma Seager.

Speaker:

And I'm the Bul Bey.

Speaker:

And I'm Kirsten Michelle Cills. Thanks!

Speaker:

Thank you! See ya.
