Deepfake
Episode 48 • 6th September 2022 • Tech Talk with Amit & Rinat • Amit Sarkar & Rinat Malik
00:00:00 00:46:05


Shownotes

How do you know that what you watch on the Internet is real? How do you know that what you hear over a phone call is real? Technology has come so far that we can now create realistic fake video and audio and fool people into believing whatever we want. This can have serious consequences, sometimes even pushing a country into violence.

In this week's talk, Amit and Rinat talk about deepfakes: what they are, why they are getting so much attention, how they are used, and a lot more!

Transcripts

Rinat Malik:

Hello, everyone. Some of you have been listening to us for well over a year now, but can you tell with absolute certainty that we are real people? What if we told you that we're not? And there is almost no way of telling, because of a recent technology called the deepfake. It's a very clever and interesting technology where you mimic or create content, like video, audio or images, of real or imaginary people that looks or sounds exactly like another person. So, interestingly, we could be imaginary people and you probably wouldn't have known. To reassure you, we are real people, although I don't know if you would believe us now. But that's the interesting side of deepfakes, and that's what we're going to talk about today. So yeah, welcome, everyone, to this episode of our podcast Tech Talk. This week we're going to talk about deepfakes: a technology based on artificial intelligence, which is being talked about a lot recently due to its security concerns and its potential good, bad and controversial uses. So, yeah, deepfakes. Amit, what is a deepfake, and what are your thoughts on it?

Amit Sarkar:

Thanks again for the good introduction, Rinat. I think you sent the audience on a roller coaster for a second, making them question whether what they're listening to and watching is actually real. And the funny thing is that it's actually difficult to tell, because video is 2D and audio you can easily fake. There are now technologies where you can take a pre-recorded voice, run it through a model, and make it say new words, faking someone's voice. And similarly, faking a 2D image is much easier than faking a 3D likeness of a person. So deepfakes are very, very interesting. The reason I wanted to talk about them today is the way artificial intelligence, graphics cards and computing power have improved in the last decade or so. That has enabled this technology to be almost everywhere around us, and we sometimes don't even realise it, because we follow a lot of people on social media and we don't know what's real and what's fake. That's the job for, say, Facebook or Twitter or Instagram to do, but sometimes they can't do that job properly either. And that's why we as human beings have to be alert: what is real, what is fake, and can we actually verify something we are reading or watching through multiple sources? Because it's very easy to read or find something online and quickly share it with people, and that's how misinformation spreads very rapidly.

Rinat Malik:

Yeah, absolutely. Nowadays, in the digital age, sharing is so quick and easy that it's actually a breeding ground for misinformation, and it's difficult to always keep track of wrong information. Also, because there is so much information out there, it's usually much easier to create or spread misinformation than to refute it and identify which is which. As a result, we see a lot more misinformation in the digital space than before, because back then, publishing a book or something was a lot more difficult. If someone wanted to spread misinformation, they had to print it, publish it, and then distribute the paper copies, which was a lot harder. Nowadays it's much easier to spread anything, and much more difficult to prove something is wrong. For example, the Flat Earthers can create content that spreads like wildfire, but people who want to refute each of their arguments with an actual logical explanation have to spend a lot more time creating that content and putting it out on the internet. That's why deepfakes could very easily be used negatively, going back to what you were saying about influencers and celebrities.

Rinat Malik:

We do take their words with a lot more trust. I don't know if that's the right thing to do or not, but we do, and that matters if the content is faked. You might be thinking: the usual celebrities, the Kardashian family, I don't really follow or believe them. But there are celebrities that you do trust. If you're not a fan of that arena of influence, you might be a fan of Stephen Hawking or Neil deGrasse Tyson or Bill Gates or Hank Green, or the Green brothers, etc. So yeah, you definitely have some kind of celebrity or influencer that you trust. And if that kind of influencer were to say something, even if it were not easily believable, you would still give it a second thought. That's how deepfakes can make the spread of misinformation even worse, because it's now so much easier to fake the identities of these people in actual audio and visual content. So yeah, that's quite dangerous. But it must also have some positive uses, I'm sure. What are some of the uses of deepfakes, Amit?

Amit Sarkar:

So before we delve into the positive bits, I think we need to clarify what 'deepfake' actually stands for, so people understand. If something is not real, it's fake; it's just the opposite of real. And 'deep' comes from deep learning. In one of our previous episodes we talked about artificial intelligence, and deep learning is one way for computers to learn about their environment and then make predictions or do certain tasks based on what they have learned. So deep learning is used to fake images or videos. And how do you do it? One way is to put, say, the face of Donald Trump over Vladimir Putin, two powerful figures who are poles apart, with very extreme views. Suppose you put the face of Donald Trump on Vladimir Putin, or vice versa. What can happen? You can create a frenzy among their followers. And in order to achieve this, you first need to analyse a lot of photos and videos of Donald Trump and a lot of photos and videos of Vladimir Putin.
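For the curious, here is a minimal sketch of the classic face-swap architecture behind many deepfake tools: one shared encoder learns facial structure from photos of both people, one decoder per person learns to reconstruct that specific face, and the swap is done by encoding person A and decoding with person B's decoder. This is an illustrative PyTorch sketch with assumed shapes and hyperparameters, not any specific tool's implementation.

```python
# Illustrative sketch of the shared-encoder / per-person-decoder idea
# behind classic face-swap deepfakes. All shapes and training settings
# here are illustrative assumptions (64x64 RGB face crops).
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )
    def forward(self, z):
        return self.net(z)

encoder = Encoder()                          # shared: learns faces in general
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per person
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder_a.parameters())
    + list(decoder_b.parameters()), lr=1e-4)
loss_fn = nn.L1Loss()

# Stand-in batches; real training needs thousands of aligned face crops.
faces_a = torch.rand(8, 3, 64, 64)
faces_b = torch.rand(8, 3, 64, 64)

for step in range(100):  # toy loop; real training runs far longer
    opt.zero_grad()
    # Each decoder learns to reconstruct only its own person's face.
    loss = (loss_fn(decoder_a(encoder(faces_a)), faces_a)
            + loss_fn(decoder_b(encoder(faces_b)), faces_b))
    loss.backward()
    opt.step()

# The swap: encode person A, decode with person B's decoder, giving
# person B's face with person A's pose and expression.
with torch.no_grad():
    swapped = decoder_b(encoder(faces_a))
```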

Rinat Malik:

Which are easily available online.

Amit Sarkar:

Which are available on YouTube or Vimeo and many of the news channels. Once you do all that, and for that you need a powerful computer, you can't do it on a simple laptop, you do a lot of analysis and a lot of learning, and you try to see what the common features are. You try to compress everything, and then you try to superimpose one face onto the other. That's where the magic happens. So you might have seen a deepfake video without even realising it. What we'll do is share one in this podcast's description, and you can then compare what the real video is and what a deepfake video is, so that gives you an idea. So that's a deepfake video, and it requires a lot of computing. Now imagine I want to start a war. I live in the UK, and I want to start a war with, say, Germany. Just saying, we are not going back to the World War days, but suppose I want to start a war with Germany. In order to do that, I need to show some satellite images of the German or English army moving their soldiers or their aircraft or ships closer to each other's borders. Using satellite imagery, say Google Earth or whatever, you can easily see this, and each country has its own satellite imaging technology. Now, what if you create a fake image

Amit Sarkar:

and you show that UK soldiers are right outside the border of Germany, or inside Germany, and you start circulating it on social media? Imagine how much panic that simple image would create, because it's fake but looks very real. It's very difficult to tell whether something is actually happening or not, so then you have to go through different media sources. That's what you said: it's more difficult to prove something is right than to prove that something is wrong. And that's where the power of the deepfake comes in. Because it's so difficult to prove wrong, it gets passed on easily, and now you have created a war-like situation on Facebook when nothing has actually happened. No army has moved; everyone is at peace. But on Facebook, you have now shared an image that creates a frenzy. And now it's Facebook's job to identify such images, again using artificial intelligence, to see what's real and what's not, and then based on that remove those images and classify them as misinformation or abuse or something else. So that's one negative aspect of deepfakes. When we talk about positive aspects, which you asked about and I'll answer now: suppose a person dies. Say I die 20 years down the line, and you need my voice to power a video. Say I'm a very famous actor, or a very famous podcaster, and you need my voice on a podcast, but I'm dead. So you create a script, and the script can be voiced using my voice. That's voice cloning. From the many audio files I've recorded and uploaded to YouTube, Spotify and various other podcast platforms, you create a voice spectrum, and using that voice spectrum you generate my voice. This has actually happened in some films: a dead actor has been brought back to life. It may not be called a deepfake, but it is almost the same technology. How do you bring an actor back to a film even when they are not alive? You put their face on a body double, use their voice from the previous films, and create this whole illusion that the person is still alive.
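The first step of the voice cloning Amit describes, turning recorded speech into a spectral representation that a model can learn from, might look like this minimal sketch. It assumes the librosa library and a placeholder file speech.wav; real voice-cloning systems build far richer speaker embeddings on top of features like these.

```python
# Minimal sketch: extract a mel spectrogram from a recording, the kind
# of "voice spectrum" feature that voice-cloning models learn from.
# Assumes librosa is installed and a local file "speech.wav" exists.
import librosa
import numpy as np

audio, sr = librosa.load("speech.wav", sr=22050)   # waveform + sample rate
mel = librosa.feature.melspectrogram(y=audio, sr=sr,
                                     n_fft=1024, hop_length=256, n_mels=80)
log_mel = librosa.power_to_db(mel, ref=np.max)     # log scale, as models expect

print(log_mel.shape)  # (80 mel bands, number of time frames)
```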

Rinat Malik:

The first thing that pops into my head is that there were a few unfinished songs by Michael Jackson, and I wonder if it would be possible to recreate or complete those using this technology. But then again, as long as you declare what you're doing with the technology, I suppose that's usually a good thing. But if you don't declare it,

Amit Sarkar:

So if you declare it, then it isn't fake anymore, because you have declared it. But if you don't declare it and it's passed off as real, then you have passed something fake off as real. So yeah, you're right.

Rinat Malik:

Yeah, that's when controversy comes in. But again, listening to you, I thought of another positive use of deepfakes: people with disabilities, like people who have lost their voice for various reasons. They could create audiovisual content using their previous voice samples to generate a new voice. Obviously, if you're recreating a dead person's persona, that could still be ethically controversial, whether that person would have wanted it or not. But in a real-life scenario, the person who owns that voice could be helped. Not any kind of disability, obviously, but any disability that affects the voice or the visual side of a person; they could improve their presence on social media, or on the internet altogether, by using deepfakes. So there are various positive uses, and I'm sure the

Rinat Malik:

audience would be able to think of many other positive ones. But this is one of those technologies where, as soon as you know what it is, the first things that pop into your head are the negative uses.

Amit Sarkar:

Because it has the word 'fake' in it, right? And fake means scam, fake means not real, fake means treachery, fake means getting ripped off. So yeah, that's why I think it creates that image: okay, deepfake, it's something that's not real, something that can create controversy,

Amit Sarkar:

Something that can cause harm, something that's used to spread misinformation. So yeah, those are the kinds of things you think about when you think about deepfakes.

Rinat Malik:

Yeah, and another very controversial negative use, from what I've heard, is that the majority of all the deepfakes available on the internet right now are pornographic. A lot of the time, what's happening is a celebrity's face is being put on a porn actor's body, and those images are created without the consent of the person whose face is being used. I mean, obviously, if it's all consensual then that's different.

Amit Sarkar:

Well, yeah, I don't think people would give consent to those things, because it's something no celebrity would do. But that's something that is very popular, unfortunately. A lot of celebrities' reputations are put to the test using these technologies, and it's not good. It's not the right thing to do, it's not ethical, but the technology is so easily available, people can so easily get away with it, and there are still no regulations. That's why there's a lot of pressure on social media platforms to identify deepfakes, identify any fake profile, identify any profile that's sharing misinformation, and then contain it. It's very easy to create an image of a person and just share it online. It's like content creation, right? You're creating content every day, or every week, or every month, and then you're sharing it online. The moment you share it online, someone can just create a copy and keep it, and even if the original gets deleted, it spreads like wildfire. There's always someone listening or watching, and that's how sometimes there is news of something, a video or an audio clip, and the government decides, okay, we have to get this deleted from the platforms. But because someone made a copy, you can't actually delete it; it now spreads like wildfire through Twitter, Facebook, Instagram, Snapchat, TikTok, etc. It's unfortunate, but that's what happens. And it's not just celebrities who get impacted; you can get impacted too. You can get a call from your mother. You don't know if it's really your mother. You see the number, but maybe it's a kidnapper, and you get a call in your mother's voice asking for money. Because you recognise the voice, you're not seeing her in person or on a video call, and you trust that it's the same number she always calls from, you actually believe it, and then you transfer some money. It's a real scam; it happens, and you are duped out of your money, just because of a simple voice call where you could not decide whether it was real or fake.

Rinat Malik:

Yeah, absolutely. I suppose the three most damaging usages of deepfakes, in no particular order, are: one, pornography; two, spreading political misinformation; and three, scamming people on the internet.

Rinat Malik:

Yes. Yeah. It's easy to be an impostor. And usually people question video a lot less, right? From the early days of computing, not even the internet, Photoshop was there. So doctoring images has been around for a long time, and we've all become somewhat careful and cautious about images being fake. But videos being fake we don't think about straightaway. You also don't come across them very much, or you don't think you come across them very much. And the same goes for audio.

Amit Sarkar:

Yeah, video with audio. So say Facebook's Mark Zuckerberg is now saying something very controversial. It's not real. His face is there. His voice is there. But it's not a real video.

Rinat Malik:

Could it also be that celebrities maliciously use it the opposite way? Like, for example, if they

Amit Sarkar:

To create publicity.

Rinat Malik:

Sorry?

Amit Sarkar:

To create publicity?

Rinat Malik:

Yeah, and also, say for example some sort of confidential audio or video is leaked of them being, you know, politically incorrect or something, and it's actually a true scenario, but then they say, oh, it's actually a deepfake, people.

Amit Sarkar:

Yeah, exactly. You can always deny it by saying that. And then it lies with the other party to prove whether the audio was real or not, whether it was doctored or not.

Rinat Malik:

Yeah, though I think deepfakes still haven't reached that level of perfection.

Amit Sarkar:

I mean, with images it has; you can easily create very convincing fakes, and with audio I think it has too. With video it's a bit trickier, because with video you have to get a lot of things right: the face has to move with the body, the voice has to match the movement of the lips, the accent, the colour, everything has to match. So it's very difficult with video, but with images you can easily create a lot of fake stuff, and I think the results are very good. And audio: I think there was fake audio even before the internet, via phone tapping. People used to record voices, and record voices over voices. So those kinds of things were already there. But video is the most challenging part, the most compute-intensive; that's where it gets hard.

Rinat Malik:

Yeah, basically there are many ways of using deepfakes, and of claiming deepfakes have been used when they haven't. As for the level of perfection of videos: I have seen some deepfake videos, but they weren't meant to be perfect or very realistic; they were created as jokes, so the creators didn't care about making them ultra-realistic. So I don't know, if someone made a lot of effort to make one realistic, whether it would be easy to spot or not. But the videos I've seen were kind of easy to spot; I could tell this doesn't look real, this isn't exactly the celebrity. And also, hopefully some of you already know this and some of you won't: there's an interesting theory about the uncanny valley. Basically, as technology progressed, we started building robots, and content, that were more and more realistic. The more human-like something is, the more we tend to like it and relate to it. But it reaches a point where it's almost real yet not fully real, and that's when we feel very uncomfortable. If you graph it, the more human-like it is, the more we get attached to it, but when it reaches a level where it's very human-like but not fully there yet, say 90% to 98%, we feel very uncomfortable, because it's not human but it's still very much like a human. So there is a very uncomfortable region, and that's the uncanny valley. I feel deepfakes are now at a stage where they're very realistic, but you still know they're not real; your brain can identify that it's not a real human, and as a result you feel a little uncomfortable. But then again, if it's presented to you in a strategic or planned manner, you don't really concentrate on whether the video is real or not. For example, when you're watching something really quick, with a lot of things happening in the video, you wouldn't even question it because you're invested in the story. And then you have just consumed a fake story. So yeah, there are so many aspects to look at, and it's interesting how it can be used.

Amit Sarkar:

Yeah, because, I mean, if you remember the film The Matrix, they talk about that: if you create something too perfect, humans will reject it, but if you create something slightly imperfect, humans will accept it. And I think that's what you're referring to; sometimes the imperfections make it more real. If you try to make something very perfect, then you know it's fake, because reality can't be perfect.

Rinat Malik:

Yeah, absolutely, exactly what I'm talking about. Audience, feel free to Google 'uncanny valley'. It's a very interesting topic.

Amit Sarkar:

Yeah, we will share it in the description. It's a very interesting concept, and I think our audience will like it; it's very useful to know. So thanks for sharing, Rinat. So yeah, with all these things, video, audio, voice, you're creating all these technologies to make fake stuff and then passing it off as real. And then there are companies on the other side, like Facebook, Instagram, Google and Microsoft, and what they have to do is figure out a way to identify the fakes and the misinformation, and then remove them. Because what can happen is that during an election, you can actually influence voter behaviour just with some fake news. It could be a news story, an image or a video, and because of that you change the outcome of an election. And that is actually quite dangerous, because that's voter manipulation, and you should not be doing that. You can campaign, but manipulating voters is something else. This is very controversial, and a lot of companies are now liable to remove misinformation from their platforms; it is now their responsibility to get misinformation and these fake things off the platform, ban them, remove them, etc., because they influence a lot of people. Facebook is very powerful; about a billion users log into the platform. Imagine if something goes viral on Facebook: a billion people could read or view that content. Imagine the power of that. That's where I think companies have a huge responsibility, because of the scale of their platforms, to make sure this kind of misinformation doesn't spread. That's why more regulation is coming in now, because earlier, information could only travel as far as we could travel, but with the internet it can travel anywhere, instantaneously. It can even go to space; we could send fake news to astronauts on the International Space Station.

Rinat Malik:

Yeah, I think they can also access social media anyway, so you don't even have to send it targeted. They could just be, you know, looking at Facebook.

Amit Sarkar:

Exactly, and then you can feed it into a radio broadcast. And if they are, say, travelling to Mars or a very distant planet, and every audio message they hear from Earth takes several minutes to arrive, then you can intercept that audio, send something you have faked, and create a panic.

Rinat Malik:

Yeah, it could be malicious, to be honest. And because there is a delay, obviously, a good eight to twelve minutes of delay, right?

Amit Sarkar:

Yeah, depending on where you are in the solar system, yes.

Rinat Malik:

Yeah. So I'm talking about Mars. Just to get a message to Mars, there is quite a significant delay. Say, for example, 150 or 200 years from now, or less, there is a civilisation on Mars, a large group of people living there, and someone sends a fake message there made with a deepfake, which would probably be even more advanced and more realistic by then. That could cause a frenzy or a panic. To actually ascertain whether it's real or fake, say it's a message from the president of some country, they would have to communicate back to that president or government, the government would have to verify it and send back: okay, by the way, this is fake. That would take at least an hour or two, even for the most responsive of governments. And by that time, if it was quite a serious message, havoc would have started across the entire planet. So it could be so dangerous in the future. I wish more regulation would come in very quickly. And just as we have technology to create deepfakes, there could be more technology to identify them, maybe with the click of a button, to tell whether something is actually fake or not. That's probably the technology that's needed in the current situation.
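For context on that delay, here is the quick arithmetic: one-way light travel time between Earth and Mars at their closest and farthest commonly cited separations. The exact delay depends on where the two planets are in their orbits.

```python
# Quick arithmetic behind the "delay" point: one-way light travel time
# from Earth to Mars, at the closest and farthest distances.
SPEED_OF_LIGHT_KM_S = 299_792.458
for label, km in [("closest", 54_600_000), ("farthest", 401_000_000)]:
    minutes = km / SPEED_OF_LIGHT_KM_S / 60
    print(f"Mars at {label}: about {minutes:.0f} minutes one way")
# -> roughly 3 minutes at closest approach and about 22 at the farthest,
#    so the "eight to twelve minutes" quoted above sits in between.
```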

Amit Sarkar:

I think what is happening is that the AI which is being used to create all these deepfakes is also now being used to identify fake content and automatically remove it, because imagine if a human had to go through every single piece of information, image, voice and video; that would be very difficult, right?
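As one concrete illustration of what automated screening can look like, here is a minimal sketch of a classical (non-AI) heuristic called error level analysis: resave a JPEG and diff it against the original, since regions pasted in from another image often recompress differently and stand out. This is just one simple signal, not how any particular platform actually works; the filename is a placeholder.

```python
# Simple screening heuristic (error level analysis): resave a JPEG and
# diff it against the original. Edited or pasted-in regions often
# recompress differently and show up as bright areas in the diff.
# Illustrative only; assumes Pillow and a local file "photo.jpg".
from PIL import Image, ImageChops
import io

original = Image.open("photo.jpg").convert("RGB")

# Re-encode at a known JPEG quality, in memory.
buf = io.BytesIO()
original.save(buf, format="JPEG", quality=90)
buf.seek(0)
resaved = Image.open(buf)

# Pixel-wise difference; unusually bright regions warrant a closer look.
ela = ImageChops.difference(original, resaved)
extrema = ela.getextrema()  # per-channel (min, max) differences
print("max difference per channel:", [mx for (_, mx) in extrema])
```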

Rinat Malik:

To be honest, yeah. First of all, that is difficult, and it's still not 100% accurate. I mean, the content is created exactly to misinform humans,

Amit Sarkar:

To fool you.

Rinat Malik:

Yeah, yeah, absolutely. So this is a very interesting side of it for me. Obviously, I'm a big fan of technology, and as a result of AI and machine learning and everything. But in some cases I feel decision trees are preferable, because even if AI is used to identify whether something is fake or not, AI isn't 100% accurate.

Amit Sarkar:

Yes, because it's trained on a model, and the model is not 100% accurate, because it's made by humans. So you will have inherent biases in it.

Rinat Malik:

Absolutely. So as a user of that AI, because usually AI is 90 to 95% accurate, if I'm the person using it to identify misinformation, I would always take the result of that AI as accurate, because 95% of the time it is. So if it did make a mistake in a very critical situation, I would, in a biased fashion, believe it anyway, and that could be quite harmful. In these kinds of scenarios, I feel 100% accuracy is necessary, and that is usually ensured by a very robust algorithm, and such algorithms are usually decision-tree based. It could be very complex and very capable, and being based on a decision tree might make it slower, or might mean it can't decide every case, but when it does give an answer, it's deterministic. So in very critical scenarios like this, I feel I would trust an AI less, and I would probably trust a well-thought-out, well-structured, decision-tree-based algorithm more. It sounds counterintuitive, but to prevent AI errors, I feel decision trees might be the way. I would love to hear feedback from the audience: what do you guys think about this? What are your thoughts?
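A minimal sketch of the contrast Rinat is drawing, using scikit-learn: a decision tree's verdict is an explicit, auditable chain of if-then rules rather than an opaque score. The two features here, a compression-artefact level and a lip-sync error, are invented purely for illustration.

```python
# Sketch of the decision-tree idea: the model's verdict is an explicit,
# inspectable chain of rules rather than an opaque score. The two toy
# features (artefact level, lip-sync error) are purely illustrative.
from sklearn.tree import DecisionTreeClassifier, export_text
import numpy as np

# Toy training data: [artefact_level, lip_sync_error], label 1 = fake.
X = np.array([[0.9, 0.8], [0.8, 0.7], [0.1, 0.2], [0.2, 0.1],
              [0.7, 0.9], [0.3, 0.2], [0.9, 0.6], [0.1, 0.3]])
y = np.array([1, 1, 0, 0, 1, 0, 1, 0])

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# Every prediction is a deterministic walk down these printed rules,
# so a reviewer can audit exactly why a clip was flagged.
print(export_text(tree, feature_names=["artefact_level", "lip_sync_error"]))
print(tree.predict([[0.85, 0.75]]))  # -> [1], and we can say exactly why
```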

Amit Sarkar:

I think, yes, decision trees and algorithms, it's absolutely true. We cannot blindly trust a technology that we have created with our own inherent flaws and inherent biases; there has to be some kind of check. So it's very interesting to think about it in that aspect. Because yes, technology can be used to create these fakes and also to identify them, but how much do we trust that technology? We have to think like sceptics, because even the technology we create to identify misinformation could be flawed. So you're right to highlight that: we have to be very careful, take everything with a pinch of salt, and then decide whether what we have identified really is the misinformation we want to remove or not. I think there have to be two or three layers of filtering. Like two-factor authentication when we talk about passwords, right? You have two layers of authentication to prove someone's identity when logging into a system. Similarly, if we want to remove some misinformation, we can use AI to first do the hard work of identifying what it thinks is fake and not fake, and then ask a human to validate those results. That can massively reduce the effort for humans, who would otherwise have to go through each and every single image, video or audio clip.
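A minimal sketch of the two-layer filtering Amit describes: an automated first pass scores each item, confident cases are handled automatically, and everything uncertain goes to a human review queue. The score and the thresholds below are placeholder assumptions, not any real platform's pipeline.

```python
# Sketch of two-layer moderation: an automated first pass scores items,
# confident cases are handled automatically, and everything uncertain
# is routed to a human review queue. Scores and thresholds are
# placeholders, not any real platform's pipeline.
from dataclasses import dataclass

@dataclass
class Item:
    item_id: str
    fake_score: float  # 0.0 = looks real, 1.0 = looks fake (from a model)

def triage(items, clear_below=0.2, remove_above=0.95):
    auto_cleared, auto_removed, human_queue = [], [], []
    for it in items:
        if it.fake_score < clear_below:
            auto_cleared.append(it)   # confident it's genuine
        elif it.fake_score > remove_above:
            auto_removed.append(it)   # confident enough to act on
        else:
            human_queue.append(it)    # uncertain: a human decides
    return auto_cleared, auto_removed, human_queue

items = [Item("a", 0.05), Item("b", 0.60), Item("c", 0.99)]
cleared, removed, queue = triage(items)
print(len(cleared), len(removed), len(queue))  # -> 1 1 1
```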

Rinat Malik:

Yeah, that's potentially the only way we have right now, but I'm still not convinced it would work. The reason is lottery tickets. People still buy lottery tickets even though the probability of being struck by lightning is higher, because they don't see the millions of people who bought tickets and didn't win; they see the one who did. It's an inherent bias. If you turn it around, the person checking whether something is fake knows that 95% of the time the answer will match whatever the AI predicted anyway. So they are going to be biased to think: oh, it's probably what the AI said, because it's right 95% of the time. Whenever you're checking something and it keeps matching the AI's result, would you really question it when that other 5% comes along? So yeah, I'm not offering a solution; I don't have one. Off the top of my head, my initial thought is to use a decision tree or a deterministic algorithm, but with AI-based systems, how do you make them more robust? I don't know. Obviously, some AIs also give a confidence score with the result. For example, one identifies a line of handwriting and says: okay, this is the text, and it's 97% confident that it's correct, or 75%, or however many percent. But even the confidence score is generated by the AI, which is again inherently biased. So it's a very interesting thing to think about if you actually want to be foolproof, with no grey area, and have a black-and-white decision made. Decision trees feel like the way. But the thing is, outside of those critical scenarios where we need absolute 100% certainty, usually we are happy with 95%. Obviously, if havoc is created on a planetary level, like we just talked about, then yes, that's a scenario where you need it, but in regular life, as long as you have over 90% accuracy, you can sort of get by.

Amit Sarkar:

You can get by, yeah. But I think elections, or anything very political, anything that can affect the aggression between two countries: if it can start a war, if it can influence an election, if it can jeopardise someone's status, like a celebrity's, that's where you think, yes, this is very critical, this technology has to identify it, otherwise it can destroy someone's reputation, people's lives, etc. Those are the places where I think it's very critical. In the majority of cases, yes, you're right, it may not be that critical; it's just for a meme, just to make fun of people, etc. But when it starts affecting people's lives and things start getting real, that's where we need solutions, and that's why the companies need to step up. Decision trees, AI, humans: there are various ways to do it, but it's like, do you trust the system that created the system? It's difficult. That's why, whenever you read anything on the internet... and this I can now say with some certainty, because we have been talking about technology for over two years, right? We read a lot, and that's how we're able to come up with all these new topics. When we read, we have to identify whether the sources we're reading are real or whether they're actually spreading some kind of misinformation. We have to validate some of the sources with multiple reads, etc., and we now know, to a certain degree, which sources to trust and which not to trust. Similarly, whenever you go online, you need to be very careful about which sources you trust and which you don't. And I'm not talking about people. You can trust someone; I can trust Rinat, but that doesn't mean I automatically trust what Rinat is sharing. There's a difference. If what Rinat is sharing comes from a reliable source, like the BBC, then you can be confident: Rinat I trust, and the things he shares from the BBC I can trust, so I can safely say the information he's sharing is real, and I don't have to do my own research and fact-checking. That's why, whenever we share information, we try to do the fact-checks and post the original source of that information if we can. So, we talked about deepfakes, we talked about the harmful effects, we talked about the positive effects. But in the end, why are we talking about it? The reason is to create awareness, as we always want to do with our podcast and the show, and to make you aware that something like this exists, it can impact lives, it can cause harm, it can dupe you of your money. So please be careful, please be aware, please be alert. Do your homework and don't just share things randomly. Try to validate the source. That is very important.

Rinat Malik:

Absolutely, yeah, very much so. You might be thinking: yeah, this is a technology, I'm aware of it, so now, as a consumer of content, I'll be more careful. But do be mindful of the fact that you're not only a consumer. Definitely be careful as a consumer of content whenever you read. But when you share something, you're not just a consumer, you're also a spreader of information. So you do a few things: you consume content, and then you share it or not; if you share it, that's another kind of activity in this arena. And then there's also what could happen to you: someone else could make a deepfake of you, without your knowledge or consent, and use it for malicious purposes. For that, of course, be mindful of where you upload your content and careful about what you say. But again, how much can you really do? We're uploading our content, video and audio, and our voices could be copied and made to say something completely different. So I suppose, at this point, it's important to be vigilant about your online presence, so you know where and when you shared your content, and if something appears outside of that, you know it's very likely fake. So yeah, it's about being vigilant about the content you consume, the content you share, and the content you put out, keeping track of it so you can easily tell if you're the victim. All of these different ways.

Amit Sarkar:

Absolutely. Everything is now stored online, and it makes life so convenient that we sometimes forget what can be done with that kind of information. Say your cloud account gets hacked, and all your photographs are now public, including some private pictures you had on the cloud. Imagine a hacker gets hold of a photograph like that and spreads it; your reputation is ruined. Now think of it like this: fake information, fake profiles, fake stories have always been there. So why are we talking about deepfakes? Because now AI has come into the picture, and it's more convincing. It's more convincing for me if I see a video, or see an image, or listen to something, than just a fake profile or a fake news story; it's more powerful. If I watch someone talk about something, even though it's fake, I can easily be influenced by it, simply because it was created using artificial intelligence. That's what we have to be aware of: technology like this can be used for malicious purposes, and we just have to be careful about it. And think about this: currently we have so many biometric technologies, right? Voice recognition, face recognition, etc. We unlock our laptops with fingerprint scanners, voice, facial recognition. What if, sometime in the future, someone could use this technology to unlock a device just by downloading a piece of your audio, running an AI algorithm, creating a voice spectrum, having it utter a few words, and playing it out loud so your laptop gets unlocked? Imagine. That's the scary part.

Rinat Malik:

Absolutely, yeah, very much so. So I suppose we'd like to leave you with this thought, as we usually do: be aware that this technology exists, and also try to find the positive uses of it. If you find a positive use that hasn't been professionally implemented yet, then why not go for it? The technology exists, and if the use doesn't exist yet, that's a massive gap in the market; you could potentially have a really good business idea out of it. So yeah, I encourage people again to be aware of all the technologies that exist; that's why we talk about these things, and hopefully something good will come of it, and hopefully this will make you cautious enough that something bad will be avoided. So that's my final thought. Hopefully you guys have enjoyed the topic; I very much enjoyed talking about it, it sparked a lot of new thought experiments, and it was quite interesting to explore this topic with you, Amit.

Amit Sarkar:

I think this technology is quite new. I read about it and thought, okay, it's a very interesting topic to talk about, because it's quite relevant today. A lot of people are getting impacted, and we sometimes share misinformation just because we are not very careful. As a tech podcast, it's our job to share this kind of information with our listeners and viewers so that they are more aware. Thank you so much again, Rinat, for an absolutely wonderful conversation about another interesting topic in technology. I hope our viewers and listeners have learned something new. See you next time. Thank you so much again.

Rinat Malik:

Reach out to us; our contact details should be in the description. Let us know your feedback or ideas, or if you'd like to join us as a guest. I hope to see you guys again next week. With that, thank you very much.
