Humanoid Robots
Episode 70 • 18th June 2023 • Tech Talk with Amit & Rinat • Amit Sarkar & Rinat Malik
Duration: 00:45:43


Shownotes

Robots come in all shapes and sizes, and they perform all kinds of functions. But the robots that look like us are called humanoids, and they are the most exciting robots around right now. There are both advantages and disadvantages to a bipedal robot, yet there are applications where a robot that looks like a human gains far more acceptance from the general population than other types of robots: assisting the elderly and disabled, working in dangerous environments, and providing companionship.

In this week's talk, Amit and Rinat discuss humanoid robots: why they are becoming more popular, their applications, and a lot more!

Transcripts

Rinat Malik:

Hi everyone, welcome to Tech Talk, a podcast where Amit and I talk about various aspects of technology and its effects on our socio-economic and cultural lives. Today we're going to talk about a very interesting topic: humanoid robots. Thank you, Amit, for coming up with this topic. We've talked about similar topics in robotics before, but as the technology progresses, more and more cutting-edge companies are getting into robotics and building very human-like robots, so we thought, why not talk about the humanoid part of robotics again? There are many ways robotics is advancing, but the humanoid ones are quite interesting, and I think our audience will find them quite fascinating as well. So yeah, let's talk about humanoid robots.

Amit Sarkar:

Thanks, Rinat, for that introduction. I was actually fascinated by this topic because Tesla recently released a video about its own humanoid robot, and it got me thinking: okay, so Tesla is now investing in this. We know there are other companies who have done a lot of good work in this area, especially Boston Dynamics, and Honda has previously developed the ASIMO robot, A-S-I-M-O. So these companies have developed robots that look similar to human beings, and that's why they are called humanoids. They're different from androids, and I'll just touch on that for now: they're humanoids because they are robots; they are not humans, but they look like humans. Androids are part human, part machine, and we are not there yet. Humanoids look like human beings but are completely robotic inside. An example of an android would be the character Data from the Star Trek series The Next Generation. Humanoids, by contrast, just look like humans; if you look at them, you can see that they are robots. They are bipedal, so they have bipedal locomotion, in the sense that they have two legs just like human beings. They have two legs, they have two hands, they have a mouth to talk with if they are talking, and they definitely have a head with some cameras that look like eyes. I don't think they have ears, but they might have sensors; it depends on what kind of humanoid you are trying to build. And they have a torso. So yeah, they are very similar to humans, it's a very interesting piece of technology, and that's why I wanted to talk about it.

Rinat Malik:

Yeah, absolutely. And thanks, Amit, for giving this insight into different types of robots; I want to dive a little bit deeper into it. We've talked about software robots before, but today we're talking about physical robots. When we talk about physical robots, we immediately think of humanoid robots, big thanks to the media, to be honest. But there are many different types of robots, robots that don't look like humans at all. They don't even have a torso; it could be a car or another vehicle, anything unmanned that moves around with an objective could be called a robot. To be honest, as the technology advances, the definition of a robot is becoming more and more vague because there are so many different types. Among all of them, a very popular, very identifiable type is the humanoid. In the early days, decades ago, it looked like tin-can-shaped limbs; R2-D2 and C-3PO come to mind. Then, as the technology advanced and we got better at building more nuanced limbs and the motor functions of those limbs, robots became more and more human-like. And the more I talk about it, the more I want to mention an interesting topic called the uncanny valley. I don't know if you know about this, Amit; it's a fascinating topic, and I think our audience will enjoy it as well. As humans, whenever something looks like us or behaves like us, we find it more comfortable to deal with. Because of that, humans naturally want to interact with other human-looking things, and that's why humanoid robots have become popular: companies want to build things that will be widely accepted. There are beloved movie characters that we really liked, and as more humanoid robots appear in real life, we start liking them too. But as a robot becomes too much like a human, without quite being one, we start to get a strange, uncanny feeling, and that's when we stop liking it. Just when it reaches the stage where it very much looks and behaves like a human but something is missing, we feel really uncomfortable. If you put it on a chart, an X-Y plot where robots slowly become more and more human and we find them more and more likeable, the line goes up and up, and then at one point we no longer find them comforting or familiar; we feel very uncanny and strange, and the level of comfort drops. Then, when it becomes fully like a human, we feel comfortable again. So it creates a valley in that X-Y chart, and that's what's called the uncanny valley. Obviously, if it's fully human-like we don't know it's a robot, which is why we're comfortable again.
There's also a sort of conspiracy theory about why evolution made us this way, so that something very much like humans, but not human, makes us feel awkward, uncomfortable, or strange. Maybe there was something like that which was very harmful to us in the past, but that's just a theory; I don't know of any science backing it, so I'm not supporting it in any way. It is an interesting idea to think about, though: as humanoid robots become more and more like humans, there is a point where acceptance from humans drops.
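A minimal Python sketch of the uncanny valley curve Rinat describes: comfort rises with human likeness, dips sharply when a robot is almost-but-not-quite human, and recovers once it is indistinguishable. The numbers are invented purely for illustration, not real survey data.

```python
import matplotlib.pyplot as plt

# Hypothetical data points tracing the uncanny valley shape (not real measurements)
human_likeness = [0, 20, 40, 60, 75, 85, 90, 95, 100]   # how human-like the robot looks (%)
comfort        = [10, 25, 45, 65, 70, 30, 15, 60, 95]   # how comfortable people feel with it

plt.plot(human_likeness, comfort, marker="o")
plt.xlabel("Human likeness (%)")
plt.ylabel("Comfort / likeability")
plt.title("Uncanny valley (illustrative only)")
plt.show()
```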

Amit Sarkar:

Interesting. I'd never heard this theory, but I think I've heard bits of it. Yes, people find robots more acceptable when they look like human beings. And it's incredible how human beings have evolved on this one planet, with a specific gravity, a specific temperature and so on, and ended up with a pair of eyes, a huge brain, hands with five fingers, and legs. We can crawl, we can do so many things with our limbs. Evolution has played a huge part, and yes, if something looks like us, we definitely accept it more than if it looks like a dog or a lion or something else. So I can relate to that. It's a very interesting theory, I will definitely want to read more about it, and thanks for sharing it, because it's a good thing to know about, especially as these robots become more and more a part of our lives, and in some scenarios they already are. But let's take a step back. We have robots that look like humans, and we know that's good because people find it more acceptable, so we try to create robots that look like humans. We've talked about the appearance: they look like humans, with a head, hands, legs, a torso, everything. But what are the mechanics behind running a robot? A robot is normally autonomous, though some robots are remote-controlled. We used to have remote-controlled toy robots. My son is growing up, so people give him toys; he's too young for robots, but he still has a remote-controlled car, a remote-controlled excavator, and so on. And I remember from my childhood that I was gifted a robot: you put in some batteries, switch it on, and it does some moves, and it does those moves autonomously. So it definitely has a chip with a program on it, and that program tells it what to do. More advanced robots take inputs from sensors, process them, and give an output in the form of motion, some action, a voice response, maybe a nod of the head. Those sensors could be anything, for example cameras recording what the robot is viewing. Then you can have algorithms that recognise the surroundings, say through machine vision; we've talked about machine vision before, and we've talked about AI, so how do we leverage that? Then there is the whole question of balance. If they look like humans, they have to balance themselves. We find balancing easy, but watching my son grow up, he initially found it very difficult before he got the hang of it; now try balancing yourself on a skateboard, which takes it to the next level. So balance is another aspect. Then there are other things: how do we power the robot, and where does the battery pack go? These are all the mechanics and algorithms that have to sit on a chip, plus the hardware that's needed. And then, how strong is it? What are the materials? How do we make it more flexible? How do we get the robot to do the whole range of human motions, so its hands can move in as many directions as ours can? How do we make sure the grip of our fingers can be replicated on the robot, because we can lift an egg without breaking it?

Can the robot do the same thing? And can the robot then pull a car, maybe with a bigger force? How do you make a robot understand all of that? There are different nuances to all these things, and I think that's where it becomes more and more interesting.
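A rough sketch of the sense-process-act loop Amit outlines: sensors feed a decision step, which drives actuators. Everything here is a hypothetical stand-in, not a real robot API.

```python
import random
import time

def read_sensors():
    """Stand-in for camera, IMU, and touch readings."""
    return {"tilt_deg": random.uniform(-5.0, 5.0), "object_seen": random.random() < 0.3}

def decide(sensors):
    """Toy decision logic: keep balance first, then react to what the cameras see."""
    if abs(sensors["tilt_deg"]) > 3.0:
        return "shift_weight"
    if sensors["object_seen"]:
        return "reach_for_object"
    return "stand_still"

def act(action):
    """Stand-in for sending commands to motors."""
    print(f"actuators -> {action}")

# A real controller would run this loop continuously; five iterations for illustration.
for _ in range(5):
    act(decide(read_sensors()))
    time.sleep(0.1)
```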

Rinat Malik:

Yeah, absolutely. There are so many aspects to think about with a humanoid robot, but I keep coming back to the core of how we landed on this humanoid design. There could be robots built for a specific purpose, and there could be general-purpose robots, if you like. In both cases, if you were just an engineer with no view of socio-economic interaction, human ergonomics, and all of those things, you would simply come up with the best design to perform a specific task, or the best designs for general use, and those designs may not look humanoid at all. So how did we end up with the humanoid shape as one of the popular types? Obviously, millions of years of evolution have produced this shape, the shape of the species that is, in a sense, ruling the world, so there clearly are benefits for a general-purpose robot in having a similar shape and similar abilities. When we ask how humans got to the top of the food chain, if you want to put it that way, the answer is our big brain. On a humanoid robot, the processor is not necessarily going to be in the head like ours is. But when we were confronted with a predator, a tiger for example, I think our brain still made a big contribution to our survival.

Amit Sarkar:

The fight-or-flight mechanism: some people freeze when they're afraid, some people start running, so…

Rinat Malik:

So the decision-making part might be done by the brain, but if your decision is to flee, you need to be able to take flight very quickly and survive, and a lot of us did, which is why we still exist. So we must not forget the nuanced design we have evolved into; it's a very strong design that was refined over millions of years of evolution. Usually, when we try to find a solution to a problem and we look at nature, we find a good one, and in this case looking at nature is like looking in the mirror: we find that the best kind of general-purpose robot, and the shape it could take, is us. Now, one could argue that the legs we have are not really good for moving fast; we could have had wheels with axles, although evolution wouldn't allow that, but that's a different topic altogether. We might think that would be a better design, but when it comes to terrain like rough surfaces, stairs, or mountainous areas, wheels, or even the tracks that tanks have, wouldn't really work the way two legs do. So there are many general-purpose benefits to a humanoid robot, not just being a friend to humans, which is also an important part, of course; there are many benefits of this design anyway. Then, how do we make this design possible within all of these constraints, mechanical constraints, electrical constraints, how do we power it, and so on? That's obviously a whole new set of problems, because the way we fuel ourselves is completely different; we generate energy through chemical reactions, and batteries are also doing chemical reactions, but in a different way. The first thing that comes to mind when we talk about power and batteries is that we continuously consume fuel, food, to keep ourselves going, and we have a lot of backup reserves; people can survive for something like five days without food, though not without water, but that's a different topic altogether. If we looked at the problem that way, maybe we could think about better solutions. Obviously, using solar panels to continuously recharge may not be technologically advanced enough yet, because the amount of power generated may not be enough. But it's interesting how the companies at the cutting edge of this, like Boston Dynamics, and now, with the recent news, apparently Tesla as well, are very much into this area. It's a very interesting space and one to watch.

Amit Sarkar:

Definitely, and I think you put it right: why should a robot look like us, and what advantages does that have? We're not very fast, that's for sure. We're not very strong; there are stronger animals in the world. We don't have great vision, and we don't have a great sense of smell or taste. So we are very average in comparison, but as a package we are very good, because we have a large brain compared to other species, and that, along with our hands, has given us the ability to rule the world. Without the opposable thumb, it's very difficult to hold any tool or to grab anything and work with it. Our hands give us an advantage: we can use tools, we can use stones and wood, we can make fire, we can make wheels, we can make tools to cast iron, and so on. That gave us a very specific advantage over animals, and that's why animals never went beyond a certain point: a lion cannot hold a screwdriver, a lion can't hold a drill like we can, so it loses out on that. Maybe we can't run as fast, or we're not as strong as a lion, but we can hold a screwdriver and a drill, and we can do a lot of things with those kinds of tools, and that gives us very specific advantages. And when it comes to bipedal motion, it's very rare in the animal world. Maybe only the apes, and they still use their hands a lot; gorillas and chimpanzees have very strong upper bodies and legs they can hang from. The majority of the animal world uses all four limbs to walk around and move, which gives them their own advantages. But when it comes to balancing, standing upright gives us an advantage: we can swim, we can stand on things, we can go up stairs, we can balance ourselves on a rope, and so on. So there are limitations, but there are advantages too. And it's a good thing to think about why robots should look like us, because we are trying to solve our problems, for us. The reason we are trying to create these robots is to replace some of the manual work we are doing. Take construction work, say drilling up roads: it creates a lot of noise, and I don't want to expose myself to that kind of noise or to the vibration of holding the drill. I don't want to go down the sewer; maybe I'll send a robot, because it's a very dirty job. I don't want to go up a high electric cable; it's quite high, I can fall, I can get electrocuted. I don't want to go into a radioactive zone where tools need to be operated, because I could get cancer or mutations in my body. So there are specific scenarios for which we need humanoid robots. Even on the assembly line, maybe we replace people with robots. I'm not saying that's bad for the people already working there; those people can do other constructive work, and we can use them for something else where they are better utilised. So I think it's a win-win situation.

And that's where the trend is going. Plus there's companionship: in the age of social media and COVID, there are specific advantages to a robot that looks human. That's why I think most of the companies are now investing in robots that look like human beings, especially with Tesla entering this market. And this is an advantage specifically on this planet; if you go to, say, the Moon or Mars, you might need a different set of robots, like rovers that can go across different terrains, because you don't need a lot of tools there, you're still exploring. For exploring, wheels are the fastest way to move, rather than having robots walking on stones and pebbles and trying not to fall; wheels are much more stable, so that's a different type of robot. You mentioned that there are specific robots for specific things, but I think humanoid robots have those general advantages. Another thing is that if the menial work is done by humanoid robots, it gives us a lot of time to do other creative, innovative work. It's not that we're trying to push people out of their jobs; it's to make sure the robots do the work that is boring, repetitive, and not very good for your mind, while people take up other roles and job opportunities based more on their cognitive skills, where they can use their abilities much better.

Rinat Malik:

Yeah, I know this is a somewhat controversial topic, and it's not fully related to robots, but as you were mentioning the jobs, that we want to give humans more cognitive jobs, I was thinking: of course, that is usually more fun and meaningful, but sometimes doing something repetitive is a little bit therapeutic. You know, doing the ironing and singing; sometimes you just want to do the same movements without having to do a lot of cognitive thinking. But I guess we are making it so that the future world will not have much of that work for humans to do, and I don't know what kind of effect that will have. It's an interesting thing to consider, and I'm just thinking out loud, but as humanoid robots become more advanced and can do all of these things, like ironing or washing dishes, they take those tasks away from humans, and sometimes humans do enjoy some of those things, just to zone out while doing them.

Amit Sarkar:

I think that's a very good point. But then, look, we have robot vacuums, and it doesn't mean we can't vacuum ourselves. We have a Dyson and it can vacuum, but it needs a human to operate it. So you will still have those kinds of opportunities if you want to do things yourself; there will always be certain tools that are not smart enough to operate by themselves and are just good enough. We have a dishwasher, which is kind of a robot: it automatically washes dishes, but we still wash dishes with our hands, and dishwashing liquid is still being sold in the supermarkets. Dishwashing has not gone anywhere; people still do dishes themselves even though they have dishwashers, especially in a country like the UK where almost every house has a dishwasher or a washing machine. People still wash dishes, and sometimes even their clothes, by hand. So there will always be opportunities for people to do such things if they want. But the whole idea is that you have a limited number of hours in a day: will you spend them ironing clothes, or would you rather go out and do something, play tennis, explore on a hike or a walk? I think those are things humans enjoy even more than ironing, right?

Rinat Malik:

Yes, that is true, actually. That's a good counter to what I said. Okay, so let's now talk about the AI part of robotics. We are making robots look more and more like humans, and at the same time, separately, research and technological advancement is happening so that non-humans can start to think and process information like humans. Once those two things are combined, that would be a real leap for humanoid robotics, because a robot would not only look and behave like a human, or close to it, but also be able to solve generic problems like a human, in any scenario, even without context. That would be powerful, and it obviously comes with certain challenges. The first thing that comes to mind is the processing power and where to house it. We house ours in our brain, on top of our neck, and one could argue that may not be the best design. There are also certain structural differences between robots and humans: the way we store and process energy, our healing ability, the way our motor functions work. These are all structured differently, and those differences could affect the design. That makes me think of the Boston Dynamics prototypes of humanoid robots which didn't have a head but had everything else. So it's another super interesting thing to think about: how do you do the processing? We have recently made a lot of advancements in AI technology, but that requires a very specific type of chip. It's not that only NVIDIA chips can do it, but they are particularly good at it, and those chips are quite expensive, power-hungry, space-hungry, and heat-generating. So with all of these questions and constraints, how will companies solve those problems in the coming years? It's an interesting space to tune in to every now and then.

Amit Sarkar:

Definitely. And you've raised a very interesting point: you need a lot of processing power, and you can't put all of that processing power on the robot itself, so it will need to connect to some kind of network where it can talk to a central server and get a response. Look at ChatGPT. ChatGPT is an AI tool that has scanned through thousands of apps and millions of websites, and through machine learning it has gained some kind of insight, so now it can predict the next word that should come after a given word. Say I say "hi"; normally what comes after "hi" is "how are you", so that's what it predicts, based on the billions of pieces of text it has gone through. Now imagine you have all that ability, but it's still a very specific problem: going through text and predicting the next word. It's only doing one thing, generating words. It's not generating sound; there is a different AI tool for generating images, and different AI tools for generating videos, but they're all software. There is nothing hardware about them. They're still not able to hold a screwdriver or a drill. You could say that's not needed, but it is needed, because we live in a physical world. Can they drive a car? ChatGPT can't drive a car, so we again need a specific robot to solve that problem. Tesla, with its cars, is going for autonomous, full self-driving, but that again solves a very specific problem. Can it now go on a hike? Again, a different problem. The number of problems we have solved as human beings is immense, and they have still not been touched by robots. I think we are at a very, very early stage of what robots can do. The world is now obsessed with software, but the software has to go somewhere and do something, and for that you need hardware, and that hardware has to move through the environment, withstand heat and cold, power itself, do the processing on board if there's no internet connection, and so on. Thinking out loud, maybe Starlink becomes the internet backbone on which the Tesla robots work, connect, and talk to each other while solving problems by looking at their surroundings. And once one robot learns something, all the other robots have learned the same thing, so you don't have to train them again. With that mechanism it could work out really well. But yeah, that's just a thought…
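A toy illustration of the "predict the next word" idea Amit mentions. Real models like ChatGPT use neural networks trained on vast amounts of text; this sketch just counts which word most often follows another in a tiny made-up corpus.

```python
from collections import Counter, defaultdict

# Tiny made-up corpus; a real model would be trained on billions of words.
corpus = "hi how are you . hi how is it going . hi there how are you".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word`, or None if unseen."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("hi"))    # -> "how"
print(predict_next("are"))   # -> "you"
```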

Rinat Malik:

Yeah, maybe working out a little too well, to be honest. Because so far, in my head, I was thinking about robots from different manufacturers, different companies, doing different things, all living in an ecosystem like people from different countries with different preferences, living in harmony in our society. But now that you've mentioned a globally interconnected network of an army of robots, that brings a bit of concern: if it falls into the wrong hands, and at the same time AI technology improves to the degree where it can come up with its own objectives, then all of this could become a serious concern. It feels like it could be a science fiction movie; I won't mention which one, but one of them for sure.

Amit Sarkar:

No, I think we understand AI very little. Whatever AI progress we are seeing now is very much software-related; very little of it is hardware-related, and in hardware there are very few companies, because it requires far more investment and there are far more variables to handle. Just walking from, say, your door to the garden: you have to step onto new terrain, the floor might change from carpet to wood, you have to open a door, you have to look through a glass pane, and you might walk into the glass because you can't differentiate between the glass and the real scene behind it. So there's machine learning and machine vision involved, and a lot of variables a robot has to handle just to go from the door to the garden. You know, with the Boston Dynamics videos we've seen, if you look at the behind-the-scenes footage, they first build a setup, and then they train the robot to go through that setup, and the robot learns by going through it again and again and again. Once it has mastered it, then it works autonomously. But that training still requires the robot to go through the course repeatedly, and that's where machine learning and other techniques might help. Still, this is a hardware challenge, and for a hardware challenge you have to train it through hardware. You cannot simply teach the robot to balance a certain way; maybe you can think about all the variables, but we don't do that in our heads. It's all instinctive.
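A very rough sketch of the "run the same setup again and again" training idea Amit describes from the Boston Dynamics behind-the-scenes footage. There is no real physics here; a single made-up policy parameter is nudged after each failed attempt until a toy scoring function counts the course as mastered.

```python
import random

def run_course(policy_strength):
    """Stand-in for one attempt at an obstacle course; higher score is better."""
    return policy_strength + random.uniform(-0.1, 0.1)

policy_strength = 0.0
for attempt in range(1, 201):
    score = run_course(policy_strength)
    if score >= 0.9:                      # course "mastered" in this toy model
        print(f"mastered after {attempt} attempts")
        break
    policy_strength += 0.01               # failed: adjust slightly and try again
else:
    print("still learning after 200 attempts")
```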

Rinat Malik:

Yes, yes, absolutely. That general intelligence is far away. That is far away.

Amit Sarkar:

Specific problems, yes, I think we are getting there. But general intelligence is very, very far away.

Rinat Malik:

Very far away. And ChatGPT, by the way, I think we talked about it in an earlier episode: ChatGPT doesn't have general intelligence. It may appear as if it has it, but it doesn't at all. It's just telling you what the most likely response to a sentence or a few sentences should be, with no understanding of what it's doing. But eventually, when we do come closer and closer to general intelligence, and at the same time become more technologically advanced with humanoid robots, it will be very interesting to see what happens when both of these things are combined. As you rightly said, carrying that processing power on board the robot's structure is actually quite an unlikely and unrealistic scenario, so it would be connecting to the internet for two-way communication, sending the problem up and bringing back the output of what it should do. It would be quite interesting to interact with such a creature, if you like.

Amit Sarkar:

I think Elon Musk recently said in one of his interviews, maybe with the Wall Street Journal, maybe someone else, I can't remember right now, and maybe we'll share that video here because it was a very fascinating interview, that the AI being developed by Tesla is maybe 10 or 15 ahead of OpenAI's ChatGPT and any other AI tool, because they are dealing with hardware variables, environmental variables. They are not just looking at software; software you can do programmatically, and we have already automated a lot of that. Hardware is where the real challenge is, where you have to interact with things you have no control over. You don't have control over when an earthquake will happen, or when a tsunami will happen, or what the temperature will be outside the next day. You can't predict when it will rain, and you can't predict what the terrain will be wherever you go, especially in unexplored territory.

Rinat Malik:

So yeah, those are challenges, but at the same time we should also remember that they will have a lot more senses, and more powerful ones. Humans have five senses, including the tactile one, but robots can have so many different kinds of sensors. Even with vision, we only see the visible light spectrum, but they will be able to see from ultraviolet to infrared and beyond, maybe microwaves as well. So their experience of their surroundings will be vastly different from ours. Yes, as you said, they might not be able to predict an earthquake, but they could potentially be alerted, you know, five minutes beforehand, because they're going to be connected to the internet and to the nearby Richter-scale readings. So they will have a lot more advantage in terms of information gathering. Then, how to process that information and make the right decision, that's a challenge that…

Amit Sarkar:

That's exactly the challenge, because let me give you an example. We recently had a Harvey water softener installed, and it softens the water in the house, because hard water leaves limescale over all the fittings that use water. We also have a drinking water tap, and that requires a separate filter, because you shouldn't be drinking softened water, and that filter has to be changed every 12 months. So I ordered a filter and tried to install it myself by following some YouTube videos. Unfortunately, the video I found didn't match my setup. So now I have a very specific hardware problem: I have to change the filter. I changed the filter, and the water started leaking. I called the support staff, and they couldn't help me, because of course they can't see what the issue is. Now, in a similar situation, what would a humanoid do? It would have to work out what the problem is. The problem is: I tried to fit a filter and it started leaking, so I have to look at all the possible options. I turned off the tap and stopped the water supply, checked whether I was fitting it incorrectly, looked at the filter itself, and tried to figure out how to solve the problem. The support people said it would take about two to three weeks to get an engineer out, and it would cost me 160 pounds plus VAT, just to fix a filter. I thought to myself: this filter looks really easy, I'm doing something wrong, I just have to figure out what it is. Now, this is a very, very specific scenario. I've never been trained to do anything like that, and I'm confronted with it: how do I solve it? The first thing I did was take a video of myself trying to fit the filter. Then I tried to work out the angle, the direction I was looking from, and whether it was right. Then I looked at what's on top of the filter and where it has to go. What had happened was that when I removed the old filter, the two pipes that carry the water into the filter had moved; I just had to turn them, fit the filter, and it worked. It's a very specific problem that you can't train anyone for; you encounter it in the real world. Now imagine a robot has to do the same kind of thing, something very specific. Suppose it can detect that an earthquake has happened; now it has to handle a situation it has never encountered. That's general intelligence: how do you train a humanoid robot to handle a situation it has never encountered? You can train it on anything you already know, but the unknown is where the intelligence comes in.

Rinat Malik:

Absolutely, absolutely. One thing I would pick up on from that story, an aspect you may not have thought about, is the incentive: you tried again because of the associated cost. Now, when would a humanoid try something again after failing the first time if it has less than 10% battery? Would it try harder? If it has no internet connection, would it keep trying to re-establish the connection so it can do the job, or would it move on to an… So the incentives that apply to us would be different for them; they will be incentivised differently. They might not care that they only have 5% battery, because we may programme them so that they don't have to care if they "die", they just keep doing the job as long as they can. And that's a scenario where we might actually have a conflict with what a generally intelligent robot would want, because it may not want to carry on.
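A toy sketch of the incentive question Rinat raises: should a robot retry a failed task when its battery is low? The thresholds, priorities, and behaviour are all invented just to make the trade-off concrete.

```python
def should_retry(battery_pct, attempts, task_priority):
    """Retry unless the battery is critical, with a higher bar as charge drops."""
    if battery_pct < 5:
        # "Keep doing the job until you can" only applies to critical tasks here.
        return task_priority == "critical"
    if battery_pct < 10:
        return attempts < 2
    return attempts < 5

print(should_retry(battery_pct=8, attempts=1, task_priority="normal"))    # True
print(should_retry(battery_pct=4, attempts=1, task_priority="normal"))    # False
print(should_retry(battery_pct=4, attempts=3, task_priority="critical"))  # True
```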

Amit Sarkar:

Yeah, exactly. I think that's where people get excited, and also where they misunderstand AI. I think AGI is very, very far ahead in the future, because…

Rinat Malik:

To have wants and needs, incentives like that, that's… no way…

Amit Sarkar:

…something out of the unknown. Like, if you don't know about something… I read somewhere, or heard on some podcast, someone say: I will be worried when AI can do something it has not been trained on. If you have never given an AI any text about religion and it still comes up with the concept of God, that's worrying. But if you expose it to such text, or some kind of…

Rinat Malik:

Many, many texts…

Amit Sarkar:

Then yes, that's fine, because you've trained it on that data. But if it has never encountered that data, how can it come up with the concept? I'm not talking about writing code; you can write code by connecting the dots once the concept of code itself exists. But if you've never been given the concept of code, the idea that you can create a programme to do something, or never been given the concept of religion and of God, if that whole concept isn't there, then it's very difficult. Science comes from correlation: you correlate, you predict, you do experiments, and you figure things out. How would Einstein have thought about space-time, about time travel or whatever he thought about? It's very difficult to imagine things you have not been exposed to, but we have a specific way of doing experiments and figuring out things we have never seen or heard, things where we can't trust our senses. We can hardly believe that quantum computing can actually exist, that superposition happens; there are so many things where our senses tell us one thing and it's all wrong, and yet we figure them out. That's a very difficult problem to solve.

Rinat Malik:

Yes, absolutely. And, you know, I'm anxiously and cautiously waiting for what happens next, and definitely tuning in and watching this space. Hopefully our audience will also be watching this space and tuning in with us, so we can share this journey together. This has actually been quite an interesting conversation, Amit, thank you for that, and hopefully the audience enjoyed it very much too. We look forward to seeing you all in our next episode. Thank you all for listening.

Amit Sarkar:

Thanks. Thanks, everyone, thanks for tuning in. And yeah, it was a good conversation. Thank you very much, Rinat. Bye.
