Beyond Buzzwords: The Truth About AI
Episode 37 • 10th January 2024 • Razorwire Cyber Security • Razorthorn Security
Duration: 01:06:38


Shownotes

Hey there, Razorwire listener! In this episode, we welcome back cybersecurity experts Richard Cassidy and Oliver Rochford to follow up on our AI podcast back in November. Join us for spirited debates on the current state of AI capabilities, their imminent impacts on society and business, and thought-provoking speculation on the future of AI and its existential promise and perils.

We tackle AI topics ranging from innovations like large language models to the role of quantum computing, governance challenges and regulatory responses, workforce disruptions, and the potential for artificial general intelligence. You'll come away with an insider's perspective on AI progress and get beyond the hype to understand real-world limitations and applications.

From actionable business advice to philosophical discussions on the human condition, the Razorwire podcast offers incredible insights from industry veterans Oliver and Richard. Learn about investments, cybersecurity issues, ethical considerations, the AI "arms race," and transhumanist ideals spanning neural implants to robot bodies.


Whether you're making strategic decisions in your company, tracking public policy issues, or just want to sound informed on emerging tech, the Razorwire podcast delivers the context and perspectives needed to evaluate AI's present impact and future potential with wisdom. Tune in for enlightening analysis you won't get from sensationalised media reports. Every episode offers rare clarity to think smarter about technological forces shaping society.


"I don’t believe we know humanity is not ready for AGI. We haven’t evolved in the way that we think, and as I said, our colloquial, war-minded economics today to actually even have AGI benefit the planet."

Richard Cassidy


Listen to this episode on your favourite podcasting platform: https://razorwire.captivate.fm/listen


In this episode, we covered the following topics:


  • AI Development Accessibility: The current challenges of developing and accessing AI technology.
  • The Future of Artificial General Intelligence (AGI): We delve into the future of AGI and its potential impact on society.
  • Ethical and Existential Concerns: AI's potential implications for society, humanity, and the labour force raise ethical and existential concerns.
  • Business Responsibility: Business leaders are responsible for managing AI technology and should view it as augmenting the workforce.
  • AI for Global Solutions: AI technology has the potential to address serious global problems if used responsibly.
  • Advancements in Human Health: Some advocate for the use of AI to develop new technologies to improve human health and capabilities.
  • Lack of Global Legislation for AI: Concerns are raised about the lack of global legislation for AI and its potential implications for businesses.
  • AI in Military and Autonomous Robots: We discuss the potential implications and ethical concerns of AI technology for building autonomous robots and weapons.
  • AI Regulation and Consequences: We explore the fears surrounding AI regulation and the potential consequences of regulating the technology.


Resources Mentioned

Moore's Law

Neuralink

Fermi's Paradox

GPT

LLM-based products


Other episodes you'll enjoy


The Use Of AI In Cybersecurity: Consultants Roundtable

https://www.razorthorn.com/the-use-of-ai-in-cybersecurity-consultants-roundtable/


Lessons from an InfoSec Icon: A Fireside Chat with PCI Guru Jeff Hall

https://www.razorthorn.com/lessons-from-an-infosec-icon-a-fireside-chat-with-pci-guru-jeff-hall/


Connect with your host James Rees


Hello, I am James Rees, the host of the Razorwire podcast. This podcast brings you insights from leading cyber security professionals who dedicate their careers to making a hacker’s life that much more difficult.


Our guests bring you experience and expertise from a range of disciplines and from different career stages. We give you various viewpoints for improving your cyber security – from seasoned professionals with years of experience, triumphs and lessons learned under their belt, to those in relatively early stages of their careers offering fresh eyes and new insights.

With new episodes every other Wednesday, Razorwire is a podcast for cyber security enthusiasts and professionals providing insights, news and fresh ideas on protecting your organisation from hackers.

For more information about us or if you have any questions you would like us to discuss email podcast@razorthorn.com.

If you need consultation, visit www.razorthorn.com. We give our clients a personalised, integrated approach to information security, driven by our belief in quality and discretion.


Linkedin: Razorthorn Security

Youtube: Razorthorn Security

Twitter:   @RazorThornLTD

Website: www.razorthorn.com


Loved this episode? Leave us a review and rating here


All rights reserved. © Razorthorn Security LTD 2024



This podcast uses the following third-party services for analysis:

Chartable - https://chartable.com/privacy

Transcripts

Jim [:

Hello and welcome to the latest edition of Razorwire. Today we have a really good topic: we are returning to the subject matter of artificial intelligence. We've done this before with these particular two individuals, Oliver and Richard, who I've got back so we can have a really detailed look at where we think things are at the moment and where we think things are going now. This is going to be quite a spirited debate. We had a brief discussion about it before we got on this particular podcast, and wow, I think we're probably going to hear very different ideas of where things are going and the application of artificial intelligence, as well as what we feel about the legislation from the US and the UK and so on and so forth. So please stay tuned. It's going to be an interesting one. Welcome to the Razorwire podcast, where we discuss all things in the information security and cybersecurity world, from current events and trends through to commentary from experts in the field, providing vital advisory on what it is to work in the information security and cybersecurity space. So here today, to discuss our return to AI and where we are and what's going on, and there's a lot going on at the moment, as I think most of you are probably aware, I have the fantastic Oliver Rochford and Richard Cassidy returning to discuss the matter of artificial intelligence. Just to remind everybody, not that they haven't seen you on multiple videos or podcast recordings, do you just want to kind of remind them who you are? Should we start with Richard today?

Richard Cassidy [:

Of course. Look, great to be back. And I think it's great timing to follow up on our initial podcast around AI. So I'm Richard Cassidy, for those that haven't heard me before. I'm currently working as a field CISO for a major backup, recovery and cyber resilience vendor, but I've actually been in the industry for about 23 years now, really heavily cybersecurity focused, doing a lot of work with customers and C-level suites, et cetera, ensuring that they understand what they're doing and why they're doing it. And in the ad infinitum war that is the front lines of cybersecurity, my job is to kind of help shed some light and make it a little bit more business oriented. So yeah, that's my background and who I am.

Jim [:

And now on to Oliver, a returning co-host who has spent plenty of time in this field and considering this field.

Oliver Rochford [:

Oliver, former Gartner analyst, as you mentioned. I've worked in leadership roles in a couple of data science teams and I'm currently advising a couple of companies, Peco Security, Arcana AI and Alpha level, who are doing that kind of work. And I also run a newsletter called...

Jim [:

The Curious AI newsletter, which you've recently gotten quite a significant number of people signing up to, have you not?

Oliver Rochford [:

I have. Considering it's LinkedIn, I think I'm about one shy of 1000.

Jim [:

Right. So maybe we can help you get that extra one, or another thousand. Fantastic. No, absolute pleasure to have you guys back on. The subject of AI has been pretty controversial for a while now. Some people like it, some people don't. You've got the whole argument about where we are in the development cycle of AI versus where people think it's going to be, where you've got some people who think it's going to start building robots and kill us all, and you've got another group of people who think that all their jobs are going to disappear. We even had the actors' and screenwriters' guilds go on a big old strike, of which AI was one of the key concerns that they had, that they were going to get replaced. We've seen a lot of different kinds of scenarios. I mean, OpenAI had all that trouble with the CEO getting booted and then being brought back. And the US and the UK have released their AI guidance around security, which is predominantly what we're here to be concerned about: the whole security of AI and its usage. The EU has got its own act, and I'm sure we're probably going to see quite a bit of other legislation coming out. And it does make me laugh, actually, because quite often with a lot of the legislation I read, I sit there and think, did you actually get any specialists in the subject to come and think about this with you, or did you get Dave from around the corner? But that's my own view on it. I mean, for those out there watching, the question is: okay, well, where are we? Where actually are we in the grand scheme of things?

Richard Cassidy [:

Well, look, let me jump in there. I've been doing a lot of reading and listening on this topic and on what the stages of AI development are. Some will tell you there's ten, some will say there's seven. I think, largely academically, there are three relevant stages, right: artificial narrow intelligence, which we are at today and have been at for some time, and then where we're trying to get to, albeit I think we need to be careful about what that looks like, and we will cover that in this part of the podcast. The next stage is artificial general intelligence, AGI. And that's kind of what the industry is saying happened around the OpenAI Q* breakthrough, that it was potentially an AGI capability, and the board got worried by it.

Jim [:

Right.

Richard Cassidy [:

And what actually has gone on there, there's still a lot of data that we need to understand. But then if you go on from AGI, the next stage of development is artificial superintelligence. And I think it goes beyond that, by the way; superintelligence can become greater than that as well, and some research papers suggest things like cosmic AI and godlike AI. I think what you have to understand is we don't know the universe in its entirety, and we really have no idea of the theories yet to be discovered and the patterns yet to be understood. And so it is very feasible that AI will be a tool that takes humanity to that level of understanding. And by doing that, there is a risk that it may look at humanity as potentially a blockage to its own survival at some point, and we may be jumping the gun a little bit in this discussion. But I do think there are some significant risks that we have to talk about at our current AI development stage. I still think we're at the stage, and some would disagree with me, where we have the ability to control how it operates and to take an ethical AI approach. And that kind of leads to the question: where does AI get most of its funding and resources in terms of development today? There's absolutely no doubt it's in the military world, right? Because whatever anybody tells you, we've never got out of wartime economics as a human race, right? From the very first humans that walked through the caves to where we are today, it still is a very parochial, siloed world that we live in. We always want to outcompete other countries, our neighbors, other neighborhoods, other cities, and be better than the next person. And in the military, that's survival. It's about being first to market with whatever systems and autonomous weapons you're trying to build. And so I think that's really where we need to be very careful about moving from this narrow artificial intelligence to artificial general intelligence. We've got to be very careful about where we want general intelligence to help provide benefits and to help humanity, versus where we want to control it and potentially not have it look into assets or avenues that could be an existential risk to us.

Oliver Rochford [:

Interesting that you mentioned the next stages, because in reality, what you're seeing in the news already is that we're starting to see destabilization way before we're even anywhere near AGI. Purely and simply because you don't need that level of intelligence to fool people into believing something is sentient. We have to differentiate here: intelligent does not mean conscious or sentient. They're two different things. Right now, it's like running a calculator. There's no there there. And consciousness doesn't just magically appear. I'm sorry, it doesn't. That's an engineerist mindset, from people who don't understand enough about cognitive science, but that's not the way it actually works. We have no current theory of the mind, and whether the hardware is relevant is another completely different question. Nevertheless, it's a form of intelligence. That, I think, might be a better analogy: it's a new type of intelligence that's being created here. The dangers, the risks, they're already occurring. If you look at the impact on the job market, it's slight right now, but it will increase. There are surveys coming out, surveys saying things like 42% of chief marketing officers expect to be able to reduce headcount next year. That's one survey which I just saw last week. There are certain jobs, legal professions, anything where you're reviewing documents, where this is going to hit hard. I personally believe this is actually going to be something where certain geographies are going to get hit harder. If you think of the data entry work, the customer services work that we actually outsource to some countries, that's going to be something that we're going to be automating very quickly. And I think the other risk, for end users, is that you're going to be interacting with these systems whether they're production ready or not. You're going to be logging onto your online banking, and that's what you're going to have to get through, because we know they're not perfect; they're not able to understand everything, if 'understand' is even the correct word. So I think there's a whole bunch of imminent problems that need to be solved first. But most of all, it's the economics, man. If you look at the CO2 footprint just of training a model, if you look at how much water is being used generating an image, if you look at the compute cost, which is essentially electricity usage, we need to tame that in a very real way. Otherwise the actual business applications are going to be quite narrow, because they're not really just defined by technological capability, they have to provide ROI. Right. And right now we have this experimental phase. There's this great paper by Andreessen Horowitz where they're saying that 80% of their generative AI investments, 80% of that money, is going into compute. So basically this is going straight into Amazon credits. And when you read about Amazon investing in a startup, what's often in the small print is that what they're actually doing is donating compute for equity. So that is a currency, and that is going to be the limiting factor right now. So that's another thing that we're going to have to fix. And the regulation, because unemployed voters are not happy voters, and I think politicians are going to take that to heart no matter what other pressures are going on. Which brings us down to the other big story going on that we all saw at OpenAI. I don't think it was the main driver, but you saw two different philosophies at work there.
What's called the effective altruists and the e/acc movement, who are basically effective accelerationists. Both believe that AGI is imminent. Where they actually disagree is on how we're going to deal with that. One says we need to manage this, make sure it gets used for mankind's good. The other says, well, we just need to accelerate it and just let it go. And so we definitely saw those tensions, I would say, play out there. Although if you actually dig into it, that wasn't really the main driver. It's not this big conspiracy where it's a bunch of people, like, plotting future history, but there are these camps. So it's definitely something which gives you an insight into the mindset here, where we actually are, when people who are in the know are actually debating at that level. So I think you're right, Richard. There are some imminent risks, but I believe there are more obvious risks which we're going to have to work through first.

Richard Cassidy [:

Yeah, I wanted to follow up on a point around intelligence, because it's a really good area of conversation there, right? I mean, gosh, intelligence in the context of AI, there are so many perspectives to it. Oliver, as you well know, being a data scientist and working in AI disciplines of various kinds, we've got computational intelligence, learning and adaptation, narrow versus general, the contextual understanding and reasoning we talked about, and then ethical and emotional intelligence, which is where AI significantly differs from humanity, in that we don't believe it's possible, at the moment at least, for AI to possess emotions, empathy or moral understanding.

Jim [:

Right.

Richard Cassidy [:

Because at the end of the day, it's code and mathematics that govern how it operates. But interestingly, right, if we look at the intelligence of the OpenAI platform, ChatGPT-4, as it relates to IQ: ChatGPT-4, as of last month or whatever it was, I think it was as of October, has a verbal IQ of about 155. Now, think about the average IQ in the world. That's an interesting figure. But what's more interesting, again, I was listening to a podcast with Mo Gawdat, the ex-Google chap who's pretty much telling the world that AI is bad and we're all doomed, and who's going around doing speeches about how we need to be ethical in our AI approach and this, that and the other. His estimation, at least from the research he's done, reckons that we're looking at something 1,000 times smarter than today, I think by 2026 or 2027, in terms of intelligence, which is just incredible in terms of where it's going. And I think that's the bit of understanding that we don't really have a grasp on: what does that level of intelligence enable in AI? And Google has a good view on it, too. If you think about what ChatGPT is doing, they're staying in a particular area of AI development. But Google believes that AI should be an amalgamation of multiple different types of AI approaches, which is where you kind of do cross that precipice from the narrow, focused AI that we have today to the AGI that's kind of imminent. And I think that's the interesting part. It's that final section on ethics and morality. What are we going to be able to do? How are we going to be able to embed that capability, that understanding, into AGI, so that it is something that we can work alongside? And I don't have the answer to the question, but it's a question the industry has to answer somehow, or else I think we're about to...

Oliver Rochford [:

We mustn't pretend we're going to have AGI in a couple of years. Right, I'm sorry, that's not borne out by what we have right now. LLMs are not AGI. They don't understand. There's nothing there that actually has any reasoning capability. I think these are problems which need to be answered. But in reality, if you think of people actually doing something actionable right now, what can the average person do about this? And even before it exists, it's just a philosophical discussion, like the e/acc people have, because I am still skeptical whether we're going to see AGI in the next five to ten years. And that's a hypothetical discussion. It's nice, don't get me wrong, it is a nice discussion, and it's been held since the 50s, by the way, the same discussion, possibly a bit longer if you go into science fiction. But right now, the question is rather: what impact is this having on society? What impact is this having on jobs? What impact is this having on cybersecurity? Not the AGI stuff, but already what we're seeing now, just the LLMs, just that level of sophistication. And if you dig into that very deeply, you're already seeing that it's quite disruptive, right? From people having digital girlfriends or boyfriends which are being switched off overnight because the company is folding, and they're having basically emotional trauma because of it. Over the holiday season, what we've seen is an uptick in criminals basically using it to generate fraudulent product ads so that they can scale up the actual scam business of doing fake orders and so on. Right? We're seeing people being displaced, not necessarily on a massive scale because of AI, but we have seen on the order of thousands of people, including in official US jobs reports; for the first time ever, AI was actually listed as a cause of job losses in the thousands. More importantly, it's going to depress wages. That's going to be the imminent factor, right? And there are actually economic reports coming out about it depressing wages. This is way before we're even talking about AGI. And if we have 5, 10, 15, 20 years of iterative improvement before we hit AGI, think of the impact on society in other areas and what that impact will have on the development of AGI as well. Linear projection is never a good idea. Every single incremental step here is going to have second- and third-order consequences which will impact that timeline itself, including resistance, regulatory scrutiny and so on. So I think this is far more complex. That's why we talk about the singularity, why you can't predict the future beyond a certain point. The impact is so massive across so many areas, it's hard to extrapolate a single area. You can't untangle these. What you can do is try to look at the next 6 to 18 months, maybe 36 months, and you can talk about what's going to be the near impact before that. And I have to correct you, I'm not a data scientist, I'm an engineer. I just work with data science teams. I do want to clarify that, because data scientists are far smarter than I am, honestly, across many, many areas. But you know what I mean, Richard, because I love the AGI discussion, don't get me wrong, over a whiskey, but we could go so broad across it because the impact is massive. But what I'm really intrigued by is the stuff I'm already seeing now. Just the idea of AGI is causing regulators to act. Have you ever seen that before? That regulators are acting on something that may occur?

Richard Cassidy [:

I agree with you. I mean, look, we're going to split hairs on this AGI timeline. 2070 seems to be the timeline that I've read; if you look at three sources for verification, 2070 is the year. But let's deal with the facts, right? AI has left Moore's law behind, right? And we know what Moore's law is, right? Predicting that processing speed and power double every two years. Right. The most cutting-edge AI systems today, and this is something you can see reports on, are 5 billion times more powerful than those of a decade ago.

Oliver Rochford [:

Okay?

Richard Cassidy [:

And we are accelerating the level of innovation in AI. I genuinely, and I'm happy to be proven wrong, I would like to be, because I think we need as much time as possible to manage AGI, or whatever it will become. I hope it's 2070, but I truly believe it's going to be a lot earlier than that. But Oliver's point is absolutely right: all that is hypothetical. What is really important is the reality of the here and now. And he's absolutely right, it is changing the game in terms of employment, for a start, which is interesting in and of itself, because what does that mean for the people that are affected? Does it mean that that's it, they're going to end up on state benefits for however long? Or does that free them up to go and do other things? Should they leverage AI to find new ways to innovate and make money? I mean, I think the answer to that is yes, there are some people that will, and there are those that won't. But it's where we focus the use of AI that I think is critical. And right now, more than anything, we need it in areas like medicine and mathematical research. But that's just my opinion.

Jim [:

Wow.

Oliver Rochford [:

Very interesting that you mentioned medical research. In the newsletter, one of the things I try to do, because right now you have one side of people telling you this is going to be AGI, it is imminent, and you have another group of people who are saying, right, this is the next crypto, it's all hype. And I tend to be a bit in the middle. I've seen people do amazing stuff with this. I use it to craft the newsletter, literally. I've built a template, I have a prompt where it actually does all of the summarizing, everything automated. For me, it's fantastic. If you look behind me, I have all of these tabletop role playing games; you can use it to basically do solo gaming and all kinds of stuff which was impossible before. At the same time, though, obviously there are limitations, right? If you think of what LLMs bring to the table, the best typification I've heard is from Magnus Revang, an ex-Gartner colleague of mine. He said that this is going to kickstart a whole bunch of other areas of AI again which had stalled because they had problems with data. And I'll give you an example. If you want to do data classification, you have a log come in and you want to say, which solution is this log from? An LLM will solve that like that. You want to translate that log into a different format? An LLM does it like that. So data transformation, data integration, to a degree, has been solved. And the best thing is you don't need to run it every time. You only need to run it once. You only need to use it to write a parser. You would only generate a new parser if the format changes, right? And so it's going to basically solve all of these back end problems. But at this point, you're not talking about a product that you're going to hand over to an end user; this is going to be used to build products. And that's one of the amazing things. The other area, though, the end user facing one, is natural language input, right? These chat bots, which were quite simple and structured before, are going to be a lot smarter. And from a security point of view, if you look at the copilots, that's what everyone's building, right? Rather than having to do a SQL query or a Splunk query, you can now say, hey, show me my vulnerable hosts in network A. To me, this is already pretty awesome, right? We're not even anywhere near AGI and this is already solving problems. If you wanted to solve this from a coding point of view, if you've ever built parsers, signatures: I worked for companies where we had teams of 120 people doing this manually, and now you can put it into an LLM and it's just basically going to spit it out. That's going to have an impact on us, on the skills shortage, it's going to help alleviate it. It's going to change the face of detection engineering. It's going to change how vulnerability research is conducted, threat intelligence management. It's going to have a huge impact on us imminently, within the next one, two, three years.
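
As a minimal sketch of the "use the LLM once to write a parser, then reuse it" idea Oliver describes above: the snippet assumes the OpenAI Python SDK and an API key in the environment, and the model choice, prompt wording and sample log line are illustrative rather than anything referenced in the episode.

import re
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SAMPLE_LOG = '203.0.113.7 - - [10/Jan/2024:13:55:36 +0000] "GET /login HTTP/1.1" 200 2326'

# One-off call: ask the model to emit a named-group regex for this log format.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "system",
         "content": "You write Python regular expressions. Reply with the regex only, no code fences."},
        {"role": "user",
         "content": "Write one regex with named groups ip, timestamp, method, path, status "
                    f"and size that parses web server log lines like:\n{SAMPLE_LOG}"},
    ],
)

# Tolerate a model that wraps its answer in backticks anyway.
raw = response.choices[0].message.content.strip().strip("`").removeprefix("python").strip()
pattern = re.compile(raw)

# From here on there are no further LLM calls: the generated parser runs locally on every event.
def parse(line: str) -> dict | None:
    match = pattern.match(line)
    return match.groupdict() if match else None

print(parse(SAMPLE_LOG))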

Richard Cassidy [:

Well, Oliver, do you think, because this is an interesting point, right, the business adoption of AI is the critical point here, at least now, in the next three to five years. I'll tell you what I'm seeing in a minute. But are you seeing the C-level suites and boards that you've interacted with and engaged with through your various roles actively looking to adopt AI as a means to reduce workforce costs, or something else? What are you seeing?

Oliver Rochford [:

I mainly advise startup vendors, right, early stage startup vendors, and a lot of them are trying to work out, where do I get the budget from? When you have teams which are already under resourced, you can't go in and say to a SOC manager, I'm going to shrink your team. That's the worst thing you could say to a SOC manager. He's going to show you the door.

Jim [:

Right.

Oliver Rochford [:

So it's about actually saying that there are certain tasks you're not going to have to do manually anymore. And that means that that role, that set of things your guys are doing, is hopefully going to be shifting up the stack to the more important, the more valuable stuff where you still need a human to do the reasoning. Right? C-level people right now, I think they see it as something which is going to help them alleviate a shortage. But here's the problem: where do you get the budget from? Is this going to come from the SIEM budget? Is this going to come from the SOAR budget, the vulnerability assessment budget? Because there isn't going to be a magic new budget for AI. They're looking for cost savings, cost reductions. And if you look at the big guys, that's the reason why Microsoft and co have been very reluctant to put a large price ticket on this, and why some of the providers are actually running at a loss, because it's hard to quantify in the moment the benefit you're getting out of it, unless you talk about saving hours or, you know, improving mean time to response or something like that. And I haven't heard very much from the field there, from people actually actively using it, because Microsoft Copilot for Security is an early release. A few companies have it in their hands. It's not widely available yet, so it's hard to say.

Richard Cassidy [:

I think my view is slightly different, or say my experience of conversations is. So I agree with you, Oliver. There's definitely a want to enable and hyper-enable certain teams to work smarter and faster and more competitively, and LLMs and natural language interfaces absolutely enable that. An interesting side note, but I showed my 15 year old son how to create his own custom GPT. He did it for his school project: he uploaded all the documents that he would need to reference and research for a project that should have taken four weeks, and using his own GPT, he completed it within a couple of hours. I'm not advocating that that's how you should learn and progress in academia, but it just goes to show you that even children are leveraging AI in ways that we didn't really consider they would. But the boards I'm talking to are really sitting on the fence because of a lack of something we talked about earlier on, which is legislation. We really still have a patchwork approach, in my opinion. I mean, certainly the AI Act in the European Union is a great step forward, and you've got things happening in Beijing, but there isn't really a global consortium just yet. There's talk of having one, and I think that's where we need to get to as an industry. And I think a lot of businesses are worried that if they embed too much AI capability as it stands today, they may fall foul of legislation maybe in the next year or two. So I think a lot are just putting things on ice and approaching it slowly till they understand what the industry is going to accept and not accept. And my final point on the legislation bit is around the military applications, okay? For me, this is something that's really close to my heart in terms of a challenge. Whilst we're seeing rapid advancement in AI legislation in the commercial world, the state of AI legislation in the military is really ambiguous, and we don't have any real specific frameworks, at least that I've seen in the last couple of weeks of research, that call this out. And so I think there's a big risk there. And the industries, the governments, the world, the users really need to put pressure to make sure that we're putting the same level of controls on military applications as much as we are on commercial, medical, et cetera, et cetera.
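
As a minimal sketch of the "custom GPT over your own uploaded documents" idea in the anecdote above: the snippet assumes the OpenAI Python SDK and an API key in the environment; the folder name, model choice and question are illustrative, and a real build would chunk and embed the documents rather than stuffing them all into one prompt.

from pathlib import Path
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Load the reference documents (plain-text files in a local folder).
docs = {p.name: p.read_text(encoding="utf-8") for p in Path("project_docs").glob("*.txt")}

# Put the documents into the context and ask a question grounded only in them.
context = "\n\n".join(f"### {name}\n{text}" for name, text in docs.items())
question = "Summarise the three main arguments made across these documents."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "system",
         "content": "Answer only from the provided documents and cite the document name."},
        {"role": "user", "content": f"{context}\n\nQuestion: {question}"},
    ],
)
print(response.choices[0].message.content)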

Oliver Rochford [:

There were actually some articles last week on autonomous war bots and military drone usage. My newsletter had a section on it, and there's this great book by Kenneth Payne called I, Warbot, which talks about the history and the whole strategic thinking around autonomous weapon systems, which I can really recommend if you want a primer and to bring that up to date. You're right, it's a harsh topic. But what was interesting is what you said around the guidance. I've spoken to a couple of startups who are trying to build LLM-based products, and they're struggling to find design partners, because the design partners are saying, right now we have a blanket policy of no LLMs, because they don't have that guidance.

Jim [:

And I want to pull back a little bit to some of the stuff that was mentioned quite close to the beginning. And I am going to admit this on this podcast: I myself am a singularitarian, as you would term it, I think. I'm actually looking forward to it. I'm looking forward to the technological singularity. I'm looking forward to the applications of it. I'm looking forward to utilizing it or interacting with it, because I think there are going to be some rather significant benefits for people who are willing to do so. But my problem is with this current phase of development of AI, because Richard mentioned earlier, you know, we need to look at the moralities and ethics and all the rest of it now. The moralities and ethics of people in the west and the moralities and ethics of people in the east or in the south or in the north, or whichever direction you look at, are going to be intrinsically different. And my problem, as a singularitarian, is I don't trust people. I work in the infosec field, of course I don't trust people. I trust a lot of people, I'd like to point out, but I also have seen the darker side of humanity and how that can play out in utilizing this kind of technology. And we've mentioned it before on the channel, where we debated what would happen when we get access to good AI and what happens when the bad guys get access to it as well. You're going to wind up with this giant fight between two sets of AI environments, and depending upon where we are in the whole AI development field, it could get quite gnarly quite quickly. I think we've reached a point in AI development where it has scared the willies out of groups of people, specifically governments, because we all know and love how governments like to maintain control. You hear every now and then some 80 year old individual in government say, we need to regulate the Internet. Yeah, good luck with that, mate. Off you toddle. You tell us how to do that; you should have probably done that right at the beginning, but, hey, that horse has bolted. And I think that's what I'm trying to get at. I think the AI horse could bolt very quickly. Once we have reasonable quantum computing, we may actually see changes in the speed at which AI moves. We may develop aspects of artificial intelligence that suddenly, rapidly take massive leaps and advances beyond where we currently are now. I mean, if OpenAI's model has an IQ of, what was it, 153? Richard?

Richard Cassidy [:

155.

Jim [:

Yeah, 155. Well, I've had my IQ measured at least three times, and the average is 143. So it's already more intelligent than me, which isn't a shock horror, because I'm pretty...

Oliver Rochford [:

No, it's not more intelligent, only the vocabulary. But see, most humans learn that vocabulary based on a sample set of ten or twenty thousand words, whereas the AI needed 4 billion parameters. You say you're a singularitarian; I'm a biochauvinist. Evolution kicks computers every time.

Jim [:

I agree with you, but I've read Ray Kurzweil's The Singularity Is Near a number of times, and there's a bit in there where he kind of discusses this, because I think a lot of people think it's either going to be us or them. We're either going to get pushed back into the stone age by some nuclear war started by Skynet, or it's going to take us over and we're all going to be put into little boxes and kept as pets: oh, look at our creators, look how cute they were back then. I don't believe that, necessarily, as long as we're careful, as long as we do this properly, and as long as we don't do something to completely screw ourselves over. I love the fact that I don't have to sift through a load of crap to figure something out. I mean, I learn much faster than most, and it's something that I found very difficult throughout my life, because I've got to take what one person thinks and another person thinks, and then figure it out for myself. And the further back you go, the harder it was. I remember times when I didn't have the Internet and I had to go to the library, and learning when you have to go to the library is really tough, actually, because you don't have the level of wonderful information that you can disseminate that you do now with the wonders of the Internet. There were people back then who used to turn around and say, oh, the Internet, that's never going to take off. Well, look how wrong that was, I'm terribly sorry. It took literally all of five years to completely eclipse absolutely everything that came before. Previously we were sending a letter; now I can send an email and it can get to you in 5 seconds flat. Why the hell would I send you a written letter? Oh, but it's much nicer. When was the last time you wrote a letter?

Oliver Rochford [:

Honestly, I can't even write anymore. I tried it just recently and I noticed that I've almost forgotten how.

Jim [:

I can't write at all. I can't write longer than my signature and maybe a few words before I get really uncomfortable and my hands start to cramp. Type? I could do that all day long. And I see artificial intelligence as going to be a very good complement to us. I think the biological and the mechanical, or the digital, rather, not the mechanical, because then that takes us down the weird Borg route. Biological and digital can work together in really good units. Digital is very, very good at some aspects of operation. Biological is also very good at some aspects of operation. And if they actually come together and knit together, maybe with something like Neuralink, the ability to utilize that kind of artificial intelligence technology is going to be fantastic. Will it gain sentience? Will it not? No, I don't think it will. But it'll become so much a part of your way of working and being human going forward, I think it will become sentient as a byproduct, because it'll become part of what we are and how we disseminate information. Cripes, I'm going too far ahead now. But my problem with this whole thing, as an infosec professional, is it can be so misused. And Rich made a very good point here. We're sitting here, old Biden gets up on top of his podium and says, right, this is what we're going to say about artificial intelligence, and this is what we're going to do. The UK is fast behind them; of course, they say, oh, we were before them, but I'm not going to get into that. The UK pipes up and goes, well, this is what we want to do, and it's probably going to be similar in line with what Biden said, and the EU are going to come up with their own one. And that's going to be great for the commercial world, because that's the regulation that you're going to have as a commercial concern for developing AI and for building technology around AI. But as Richard points out, you're not going to have the same bloody thing when it comes to the military. They're not going to be under the same regs; they never are.

Richard Cassidy [:

To answer that, James, what about the adversarial underworld? They don't follow any regulations, essentially, right? This is the issue. Regulation is a small part of the answer, and I want to cover that point because I think it's important for the listeners to understand, right? Yes, ChatGPT has a verbal IQ of 155 today, and it'll be 1,000 times higher than that in two, three years. But let's think about what we have today, right? The age of advanced neural networks and AI systems, and ChatGPT-4 is an example of that. They all have abilities that were once the realm of science fiction. We know this. They can understand and generate human language with incredible fluency, create art, compose music. A case in point: I sat at my piano last week and I heard 'Midnight' from Cats and then 'Sacrifice' by Elton John, and I went, well, wouldn't it be great if I could combine these? So I asked ChatGPT-4, I said, could you give me a song with a similar chord progression to these two, even though they're significantly different. And my God, did it deliver. I mean, I played it and was like, this is insane. And AI today, we've had this conversation, is even helping drive cars, right? And then just consider the NLP, the natural language processing. The AI systems today not only understand and generate human language, but do so in context. And the reason I'm telling you this is, now let's look at Q*, right? Q* is able to solve questions it hasn't seen before, mathematical reasoning, in this case at the level of grade school students in the US. Okay, so we're getting to a point where we're adding additional functionality, where it becomes a beast that, I think, Oliver, even with regulation, we're going to find extremely hard to tame.

Oliver Rochford [:

I don't believe that just because you have something like this, you can just improve it without limit. There are always limits, right? And my favorite example, this reminds me of when we landed on the moon, and for the next ten years everyone thought within 20 years we're going to have space cities, and we still don't have them. Because what's actually happened here is that this is the first piece of AI that's production grade and is accessible to consumers. At the same time, there are other limiting factors that are independent of the algorithm and so on that still need to be solved. But the truth is that for military usage, an intelligent missile that's targeted at you doesn't need to be sentient. It's bad enough that it can basically be focused on you. And the scary bit about an autonomous soldier or an autonomous weapon is that there's no consent required. In a human society, you can't go to war by yourself anymore. You need a bunch of people to go with you, basically an army. There's always somebody that's limiting your decision; there's not a single person. Nuclear weapons have changed that equation a little bit, but at the end of the day, it's mutually assured destruction that's behind that. But an autonomous robot, I don't know. You give that to the wrong person... that's something where I'm not even sure if regulating is the right word.

Jim [:

I think regulation in the long term is ultimately going to fail. Oh, God, now we're going to get really deep. I honestly think that the moment artificial intelligence reaches a certain point, and I'm not saying when it becomes a true sentient AI or whatever, I'm talking about as a technological advancement, society around the world will just break into ideologies. You're going to suddenly lose governments, because they're not going to be able to control any of it. They're not going to be able to control people utilizing this technology. They can try, but like the Internet, which is a very simple, basic piece of technology, they couldn't control the usage of that and what goes on. They still can't. How the hell are they going to control AI? So what I suspect will happen is you will have the old schoolers who will want to retain life as they've been used to, living out their lives without the usage of AI or with very limited usage of AI. You'll have those like myself, I'll be honest, that will embrace it entirely and go, right, fantastic, let's see where we can go with this. You'll have people who will embrace it with the intention of furthering their own goals. You'll have people who will turn away from all of it entirely for religious purposes, because they don't want to give up their religion. And let's face it, if AI does reach a certain point and kind of almost merges in some way and augments human capability, you're going to have far greater capability than you ever had. You're going to be limited by your organic components, but hey, there's always going to be someone down the line who can build you something newer and better. Your heart's packed up? Not a problem, I've got an AI for that. And it'll help us develop new technologies. Look at printing at the moment. Look at 3D printing. It's still very basic.

Oliver Rochford [:

You're right. That generation, right, using it to actually generate other stuff. Yeah.

Jim [:

Biological beings are very good at dreaming up possibilities. Technological or digital beings are going to be very good at telling you actually how to do it. It can't come up with the concept itself at the moment, but if you could dream it up... If I went onto ChatGPT and said, how would I feasibly develop cold fusion at the moment, it won't be able to do a thing. It'll tell you what it currently knows and what it's learned, and it can go out and tell you all the different projects and publicly available project information as to where we currently are with it, and it can summarise it for you beautifully, but it won't be able to help you develop it. Picture that kind of technology without the limitations that are enforced on ChatGPT. I'd like to point out that they had to enforce them, because, let's face it, people were starting to use it for some crazy things. But in the hands of an actual scientist who says, right, I need to build this, I don't know how to build any of it... it takes me back to The Fly. You remember in The Fly, there's a beautiful scene where he says, I don't actually know how half of this technology works. I just ask an engineer to build me this, I ask another engineer to build me this, and I cobble it together and it works. I think that's kind of where we're going to be going with this. I don't know how to code. I kind of understand the concept, I did a little bit of it, but I don't need to know that anymore. I can turn around to my AI and say, right, I want a program that will calculate pi to the last digit utilizing quantum technology. How would you do it? Off you toddle.

Richard Cassidy [:

Yeah, look, right, I agree with you. AI will get there. I have no doubt. It's just an inevitability that it will get to a point where it can think of concepts and come up with ideas that we haven't considered in humanity.

Oliver Rochford [:

Potential. It's not an inevitability, it's a potential. Well, we agree.

Richard Cassidy [:

We're going to disagree. It is in my contention.

Jim [:

There's contention in the ranks.

Oliver Rochford [:

So if you want to go far out, right, there's a guy called Fermi, the physicist, who came up with a thing called Fermi's paradox. There's a story that he walked along and said, where are they all? And somebody said, who do you mean? He said, the aliens. Where are they? And somebody said, well, think of the distances. And he said, well, it doesn't matter, because if you have enough time, distances don't matter. Even if you have just a simple, stupid robot that replicates itself, it would only take about, I think it was 50 million years, to basically go across the universe or something like that. The most likely form that would have taken would be an AI, and we don't see anything. And so there's an indirect proof there that no one in the history of the universe has ever managed to build a sentient AI that was able to self-replicate, because otherwise the thing would have colonized the entire universe.

Jim [:

You should watch the congress thing.

Oliver Rochford [:

Right, but so there's an indirect proof there that it may not be possible in the way that we think. And that is a big question. How much of a role does biology play? How much of a role does quantum physics play, as Roger Penrose, for example, has suggested? And we also have to be careful about intent. There was an article recently which said that the LLM lied in some tests. The LLM doesn't lie. It only follows what your prompts tell it to do. Right. The other fallacy is, for example, you will see articles which say that an AI is better at spotting new planets than a human. And it's not; that AI is trained to spot differences in an image. It doesn't even know what a planet is. All it does, it's trained on a model to say that this type of image has been labeled in the past as a planet, and if I see a similar image in the future, I will label it the same way. But the moment somebody says, well, it's better at spotting planets than an astronomer, that conjures up an entirely different way of thinking about it. We need to be careful not to project these very human things onto something which essentially is just a piece of software, a little bit more sophisticated than the stuff you know from before, but still essentially just an iterative evolutionary step.

Richard Cassidy [:

Onward from that, sure, but how have we invented anything we've invented today? We've waited for gifted human beings to come along, and in most cases all they've relied on is data from the past. They've consolidated that data in their minds to come up with new concepts. Why is that an impossibility for AI?

Oliver Rochford [:

It's not. Recombination is the simplest form of creativity, and AI is already going to be capable of doing that. But that doesn't mean to say it's going to be able to go further. The fallacy is thinking that we're just going to bolt features on until it's human-level intelligence. Our algorithms are the way they are. To develop new algorithms for a new level of intelligence is a very different step. Sure, it has lots of applications, but at the end of the day, I still say, not in our lifetime, honestly. But we're going to see amazing things way before AGI.

Jim [:

Weirdly enough, I actually agree with both of you, but I don't agree on one thing. I think that integral component, the final component, is going to be the human itself. If we've got this technology, it won't need to get to an intelligent point, because it's already got it in the form of us. I'm talking about very closely bonded. I mean, Neuralink does work, we know it works. It's probably still really clunky; it needs refining. This is where that kind of artificial intelligence technology is great, if we can get it to a certain level, because it will refine that technology, which we will then plug into ourselves, and then we'll go to the AI and go, hey, what else can you do? Can you kind of remember things for me? Because my meat brain is terrible at remembering different languages. A bit like The Matrix in many respects: show me how to speak Japanese. You won't even need to, because the AI will happily do it for you.

Oliver Rochford [:

So you're a transhumanist, Richard is an AI ascensionist, and I'm a biochauvinist.

Jim [:

We should argue like this more often.

Richard Cassidy [:

It's great. But listen, Oliver is 100% correct in that I think, I believe emphatically, that humanity needs to look at AI as an augmentation play for the things that better serve humanity. And if we can follow that path in terms of ethics, I don't think we'll be too bad whenever AGI becomes a thing. And maybe it is 100 years plus away, maybe it isn't. But if we augment, we're on the right path. If we aim to compete against each other as nations, we're on a very destructive path.

Oliver Rochford [:

It's just that the path to AGI is going to be so strange that it's almost impossible to predict beyond the very short horizon which way it will go. And if you look at just the things happening now, even if we only manage to progress a little bit further than we are now, the impact of this is going to play out over the next five to ten years, because it's going to take time for us to work out the best way to use it, the most economic way. People are trying it for every use case. I saw somebody trying to build a SOAR with an LLM, and I thought to myself, you crazy person, that is unaffordable. You cannot process every single event with a call to it. But people are trying things out, and in a year or two, you're going to start seeing what works, what doesn't, where does it make sense, which use cases. If you think of what you mentioned about Hollywood, right, visuals. Video and sound are solved; they're solved, they are production ready. Image recognition, and more importantly, image difference recognition, is essentially solved, right? We're doing loads of stuff in the medical area, we're doing it in astronomy, stuff like that. If you look at the military area, a lot of problems that they had before around decision intelligence are also very much solved at a low level. Now you can see it with some of the robot videos that they have, right? They're not just standing there stuttering for two minutes and then moving forward like they did 20 or 30 years ago. So this is already revolutionary. It's just, I think, people are expecting stuff to happen really quickly, because we have that tendency as a species. But even from a business and regulation point of view, this is going to be a thing that builds up momentum steadily over the next years.

Jim [:

I don't know. As I said, there are two things I would say. I think we are going to have the fear and the regulation early, and it's going to fail like it always does. I think governments are more fearful of the usage of AI because it will take power away from them, when you have a whole populace who have access to artificial intelligence and you don't put significant controls on it. I mean, look at what happened over the pandemic. We all know it now. Back then, many people were gagged because it was like, no, you can't talk on social media about certain aspects of the pandemic, that it came from a Wuhan lab. Wow. God. If you said that back then, that was it, you were earmarked by some government institution. If you even expressed a slight kind of, I'm a bit dubious about having a vaccination that has not had the relevant level of testing that other vaccinations have, you were demonized as an anti-vaxxer. I'm not saying I was on one side or the other; I see a point from both sides. I don't think that government getting involved in technology is going to work. I think it will initially, but then the technology is going to completely outstrip them. What are they going to do once the horse has bolted? What are they going to do? Look at it with the Internet. They couldn't control that. They like to think they can, but we know they can't.

Oliver Rochford [:

But who is this going to empower? Everyone thinks that. Everyone thought that about the Internet, and at the end of the day, it belongs to big corporations now, right? I mean, that was always a fallacy to begin with. Money will own it eventually. But as I was saying, I read this crazy article last week where somebody was talking about automating a CEO. And when I extrapolate this into a scenario, so you have an AI CEO who sends a job to AI agents to build a piece of software. Why, if I have my own AI, why do I need to buy your software? Right? And more importantly, if you have all of these AIs interacting, what, we just sit around and our AI makes money for us? That sounds like...

Jim [:

I don't think money will be a thing. Fiat currency won't be a thing; it's already on its way out now. I mean, you look at the currencies around the world and the way they've been pumping them. When we finally evolve as a species, our next stage of evolution is going to be with technology, not against it. We're not going to spend 10,000 years developing a third eye in our heads and telepathy. We're going to find some way, technologically, to do that far, far earlier. We can already do it now in many respects. You can have an implant in your inner ear that you can use as a phone. There was that crazy professor over in some Oxford university somewhere that did it with his hand, the first technical cyborg. All of this kind of technology is coming. And I think the final piece will be when you can utilize artificial intelligence. Because, part of what you're saying, the limiting factor with technology at the moment is materials, and people can't afford the materials to be able to process this stuff, right? Okay? And it takes lots of money and corporations at the moment to do that. There is nothing more powerful on this earth than the human brain, and we only use bugger all of it; the rest of it is completely useless. So the moment we start implanting technology to expand our consciousness through Neuralink, and being able to record everything that you see and hear so you can replay it back, your AI is starting to learn, and you're becoming kind of intrinsically joined to the little artificial intelligence that you will inevitably have in your brain, or in the technical part of your brain, that's going to help you remember this stuff and research things and all the rest of it. It will learn how to utilize your own brain in order to process that information. And the moment that happens, you don't need organizations, you don't need money, you don't need anything. You can come up with pretty much any concept you want with the supercomputer that you have in your noggin.

Oliver Rochford [:

But if I'm in charge, why would I let that happen?

Jim [:

How are you not going to let that happen?

Oliver Rochford [:

Because, building that AGI, the first person to get an AGI will be the last person to get an AGI.

Jim [:

But look at it this way: governments regulate drugs, they regulate guns, but people still deal in them. They can't stop it; they can only disrupt it with sting operations and all the rest of it. Once you have a technology that's intrinsically developed, it is very difficult to put that cat back in the Schrödinger box.

Oliver Rochford [:

You have to differentiate between simple technologies like a gun, which I can put together in my garage, and a technology for which I need processes that only a few companies on the planet can actually build. There's a fundamental difference. It's more like nuclear weapons.

Jim [:

Yeah, let's track that. Building a gun used to be difficult; you can now print one on a 3D printer. If you know how to do it, you can get it to print one. If you've got a 3D printer and the ability to utilize that technology, restricting your access to firearms, legally or illegally, becomes completely useless. How are the government going to regulate it?

Oliver Rochford [:

That doesn't invalidate my argument.

Jim [:

It doesn't invalidate it, but AGI is.

Oliver Rochford [:

Not something you're going to fit onto a USB keyring and run on your laptop. It's going to be intrinsically...

Jim [:

No, I'm not saying that at all.

Oliver Rochford [:

Yeah. It's going to be tied into the hardware; it will be purpose-built for it.

Jim [:

But with quantum technology, that hardware gets intrinsically smaller and smaller and smaller and smaller.

Oliver Rochford [:

No. Do you know where we already are with that? Those quantum computers, for the first I don't know how many years, will be isolated to research organizations. You're not going to be able to buy one.

Jim [:

No, I agree.

Oliver Rochford [:

And like I said, Mr. Powerful builds a quantum computer and gets AGI. Why is he going to sell it to you? At what point? Why would he give it to you? He can dominate the whole world. And you need that kind of moonshot money to build the first one; this is not an open source project someone's going to release so that you can then topple the people in charge. That sounds like a cryptocurrency argument, and that doesn't give you a military.

Jim [:

This is where you and I also disagree, on the subject of crypto. I'm a big advocate and all the rest of it, and I know that you're very suspicious of it.

Oliver Rochford [:

I'm not suspicious of it. I just don't believe it can do what people say it does.

Jim [:

There we go, and I agree in many respects. Now it's Richard's turn to sit there and go, oh my God, these two are arguing with one another.

Richard Cassidy [:

Look, I think it would be unwise to apply historical analysis to a technology of the future that we just don't understand. And that's all I'm saying. I get that we can use simplistic analogies from historical incidents, but that's not what we're dealing with here. And I don't believe, you know, that humanity is ready for AGI. We haven't evolved in the way that we think, or, as I said, beyond our colloquial, war-minded economics of today, to actually have AGI benefit the planet. Because, to your point, militaries are probably going to be the first organizations that will fund this, and the first military to get AGI is going to become the new military superpower.

Oliver Rochford [:

Clearly.

Richard Cassidy [:

But that's why it's all wrong. And that's why it's on everybody listening to this podcast who is looking at AI, where it's going to start to put pressure on industry and on decision making, to make sure that we don't repeat the mistake of Oppenheimer's project with Albert Einstein, and that we don't create a cascade that's going to eventually end up causing a huge risk to humanity.

Oliver Rochford [:

But who's "we" in this? So that's what I mean: there's always this hazy "we". There's no explanation of how you're going to stop a government or someone like Elon Musk building this; it's just a hand-wavy thing, right? If you believe that this is going to be so powerful, the first person to get AGI doesn't just get AI dominance, they get dominance. That AGI will be able to build the missiles, the poisons, everything. Why would you sell access to it? And that's the big if, if you believe this is the imminent end goal, right? The big game going on is that no country is going to pull the brake, because the first one to get it will dominate everybody else. That's what the e/acc movement is saying; that's why they exist, why they don't want the brake to be pulled. And this idea that some of this is going to happen bottom up, that somebody's going to build it in their garage? I'm sorry, that's not how it works. There's a reason Nvidia became one of the most valuable companies on the planet almost overnight. You need specialist hardware, you need PhDs to do this. And the first person to put all of that together is going to be very cautious about who they sell it to and how. The power consumption just for training that kind of model puts it out of reach of a lot of people. Like, generally, it costs somewhere around $3 million to do a training run for GPT-4.

Jim [:

That's one training run, currently, now. But even with Moore's Law, that's going to get cheaper and faster every year. Obviously, as Richard pointed out right at the beginning, we've kind of busted straight through Moore's Law already.
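For listeners who want to play with Jim's point about training costs falling over time, here is a minimal back-of-envelope sketch in Python. It reuses the roughly $3 million per-run figure quoted above and assumes a hypothetical two-year, Moore's-Law-style halving period; none of these numbers are forecasts, just an illustration of how exponential cost decline compounds.

    # Back-of-envelope sketch: how an assumed exponential decline in compute cost
    # compounds over time. The $3M starting figure echoes the number quoted in the
    # conversation; the two-year halving period is a hypothetical assumption.

    def projected_cost(initial_cost: float, years: float, halving_period: float) -> float:
        """Cost after `years` if it halves every `halving_period` years."""
        return initial_cost * 0.5 ** (years / halving_period)

    if __name__ == "__main__":
        initial = 3_000_000  # quoted cost of one large training run, in USD
        halving = 2.0        # assumed halving period in years (Moore's-Law-like)
        for year in (0, 2, 4, 6, 8, 10):
            print(f"year {year:2d}: ~${projected_cost(initial, year, halving):,.0f}")

On those assumptions the quoted run drops below $100,000 within about a decade, which is the dynamic Oliver's "cheaper for the bad people first" worry rests on.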

Oliver Rochford [:

But it becomes cheaper for the bad people first.

Jim [:

Well, yeah, because they have no morals.

Oliver Rochford [:

Yes. Once they have it, they're going to stop you getting it. That's what I mean. It's not a trivial thing to completely replicate, but whoever gets it, hypothetically, if someone gets it, their ability to build autonomous robots, weapons, anything, is going to be completely unparalleled. It would be a light year jump.

Richard Cassidy [:

Right, and therein lies the problem, and it's at least worth thinking about. I don't genuinely think, hand on heart, whether you're on the bad guy side of the fence or the good guy side of the fence, if there even is such a thing anymore, that you're really thinking about the consequences of your actions or how you're going to apply this, because the worst case scenario is there won't be a planet left for us to inhabit. And so that's why legislation is only the beginning of it. But in the conversations I'm having with my children and with my friends around the ethics and morality of AI, I really hope, and I know it's probably a naive thought process, that we mature as humanity to understand that this is a far more existential threat than I believe nuclear weapons are.

Jim [:

That's assuming we get to the point where we actually have this technology, because it's not looking particularly good at this moment in time.

Richard Cassidy [:

It's great at the moment, certainly.

Oliver Rochford [:

The thing is, there are a whole bunch of risks way before that. We're already facing them now, and people should be focusing not on the what-if but on the what-is, because if you look at just the example of the writers' strike, this is going to have a huge societal impact. It cannot be overstated. Within the next ten years, the impact on the labor force, unemployment, pros and cons, this is going to...

Jim [:

I agree with that.

Oliver Rochford [:

And our discussions right now shouldn't be about AGI and politicians. I'll be honest with you: most people, us included, can commentate, we can watch what other people do, but we're not going to be the people solving this. And who is going to be solving this, I'm not quite sure, because if you look at that declaration recently, there were a lot of politicians and a lot of business leaders, but not a lot of PhDs who actually work with machine learning.

Jim [:

I'll volunteer to upload myself into a robot body as a prototype. I have no problems with that.

Oliver Rochford [:

I believe the future is genetic modification. I don't want a robot to go and discover the universe. I want some descendant of us to go.

Richard Cassidy [:

Agreed.

Jim [:

I don't care. As long as I'm in my robot body with the arms that can destroy rocks.

Oliver Rochford [:

What would you be without your body? You're not just your "I don't care about it"...

Richard Cassidy [:

You're all your emotions.

Oliver Rochford [:

You're waking up in the morning and having that scratch on your back. That's huge.

Jim [:

Get rid of it. I don't give a stuff.

Oliver Rochford [:

I love my body. I'm very attached to it.

Jim [:

Do a simple cut round here, you know. Make me a Cyberman. I'll happily do that.

Richard Cassidy [:

He's talking about positronic brains and Data and Star Trek. But look, I think, to Oliver's point, the here and now is what's important. Yes, AGI is a way off; how far off is up for debate. I think it really sits with business leaders at all levels of business, from startup to multinational enterprise corporation: let's look at AI as something that will augment our workforce. Augment. Please, please don't think of it as a tool to replace people and cut costs, because human brains for human problems, always.

Jim [:

But equally, maybe we should be taking the power of this technology and applying it to actually figuring out some of the serious bloody problems that we've got in the world already, with climate change and all the idiocy that's going on, where half the world is in flames. I'm not even going to go into that ideological nightmare.

Oliver Rochford [:

There's an old wisdom: never expect technology to solve social problems. That's the thing, that's what's so comical to me. There's this bunch of geeks in Silicon Valley who basically want to build this digital god because they want somebody to go and punish everyone they don't like and put the world right. But in reality, we're going to have to solve this ourselves, maybe with the help of AI, but it's going to come down to what we do with it.

Jim [:

Exactly. Why not use it to develop new technologies, to try to help out, to cure diseases? Take Neuralink alone: if we can couple it with some of this technology, it can be used to help people with degenerative mental conditions as they get older. It can be used in a myriad of different ways to help us be better and give us a little bit more time to be able to help make everything else better. God, I sound like I'm preaching now. Join the singularity.

Oliver Rochford [:

Yeah, but it's an interesting point you make. You see, I dabble as a futurist, and when I look for that kind of tipping point, to me a tipping point is when people start having prosthetics built in even though they're perfectly healthy. Right now, to fix a problem, exactly, you'd be perfectly willing to do that. But you understand it's a tipping point when you start taking off functional parts of your body and putting electronics in. That's definitely a new era of humanity. And I find that an intriguing point because, of course, I know there are people willing to do it. There are already people experimenting with implanting RFID and so on. Of course, when that brain implant comes, it's going to be interesting to see how it pans out, who's going to be using it and so on.

Jim [:

I think we'd better stop there and maybe approach this again. There's so much to this subject matter: the security behind it, the technology behind it, the ethics behind it, the futuristic side of where do we think we're going with this. I mean, we're probably going to have to visit this on several other occasions as new things appear and society comes to grips with it, as well as the technology moving on. But it's always a really good thing to debate with you guys. And, yeah, hopefully for all of you out there, it's given you a bit of food for thought. This is not an easy subject; you can be on various different sides of the fence, but you can still agree every now and then.

And so, Oliver, Richard, as always, it's been fantastic, and let's revisit this in another six months or even earlier, depending on what goes on, because quite frankly, who knows at this point?

Richard Cassidy [:

Thank you. Great talking to you again, Oliver.

Oliver Rochford [:

Yeah, awesome.

Jim [:

All of you out there listening in to us arguing over artificial intelligence, please feel free to come back and see later on how this particular story plays out. We're probably going to be producing additional content, as I just said, with regards to this. If you do like this content, if you want us to debate something, some new technology, some new concept that's coming out, or you just want to drop us a comment, please feel free to do that. So look after yourselves, and we'll speak to you all soon. Thank you for listening to the Razorwire podcast. If you like the podcast, if you love the podcast, please feel free to subscribe. And if you have any questions, please get in touch. Thank you very much and have a great day.
