Have you ever wondered how the next generation will be impacted by AI? Michael Ostapenko, founder of Satryx, is building that future, and his vision goes beyond the hype. In this thought-provoking conversation with Melinda Lee, Michael dissects the current limits of AI and makes a compelling case for why our children won't be raised by rogue superintelligences, but instead, by powerful tools that lack human-like motivation.
This episode is a must-listen for anyone concerned about the world we are building for the next generation and how to prepare them for a partnership with intelligent technology.
In This Episode, You Will Learn:
The AI Tutor of Tomorrow
Why the next generation will learn from AIs that are powerful logical reasoners, not just conversational chatbots, transforming education from memorization to master-level critical thinking.
Preparing for a Partnership, Not a Takeover
“Agency is a property of life, not intelligence.”
How to alleviate anxiety for ourselves and our children by understanding the fundamental difference between a tool and a living entity.
The New Digital Divide
“Quantum computers can solve powerful classes of problems, but you have to ask the right questions.”
As quantum computing and advanced AI unlock solutions to global problems, the next generation's challenge won't be access to information, but the ability to ask the right questions and wield these powerful tools ethically.
Instilling "Creator Responsibility" in the Next Generation
“At the end of the day, the responsibility is still on you.”
Why teaching kids to code is no longer enough; we must teach them the ethics of the goals and constraints they program into intelligent systems.
About the Guest:
Michael Ostapenko is the founder and CEO of Satryx, a company on the cutting edge of artificial intelligence. With more than two decades of deep experience in science, engineering, and leadership, he is advancing automated reasoning on conventional hardware toward practically quantum-enabled performance. At Satryx, he is building a foundational platform that fuses these logical breakthroughs with modern machine learning. His long-term vision is to enable the next generation of AI systems that approach true human-level intelligence, capable of both semantic understanding and rigorous logical reasoning.
🚲 Gravity-Defying Survivor: Beyond the lab, Michael has tested his own limits, surviving a dramatic bike crash that involved executing a near-full flip in midair.
About Melinda:
Melinda Lee is a Presentation Skills Expert, Speaking Coach, and nationally renowned Motivational Speaker. She holds an M.A. in Organizational Psychology, is an Insights Practitioner, and is a Certified Professional in Talent Development as well as Certified in Conflict Resolution. For over a decade, Melinda has researched and studied the state of “flow” and used it as a proven technique to help corporate leaders and business owners amplify their voices, access flow, and present their mission in a more powerful way to achieve results.
She has been the TEDx Berkeley Speaker Coach and has worked with hundreds of executives and teams from Facebook, Google, Microsoft, Caltrans, Bay Area Rapid Transit System, and more. Currently, she lives in San Francisco, California, and is breaking the ancestral lineage of silence.
Thanks so much for listening to our podcast! If you enjoyed this episode and think that others could benefit from listening, please share it using the social media buttons on this page.
Do you have some feedback or questions about this episode? Leave a comment in the section below!
Subscribe to the podcast
If you would like to get automatic updates of new podcast episodes, you can subscribe to the podcast on Apple Podcasts or Stitcher. You can also subscribe in your favorite podcast app.
Leave us an Apple Podcast review.
Ratings and reviews from our listeners are extremely valuable to us and greatly appreciated. They help our podcast rank higher on Apple Podcasts, which exposes our show to more awesome listeners like you. If you have a minute, please leave an honest review on Apple Podcasts.
Transcripts
Melinda Lee:
Melinda Lee: Welcome, dear listeners, to the Speak and Flow podcast, where we dive into unique strategies and stories to help you and your team achieve maximum potential and flow. Today, I have a leader in a very hot topic, AI, and I can't wait to dive into what is happening in the landscape. He's got some really great vision for where it could go. And so, welcome, Michael Ostapenko!
Michael Ostapenko: Thank you, Melinda.
Melinda Lee: Hi, Michael, founder of Satryx. Satryx, right? Yeah.

Michael Ostapenko: Yeah.
Melinda Lee: Tell us, what are you excited about? What is the vision, and where do you want to take it?
Michael Ostapenko: So, my vision is to create an artificial intelligence system which would be able to produce results that are both meaningful and make sense. Today's systems are more restricted, limited to the former, the meaning. They're good at semantics, but they're really, really bad at logic and logical reasoning.
Michael Ostapenko: That's why, when you use chatbots like ChatGPT or anything else, you can see that the output they produce is really meaningful; it has these connections, which are very natural. But at the same time, it sometimes makes subtle errors, which an expert can notice nowadays, but which may go unnoticed by the general population. That's why it's so popular with the general population, but not so much adopted by companies and businesses. It's not really reliable.
Michael Ostapenko: And the real problem is the underlying technology. Neural networks in general, no matter what architecture they have, no matter what optimization algorithms you use to train the network, no matter what primitives are used in these networks, will never be able to reason logically, because it's just not what they do, mathematically. It's just impossible.
Michael Ostapenko: Now, it's possible to create a system which somehow incorporates a neural network, and there are various ways to do it. The point is, if you do, you can leverage the neural network's ability to model semantics and somehow combine it with a formal system's ability to do rigid, logical, reliable reasoning and analysis. And when you merge these two, you kind of get the actual intelligence that we humans possess.
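The merge Michael describes, semantic proposal plus formal verification, can be caricatured in a few lines of Python. This is purely an illustrative sketch, not Satryx's architecture: a fallible "proposer" stands in for the neural, semantic part, and an exact checker stands in for the formal, logical part.

```python
from itertools import cycle

# Toy neuro-symbolic loop (illustrative sketch only): a "semantic"
# proposer makes fluent-but-fallible guesses, and a formal checker
# accepts only candidates that pass a hard logical test.

def make_proposer(query):
    """Stand-in for a neural model: plausible guesses, sometimes off by one."""
    base = int(query ** 0.5)
    offsets = cycle([-1, 1, 0])  # deterministic stream of guesses
    while True:
        yield base + next(offsets)

def logical_checker(query, candidate):
    """Stand-in for a formal system: rigid, reliable verification."""
    return candidate * candidate == query

def solve(query, max_tries=10):
    """Propose-and-verify: unreliable guesses, but only verified output escapes."""
    proposer = make_proposer(query)
    for _ in range(max_tries):
        candidate = next(proposer)
        if logical_checker(query, candidate):
            return candidate
    return None  # no proposal survived checking

print(solve(49))  # -> 7 (the guesses 6 and 8 are rejected; 7 is verified)
```

The design point mirrors the conversation: the proposer alone is "meaningful but unreliable," the checker alone can't generate anything, and only the combination yields answers that are both fluent and correct.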
Michael Ostapenko: And, basically, that's my vision for the future of artificial intelligence. I'm not convinced that we're there yet, or will be there in the near future. In my opinion, there are many pieces which are still missing. But, as I said, there are parts which are starting to emerge.
Melinda Lee: Well, I'm curious. If you start to create something like that, like you're saying, it's mimicking the human brain, but probably even more powerful, because…
Michael Ostapenko: I would rather not use that comparison, because, honestly, I have no idea whether this model will mimic a human brain or not, or even human intelligence as a product of the brain, of brain activity, right?
Melinda Lee: But…

Michael Ostapenko: I'm quite confident that the result, the output of such a system, would be pretty much indistinguishable from what human intelligence produces.
Melinda Lee: Okay, so that's my fear. We don't know what we're creating, and we don't know what it's going to do on its own. The intelligence, the AI.
Michael Ostapenko: Right, so I wouldn't fear that. This is another concept, actually, and people usually mix these up. You're talking about agency, the ability for a machine to take actions which are beneficial to it.
Melinda Lee: Yeah, I'm talking about the intelligence, the AI intelligence, let's just call it that, starting to do things that we're not programming it to do, because it's starting to create networks on its own.

Michael Ostapenko: It's not gonna happen.
Michael Ostapenko: Yeah, I mean, it's more of a sci-fi story.

Melinda Lee: Really?

Michael Ostapenko: Yes, it's not gonna happen. As I said, you can program it to make something that works the way you described, right?
Michael Ostapenko: But you'd still be the one who programs it. Now, frankly speaking, this agency thing, in some sense you can argue it's emergent, but on the other hand it's really not, because there won't be any real motivation for these artificial intelligence systems to do anything. Simply because motivation is more a property of life.
Michael Ostapenko: Life is what basically drives us, what gives us motivation. And intelligence is a tool which life uses to achieve its goals, to reproduce, to prolong its existence.
Michael Ostapenko: So when we're talking about these artificial intelligence systems, we're not talking about life. At least I'm not.

Melinda Lee: Hmm, got it.
Michael Ostapenko: So these systems will lack this inherent motivation, just like modern artificial intelligence systems, neural networks, lack the ability to reason logically. It's just impossible to add this ability to neural networks directly, and it's impossible to add motivation, this agency, to artificial intelligence, because it's not a property of intelligence. It's more a property of life than of intelligence.
Melinda Lee: Got it, got it. So you're saying it's a separate thing, to have the motivation to do something it wasn't programmed to do.
Michael Ostapenko: Yeah, it's a separate thing. From the philosophical perspective, there was a philosopher, I think Kant, who described this idea of "things in themselves." It's basically when you define something which cannot be defined in other terms, right?

Melinda Lee: Right.
Michael Ostapenko: That's the situation. When we go into this area of intelligence and things like that, these are very fundamental concepts. Very often, one thing can't be expressed as another.

Melinda Lee: Yeah.

Michael Ostapenko: It's just not possible. You need both.
Melinda Lee: Because I saw a documentary where they were programming robots to just play sports. There were two robots playing soccer against each other, and the goal is for each player to get the ball into the goal. That's the game.

Melinda Lee: And they said the AI robots were playing, and they only programmed one goal: get the ball into the net. But over time, as they played each other, they started to form neural networks. "Oh, this worked, this got me a goal, this didn't." And over time they started to create patterns and do other things, because they were trying to figure out one problem to solve over another, and it became a different type of game. The original goal got skewed.

Melinda Lee: So they're saying that's how it could be possible, when they start to solve different problems and continue to build on each other.
Michael Ostapenko: Yeah, I hear you, and I remember a similar story, not long after the appearance of, I think, GPT-3.5 or something, about some kind of Pentagon experiment with AI.

Melinda Lee: Yeah!
Michael Ostapenko: Where they had these drones flying, and they had this objective to…

Melinda Lee: Hit the target, yeah. Or something like that.

Michael Ostapenko: That's how they scored; that's how the reward function worked. The more they hit, the more reward they would get, and that was the goal, right? And then, at some point, the human operator said, for whatever reason, do not execute. And the artificial intelligence system decided the target was still a reward for it. So what it did was destroy the communication tower connecting it to the operator, and it still hit the target!
Melinda Lee: Exactly! See? That's what it does! It could be possible.
Michael Ostapenko: I mean… well, it's not possible with the current systems; it's definitely out of reach. It's a fairy tale. But say we do this thought experiment: we have an intelligence which is quite powerful, able to do more than what is currently possible.
Michael Ostapenko: And it's given that goal, and it's given, let's say, the physical means to achieve that goal. Then, yeah, technically it's possible, and it can learn on its own, on the go, right?

Melinda Lee: Right.
Michael Ostapenko: So, in order to achieve that goal, it could experiment and things like that. Yeah, technically, it's possible. But, and this is a really big "but": in the real world, if something like that happens, if there is a deviation from what's expected in human society, it will be noticed right away, just like you'd notice if some person did it. So it's going to be punished, right?
Melinda Lee: Yeah, hopefully! As long as we know how…

Michael Ostapenko: Yeah. And then, yes, there's that other story from sci-fi movies, which depict artificial intelligent beings with human-looking bodies, who possess the ability to think in such complex and deviant ways that no human can possibly outsmart them and prevent them from reaching their goals. Sure.
Michael Ostapenko: But then you need to think about that. This goal: you're the one who's setting it, and you're the one who's limiting or giving the system the physical capacity to achieve it. So, at the end of the day, the responsibility is still on you if something goes wrong.

Melinda Lee: Yes, yes.
Michael Ostapenko: Because if the system doesn't have the capacity to achieve that goal, even if technically it could learn something like that, it won't be able to do it. It won't even be able to learn to do it, because it just lacks the capacity.
Michael Ostapenko: And you also need to consider other factors, like energy and things like that. We as humans are accustomed to thinking that a machine will just go and think and do projects the same way we do. But there are energy constraints. We need to eat, we need to breathe. Now, you put a battery into a robot. How long will it last? You put some kind of chip into a robot to execute those complex algorithms, those computations over the neural network. How long will it last, for it to be able to produce these superior results? Now, you can say…
Michael Ostapenko: Well, it can outsource the computation to some data center. Yeah, but that's a physical network connection, and it's always controlled, right? The data center's electricity consumption, the consumption of computational resources in the data center, it's all controlled, it's all under surveillance. Any deviation would be noticed. And how would it even use a data center if it has to pay for it? Who's gonna pay for it? If you start thinking and going really deep into all these things, you'll see that it's impracticable for something like that to happen.

Michael Ostapenko: And that's actually what life is kind of about. It's about complexity. That's why the monsters we see in horror movies don't really exist in real life. Things like that are just unsustainable in real life; they can't survive. And this ultimate killing machine presented by some doomsayers won't be able to survive in this world either.
Melinda Lee: Okay, thank God. I can sleep tonight. And so what is this whole thing about quantum… did you say something about quantum AI?
Michael Ostapenko: It's not about quantum AI; it's about quantum computation. Let's say we have these conventional computers, which are based on classical physics, not so much on quantum. They do exploit certain quantum effects, I suppose, at a very low level in microchips, but that's not used to speed up the computation exponentially. What we mean when we talk about quantum computers is this exponential speedup over conventional computers.
Michael Ostapenko: Why is that important, in general? In mathematics, and in computer science specifically, there are so-called computational complexity classes of problems. The idea behind those classes is that if you have a class which describes a very hard problem, it usually has an associated formal language which is extremely expressive, because it allows you to express that powerful problem. You can use this language to model the real world and solve real-world problems formally, rigidly, accurately. And the main idea behind such a class is that if you have an efficient solution for just one problem from the class, you have efficient solutions for all the problems which fall into it, because in practice there are easy ways to transform one problem of the class into another. And that's the crux of it.
Michael Ostapenko: So quantum computers can basically solve one of these powerful classes of problems. That's why everyone is after them: if you build a quantum computer, it can solve one problem from this very hard class, and then you can solve all of them. And these classes are so expressive that they can describe, as I said, biological processes, physical processes, social processes, anything that is extremely complex. You can describe them and then optimize different aspects of these processes: build new drugs, build new materials, things like that.
Michael Ostapenko: And that's why everyone is after quantum computers. But at the same time, even though these classes are so powerful and so expressive, to this day there is no mathematical proof for some of them that they can't be solved efficiently using classical means. That is one of the so-called Millennium Prize Problems, each carrying a $1 million prize.
Michael Ostapenko: Yeah, but… Now, I'll say right away, we aren't trying to solve this. It's beyond our scope. We are just trying to create, like, what's practically possible and what's feasible right now. Yeah, but,
228
:
Michael Ostapenko: In general, just to set a, like, a stage, like, there, there is this price, and there is this problem.
229
:
Michael Ostapenko: And, no one knows the solution. No one knows if the solution is even possible, or even if it's even possible to prove that it's impossible. No one knows anything about it. So, but, what we believe, and what we…
230
:
Michael Ostapenko: like, C?
231
:
Michael Ostapenko: Is that current solutions to these kind of problems, they aren't optimal yet.
232
:
Michael Ostapenko: there's still… Ways to improve.
233
:
Michael Ostapenko: efficiency.
234
:
Michael Ostapenko: And, we are aiming to do just that.
235
:
Melinda Lee: I love it.
Michael Ostapenko: We're not trying to build quantum computers; that's a different topic, and I have my opinion on that too, although I can't be considered an expert in that field.

Melinda Lee: That's what you're doing at Satryx, right?

Michael Ostapenko: Not quantum computing; the classical approaches. I'm just explaining it using quantum computers because they're a hot topic. It's kind of ironic, because what we do, logic and algorithms and mathematics, is kind of ancient, while quantum computers are relatively new and everyone is talking about them, like the new child. But, yeah.
Melinda Lee: Fascinating. So fascinating. I'm so appreciative of this conversation; I'm learning a lot, because it's so different from what I do with people and communication. We have these AI systems and computations really fast-tracking how well we do things as humans, right? How well we're able to solve problems, these really challenging, complex problems, with the leverage of technology. So I really appreciate your expertise and your experience digging into this, learning it, and applying it to help society.
Michael Ostapenko: Yeah, that's basically the goal. Technology on its own is nothing. It's not just useless, it's really nothing, unless there are people to use it and benefit from it.

Melinda Lee: Yeah, I agree. And thank you, because now I can sleep tonight. I'm not afraid of AI robots taking over the world.
Michael Ostapenko: Well, I'm glad. You can definitely sleep tight for the next decade, or several decades, because the current systems aren't there yet, and I haven't seen any fundamental progress which would allow anything like that to happen. Yet.
Melinda Lee: There's a lot of, like…

Michael Ostapenko: Yes, there is a lot of hype about this field. There was a breakthrough with transformers and things like that, which showed that semantics can be modeled efficiently using these systems. But beyond that, it's really more hype than anything substantial.

Melinda Lee: Yeah. Good, good.
Michael Ostapenko: I really appreciate it.

Melinda Lee: And Michael, how would people get ahold of you? Where do they find you if they want more expertise from you or your company?
Michael Ostapenko: I'm usually available through email; I always check it. You can write to ceo@satryx.com. Satryx is spelled S-A-T, with T as in Tom, then R-Y-X, dot com.
Melinda Lee: Yep, perfect! And we'll have your email in the show notes, too.

Michael Ostapenko: Yeah, thank you, Melinda.
Melinda Lee: Okay, thank you so much, Michael. It was such a great conversation. I really enjoyed it, I learned a lot, and I appreciate your time.

Michael Ostapenko: I'm happy to have been invited by you, and it was really a pleasure talking to you.
Melinda Lee: Thank you, and thank you, audience, for being here. I trust that you got your takeaway from today, so continue to use and leverage technology however it serves you best, so that you can have deeper relationships and be able to enjoy your life.

Melinda Lee: And so, until next time, I'm your sister in flow. May prosperity flow to you and through you, onto others, always. Thank you. Bye-bye!