"Pay rent forever": Tante on Luddites and AI Data Centers
Episode 5 • 10th February 2026 • The AI-Freecast • Myles McDonough
Duration: 01:16:43


Shownotes

In this episode, Tante discusses the economics of data centers, the potential role of AI in government surveillance, and the importance of friction in a meaningful life.

Tante is an independent theorist working on the intersection of technology, politics, and the social. He is a de-evangelist and luddite. He writes and speaks (but mostly writes) about the social consequences of the digital turn as well as the structure of the digital itself.

Links

  1. Join the AI-Freecast Patreon
  2. Bansuri Meditation Live @ ALPACA 8/24/19
  3. Eoson.app

Transcripts

Welcome to the AI-Freecast—a podcast where skeptical humans come together to look critically at AI technology and its place in our lives. My name is Myles McDonough. I’m a ghostwriter, author, and improv teacher, and I’ll be your host.

Quick plug: We keep generative AI out of our production process here, and that means paying real-live human beings to record, edit, and transcribe these episodes. Right now, that comes to about $200 per episode, which I’m currently paying out of pocket. If paying humans for their work is important to you, and you’d like to chip in, I’ve set up a Patreon at patreon.com/theaifreecast. Please become a Patron today and help support human creative work.

Today, I’m thrilled to welcome Tante to the podcast. Tante is an independent theorist working on the intersection of technology, politics, and the social. He is a de-evangelist and luddite. He writes and speaks (but mostly writes) about the social consequences of the digital turn as well as the structure of the digital itself. Together, he and I talked about the economics of data centers, the potential role of AI in government surveillance, and the importance of friction in a meaningful life. Let’s dive in…

Myles McDonough (MM)

All right, we are on with Tante. Tante, thank you so much for joining us here on the AI Freecast. It's lovely to have you today.

Tante (T)

Thank you for the invitation.

MM

Let's jump right into it. Right before logging on here, I was doing a quick review of your bio on your website, and I hadn't realized before this point that you are a self-identified Luddite, which is awesome. Tell me, who were the Luddites, and why do you consider yourself one of them?

T

The Luddites were textile workers in England a few hundred years ago, when automation hit the textile industry there. Many people know the Luddites as: they hated technology and they just wanted to destroy it. But that's basically the government propaganda from back then, because what the Luddites really cared about was: okay, automation technologies like the power loom were introduced, and they realized this thing will be used to make our lives worse. We will create worse products, our working conditions will be worse, we will get less money, everything will just be worse, and they didn't agree with that. They said, okay, if this thing makes people more productive, which one could argue, but one could also disagree, but if it makes people more productive, we should also get some of the benefits. Because as this thing was set up, they realized quickly: any benefit this will give, financially, whatever, will go to the person owning the machine and not to us; we'll just have less power than we had before. And that's why they fought, and they didn't fight the machine, they fought the social structure, the economic structure that they were forced into and that was forced upon them. The historian David Noble called them the last people in the West who looked at technology in the present tense. They didn't look at technology as, this is something that could be the future, utopia, blah blah; they looked at technology in the world as it was. That okay, given the power structures, the economic structures that we have, if this technology, in their case the power loom, is introduced, what will happen is that our lives will get worse and we will get nothing for it. And they didn't agree with that. And I think that kind of thinking applies very well to our modern world, where there's also a lot of technologies being thrown at us. Sometimes we like them, sometimes we wonder: who asked for this? Why does it have to be this way? Why can't that be better? It used to be better in some ways. Or sometimes: okay, where's the benefit, and for whom? And I think the Luddite approach is looking at technology not as these abstract objects that you look at like in a lab, this is the technology and I try to understand what the mechanism does, but no, this is a thing, and of course you try to understand how it works, but also: if I put this thing into this world that we live in, what will this do? And that's often very different from if you just look at these things without context, without connection to the real world. Sometimes if you look at the technology, it looks like, this could be cool, this could be useful. Like, say, when Uber came out, Uber's story was: if we give people the opportunity to use the car that they have in their off time to make some extra money, this will get people to not buy cars, so we have fewer cars. That was the sales pitch. And if you just look at it in the abstract, that makes sense. If you look at it in the real world, it's like: no, people will buy cars to be Uber drivers, so you have extra cars. And that's the kind of thinking that you need to do, because I think looking at tech just as these artifacts, these machines that you analyze as machines, clouds your judgment of what these things might actually do. And a lot of the time you don't need to understand these things so deeply.
Oftentimes people who work in an industry will tell you directly what a thing will do. Even without fully understanding how the machinery operates, they know how the process, how the work, is organized, and what a change to that organization will do to them.

MM

What was your own process towards becoming a Luddite? Did you always think that way and feel that way or was it a gradual change?

T

I used to fall for the propaganda view on Luddism, like, these are people who are just scared of technology. And I'm an engineer by trade, I'm a computer scientist. I love technology, I really like digging around with computers and all that kind of shit, still do, so I never called myself that. But for at least a decade, if not more, my focus has been very much on looking at technology like: okay, but in this world, not in fantasy world, but in this world, what will this do? What is the effect of this? And at some point I saw a talk by Dan McQuillan, or whoever, who talked about the Luddites and what they actually were about, what their perspective was, and I'm like, okay, that's me. I'm a shitty textile worker, but their process, their way of looking at the world, that's me. And then I realized, maybe I should call myself that, maybe that's a neat provocation. And I think that was basically at the same time that more people came up using that term way more affirmatively. And Brian Merchant's book Blood in the Machine came out, which gave, I think, more people an insight into Luddism as a useful way of looking at societal change and the way that we introduce technologies and what that does. I think I'd been doing it for two or three years by then. Before, I really thought, okay, Luddites, those are just weird people who hate tech, like, they don't get vaccinated or whatever, and I thought that was dumb. But then I realized: no, no, they had a very political view on tech. That's what they cared about. The Luddites, people often say they just destroyed the machines and they destroyed whole factory halls or whatever. But they specifically told the owners: we will destroy this power loom, this one thing will die. And then they destroyed this one machine; all the other machines they didn't touch, because those didn't affect them in the same way. And I've been developing, I'm not the world's go-to Luddite expert, but I've been using the last few years to try to reframe Luddism a bit, at least in my mind, to be less of a defensive movement, because it often is. Like: okay, we need to destroy XYZ, whatever hurts us, whatever is bad for us for whatever reason. And that has value; sometimes you just need to stop something. But I think also: how can we use that kind of thinking in a more forward-looking way? Because it's not just about the tech we don't want, but also the question of what we do want, and what role does technology maybe play in that? Can it play a role, and what role could that be? And do we have the technologies for that or do we not? And if we don't, how do we get to them?

MM

So, thinking about the purposes of a given piece of technology in the present, like the Luddites did, and swinging back around to AI: what is the purpose of AI today?

T

That is a very complex question, because I think the term AI itself clouds more than it clears up, because AI is a lot of stuff. Like, a lot of the stuff that today is labeled AI is an Excel macro, or it's people in India doing some work, or it's what I was taught when I was still a student as signal processing. Neural networks are not a new invention; I was taught that at university, and that's more than 20 years ago. So AI is kind of a weird term in that regard. But the way it's mostly used these days in media, when some old man says things or Elon Musk says things or whatever, I think the purpose mostly is, and I'm kind of paraphrasing Ali Alkhatib here, I think the purpose is mostly to shift power to centralized infrastructures. Because these AI systems, especially the big AI systems, are built in a way that you have to rent them. What you do if you integrate OpenAI stuff into your infrastructure, into your processes, whatever, this means that you will pay rent to them forever. That's the actual purpose. That's the dream that they have. Because all the investment, if you look at it just from a purely financial standpoint, there's been a fuck ton of money being poured into this. Where does that return on investment come from? And you can only get that kind of money back if you get everyone to pay rent forever, jacking up the rent every month or every year or whatever. And the other thing is this promise that you don't have to pay people anymore. I think that's often more prominent, but if you look at the purpose of AI for the people providing it, for Microsoft, OpenAI, their purpose is: we need to get you to a point where you bind yourself to us forever, because we have the AI that you need for your processes, and you fired other people because we told you that you don't need them anymore. So I think the story of, you don't need to pay people, you don't need the designers and the computer scientists, the programmers and whatever, that story is basically the trick. The trick is telling people who don't know better: you don't need to pay people anymore. Who likes paying people? No CEO likes paying people. So they really jump on this, like, okay, I just buy this thing, not realizing: no, you made yourself fully dependent on a handful of companies that you might also already depend on way too much. Like, the business I work for also runs on Microsoft's email and Office 365 thing, and every year they just roll the dice: this is going to be more expensive now, because, fuck you, where else are you going to go? And this will just happen way more, and with things that are way more integral. Because this is not just, you have a problem opening a Word file, it looks a bit dodgy if you open it with a different program. No, this is: you replaced the people who knew how your business works with infrastructure by OpenAI or by Microsoft or Dropbox or whatever. So now you're fully dependent, and if they say you will pay double now, you will pay double. And a few software developers have seen that recently. There was this code editor called Cursor that some people really like: you pay a few bucks every month and then you have LLMs by Anthropic or whatever trying to generate code for you. You basically give it a prompt, I need an app that does this, and it tries to generate an app for that. Let's not talk about the quality of that, different story, but okay, it kind of works. And then Anthropic realized: this costs us a lot of money. So Anthropic raised prices.
So Cursor suddenly had to raise prices, and not by 10%: I think it was $20 before, then it was $60, and I think they are in the triple digits now every month. And you're still rate limited all the time, because it's super expensive to run this shit; it doesn't make sense economically. But that's the world: every business that wants to integrate LLMs and AI into their processes needs to look at Cursor and needs to look at what's happening there. It's cheap because there are subsidies. It's cheap because it's the whole Silicon Valley business model: we give it to you for free because we will get our money in the end. We will addict you to the service and then we will get our money. And we see that playing out currently. So I think that's what this thing is for. It's trying to set up maybe the biggest shift of agency and power to centralized infrastructures that we've seen so far.

MM

So it's almost like there are three levels to the way that just the phrase AI is presented to folks in general. There's AI as tool, as piece of code, as machinery, not exactly a physical thing, but a thing that works in a certain way. Then there's the underlying or explicit promise of that to CEOs, to managers who don't necessarily understand the tech: you will be able to fire half of your workforce. And then underlying that is the motivation of the folks building the LLMs in the first place, which is to rent-seek, which is to build a moat and collect, and collect, and collect into the future.

T

Yes, and there are even more levels to it. Like, there's the very aspirational thing that you see being sold to politicians: you need to put AI into everything because it will generate scientific discoveries and all these things. The funny thing about AI, because it doesn't mean anything, anything is AI if you want to call it that, is that you can project anything onto it. A lot of the time when you read what Sam Altman says, like a few days ago he said, yes, in a few years people will study for some jobs that use AI that we don't even know yet what they are, and they will become very rich through it. This is hogwash; this means nothing. Like when he says AI will soon solve all the physics, what does that even mean? That is not a meaningful statement, but it sounds great, because it sounds like: this is the most powerful thing we've ever found. And it has this power to fulfill any of your dreams, any of your wishes, with the click of a button or the entering of a prompt. And that is very useful not just for the technology providers, but also for this whole industry of consultants around it. At least in Germany, I am in Germany, and Germany always feels, for correct reasons, that it kind of missed the boat on the whole digital turn thing; when a lot of other countries did a lot more work with that, we didn't. And so I think these days they often hope that by using AI, by just throwing AI at the problem, they don't have to do the hard work of figuring out how digital technologies can actually work in their specific context. If you've ever done a digital transformation process: A, it's a very social process, because you talk to the people doing the actual work. What do you actually do? How can we help you with this? Why don't you accept the system? And it's a whole thing, and that's hard and it's very error prone and it takes a long time. The needs of the people doing the work and the needs of management are very different, so it's a really tough thing to crack. And now AI comes in and says: we don't need to do this work. We can just throw AI at it and everything will be great. And that is of course garbage, but that is, I think, another way that AI as this narrative works. It's always this promise of getting meaningful returns without having to put in the work. You don't need to learn to play an instrument; you can just generate songs. You don't need to learn how to structure your thoughts; ChatGPT will write your essay, and it doesn't cost you anything, a few bucks a month. But the promise is always: you get something and you don't have to put in the work, you don't have to invest in any meaningful way, shape or form. And that is very, very promising if all you look for is the outcome, the output, the quantifiable thing, the thing that you can sell. With people, you can produce a thousand units a day; with AI, you can produce 10,000 units a day. That's better, even if the units are worse, because you can sell 10,000 of them now and not 1,000. So I think we see this narrative of: AI is so powerful that you get anything you can think of, basically for free, if you just know the magic terms. That structural argument, you see it everywhere. That's why you can argue you don't need to hire people. That's why you can argue you don't need scientists anymore, everything will be great. That's because this narrative is so powerful. And by accepting this narrative, you are now forever renting shit from Microsoft.
And it's a weird dynamic. In a society, and again, speaking from a German perspective, it might be different in other parts of the world, but I think a lot of the time we've run out of utopias and run out of positive views for the future. Like, where do we want to go? Into climate change, building more cars? That sucks. And here's this thing that does anything, anything you want, basically for free. It's a few things happening at the same time that support each other. Like, this lack of vision; the economy, at least in Germany, getting way more unequal, very rich people getting way richer, many people not able to afford rent anymore. And then there's this AI that some people hope can at least keep things going, for whatever reason. And I think we see that in many places of the world, where a few structural things that were there before this AI hype hit laid the groundwork for this AI thing really hitting as hard as it did. And in connection with the tech sector really having nothing else to offer. Like, what did they come up with after the smartphone? Thinner iPads, and even thinner iPads. But everyone has an iPad now who wants one; what's the next thing? Blockchain wasn't it, the metaverse wasn't it, now it's AI. But none of those things have been meaningful, I mean, maybe not nothing, but I think there's very little meaningful change that people feel. But we kind of tied our wagon to tech, as in: they will drive us to the future, they will pull us there, if we just buy the things that they produce, because they produce future and progress and all that. And that promise hasn't been true for a long time now. So everyone's kind of desperate, because where do we want to go? And again, if we really want to have this conversation, it gets messy and it gets very annoying, and you have to talk to people you don't like and who believe different things than you believe. Or you can just do AI, and it's for free, and it just creates the future for free. I think this is a very long-winded way of saying: we're all falling for this because it's cheap. And all the other options we have are hard and annoying and suck in a way. And this is the one thing that promises it won't suck. It will even solve climate change, somehow, we don't know how, but it will magically make it go away. It's a very childish perspective on things: mom and dad will come and everything will be fine. And for kids, that's a reasonable approach to the world. But for us, sadly, we're the grownups, no one else is coming, we have to fix it.

MM

I have way too many follow-ups written down, so I just got to pick one and dive in. It seems like maybe this has always been here, or at least for the last few decades, but there's a growing divide between running after a utopian future on the one hand and addressing our messy present on the other. I wanted to highlight one beautiful quote from your most recent blog post where you talk about the importance of friction in human life, and you end it by saying, speaking of utopia, the utopia of AI is the dystopia of never being touched by anything. I wonder if you can dive a little bit more into that and tell us why friction and nuisance and all these other things matter, in a way.

T

Whenever we talk about AI, it's, as we already talked about, this promise of: this doesn't cost you anything. It's cheap. It's there. It's just a click of a button. That means there's no friction. Everything is just perfect for you, streamlined. You get what you need. That's the promise of AI. It's also the promise of: you have a hard time connecting to people? AI can generate friends for you. That's Mark Zuckerberg's vision for the future. You don't talk to people anymore. They suck anyways. We'll generate a handful of chatbots for you and they will tell you that you're the greatest person in the world. Again, no friction, because every feedback you get from these systems is: you're great, you're smart, you're the best. You're just the best. That's not the experience of the real world. If you interact with other people, they will sometimes tell you that you're a dipshit, and a lot of the time you probably are, because we all are. This friction is also what allows you to grow. I never delete old blog posts. If you go back into my blog, I've written some dumb shit, really dumb shit, not, like, Trump dumb shit, but dumb shit, stuff I wouldn't agree with today. But having said this, it shows me something. Sometimes when people ask me, hey, you wrote this thing, do you still believe it? No, because I learned something, because I talked to people about these things and I've grown. I think friction is a way of us interacting with the world. When we interact with the world, whether it's talking to other people and their needs and their wishes, or whether it's learning to play the guitar, there's always friction. You always have to work against something, and all good things in life come from working not just against something, but feeling the world resisting you. And that's not resistance in the sense that the world doesn't want you to get there. It's the resistance of you trying to learn something. It's the resistance of you getting better. If you feel resistance, that means you are improving. That means you're learning something. That means you're being part of this world. You're in it. That's the thing you feel. Walking around outside, of course friction is necessary to walk, because otherwise you can't move, whatever, but walking around and having to make space for other people who also want to use the road, that's you feeling friction. You feel the other person being there. You're on the same level with them. I think for human beings, that is very important, and AI promises that you don't need to have that in your life. You don't need to organize with other people to build the next killer app, because it's just going to be generated. You'll be home, on your computer, prompting shit; it will generate something, and that will magically generate money for you, and you don't have to talk to anyone, be there for anyone. You don't have to do support or anything. Everything will just be done by AI, and that is super isolating. The promise of AI is that you never leave your home anymore, and that is, I think, a very sad state of affairs. We have this loneliness epidemic that many people talk about a lot, and that's true. People feel lonely for a good reason, because many people are very lonely these days. But where does that come from? Does AI, the way it's structured, help with that? It's like: yeah, just stay home, we'll generate social connection for you, not to an actual person but to a sycophant. Is that helping anyone?
I don't see that but that's the role of friction, how I see it. I think you need to feel this. That's the only way to feel the world. A world that's frictionless is a world you can't feel. You can't touch and you can't feel other people and see them as equals if you don't feel the resistance of them also being there and their needs being different from yours.

MM

Beautiful, let's jump into some application type stuff here. It was very interesting reading through one of your recent blog posts about something I hadn't known about before: the difference between generative AI and discriminative AI. Can you briefly jump into what that difference is all about?

T

Yeah, it's basically just the application of the neural network. AI, as we use the term today, means neural networks, very large neural networks, sometimes called LLMs or whatever other term you want to throw at them. These are huge statistical models. Model is an overused term, but just think: a lot of statistical data, just a lot of numbers, and if you put them into a little bit of computer code, it can produce outputs. You give it an input, it then generates an output. These models are trained. LLMs are usually trained on the whole of the internet and whatever Facebook could steal somewhere, whatever. OpenAI also steals stuff, of course. They all steal stuff. Let's not be unfair here. Now you have a model that has the patterns from the training data embedded into it, and you can use it in two ways. The way that most people look at it today is generative. Generative says: I enter a prompt, write me a story about a unicorn as a firefighter, and then this model starts generating the most probable continuation of the phrase that I gave it. It will generate something that looks like a story, and it doesn't need to be just a story. It can also be program code. It can be whatever. It can produce an image, a video, that's a thing these days, music, whatever. You put in a prompt and it will generate a meaningful artefact that you can supposedly use. Discriminative AI is more about classification. That's a thing we've been doing for a long time. Think: you have a whole bunch of photos and you want to know which photos have a cat in them. Then you need a neural network that processes each image and says: this image has a cat, this doesn't; this has one, this doesn't, this doesn't. That's discriminative. It doesn't give you the solution. It's more like a building block. It takes often very complex data, photos, audio or whatever, and creates a categorization, a more abstract look at it. I put in a photo and it tells me: is there a cat in it, yes or no. That model has two outputs, a yes output and a no output. That would be very specific. It's a cat model. And discriminative AIs, we've been using them, again, for a long time. These things are built into a lot of stuff. If you do audio processing, a lot of the noise filters that you have built in there are built this way, because the AI, which we didn't call AI a few years ago, basically tries to detect: this is noise, this is fine, this is noise. Whenever it says this is noise, it cuts that out. I think there are a lot of very clear uses for discriminative AI, because it's always embedded in domain knowledge. Let's stay with audio processing. I'm training a model that can do great background noise filtering, or whatever. In itself, it doesn't do a lot, but if I know how that works, I can use it in my audio processing, thinking: for this part of the interview I don't want the background noise because it's ruining the experience, or whatever. It's still something that I work with. That's more of a tool. Discriminative AI systems are more like tools because they have a definitive use. This is a model that says: I can filter out noise for you. You put in audio and I put out audio without background noise. That is a very clear purpose that you can think with. Generative AI systems these days don't have that kind of purpose, because they can supposedly do anything. They're presented as if they do anything. ChatGPT can program for you.
ChatGPT can write a story for you, can write your essay, can do your homework. Supposedly it can do the planning of the production steps in your factory. It can do anything. It's not a specific thing that does a specific thing. Karen Hao recently released her great book, Empire of AI. She always calls them the everything machine, and that's what they are. They do everything. And discriminative AI needs to output something that can be meaningfully used. For example, people who use Google Photos or whatever, Google Photos does that: you can look for mountains and then it shows you photos of mountains without you having ever tagged that. That needs clearly structured outputs that you can really use further for that purpose, and I think that's the big difference between these two systems. The discriminative ones try to generate abstractions of very complex and messy data that are then used in other kinds of processes or services. Generative AI just generates something. It's not as purposeful and not as domain specific. Generative AI always claims that the thing it puts out is the end of things. It's the solution. Discriminative is more like: this is a thing that's part of your solution, maybe, but probably you need to do more.
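To make the distinction concrete, here is a minimal sketch (an editorial addition, not something from the conversation; all names and data below are invented). It shows the two "call shapes" Tante contrasts: a discriminative model maps a complex input to one of a few fixed labels, while a generative model maps a prompt to an open-ended artifact.

```python
# A hedged sketch, invented for illustration, of the two call shapes
# described above. A real system would put trained neural networks behind
# both functions; here only the interfaces matter.

def detect_cat(image: bytes) -> bool:
    """Discriminative: complex input in, one of a few fixed labels out."""
    # Stand-in logic so the sketch runs; really: image -> network -> yes/no.
    return image.startswith(b"CAT")

def generate(prompt: str) -> str:
    """Generative: prompt in, open-ended artifact out (story, code, image...)."""
    # Really: prompt -> network -> most probable continuation, token by token.
    return prompt + " ... [most probable continuation would go here]"

# The discriminative output is a building block other software can consume:
photos = [b"CAT on a sofa", b"DOG in a park", b"CAT outdoors"]
print([detect_cat(p) for p in photos])  # [True, False, True]

# The generative output is presented as the end product itself:
print(generate("Write me a story about a unicorn as a firefighter"))
```

The fixed label set is what makes the discriminative output something other processes can build on; the generative output is pitched as the finished product, which a human still has to judge.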

MM

I think a lot of the discourse around this ambiguous term AI that we're using these days tends to be around generative AI and what it can feasibly do for people. In the post where you draw the distinction, you argue that discriminative AI actually has a fairly significant role to play in abstracting the world and turning the world into data. I'm hoping that you can talk a little bit about what happens when we do that, when we turn the world from the messy experience that we encounter day to day into something that can feasibly be represented in a spreadsheet.

T

Just making that clear, that's not my utopia or the vision I want to have.

MM

Sure, of course.

T

It's more like: what will this stuff actually be used for in five years, when the bubble has collapsed? A lot of the time, when business people want to automate their workplace or whatever, they have the problem that they don't have the data to do it. Sometimes you can acquire certain kinds of data, like processing times or whatever, but a lot of the knowledge is either in people's heads, and it's hard to get out, or it's just hard to quantify. There's no sensor built for that. I can buy a sensor that tells me how bright it is or how hot it is. I can easily measure how hot a certain area is, but it's very hard to take more complex measurements. A lot of the time, if you want to fully automate or create more transparency in your processes, you need that kind of work that these days people do. You have a person who just knows the process in and out and can tell you: right now it doesn't feel right. And they can't tell you why, but they are right. They just can't tell you how, because they've spent 30 years building this embodied knowledge about the process and about the product. I think we will see a lot of discriminative AI trying to be applied in that way. We have this person, very experienced. Can we somehow get them to train an AI that works well enough that it generates data that we can then use in our ERP system, in our planning system, in our whatever system, so we don't need that person anymore and are not as dependent on that person and paying them a salary anymore? That is a very clearly testable thing. With generative AI, it's very hard to test things. You can generate something, but then you still need people who tell you how good it is or not, whatever. With discriminative AI, it's easier to test the quality of the system because it's so domain specific. You build this domain specific thing. It detects cancer in x-rays, whatever, lung cancer in x-rays. You can just test it. You have people evaluate the thing, have the software evaluate the thing, and see how good a certain system is, and at some point you will say: this is good enough. Now we'll just switch to the automation. I think we'll see a lot more of that, just because the technical infrastructure that you need to run generative AI, OpenAI's generators, is the same that you can use for, for example, automated image processing. We talked about that earlier, a discriminative model that tells you if there's a cat in the photo or not. You can use that on video, and it doesn't just need to detect cats. It can also detect people, maybe faces, maybe people's faces, maybe your face and my face. Someone might be interested in knowing where people are at all times. Now you have the technical infrastructure and the technology to do that. Now you have big data centers that will no longer just generate bad prose or essays for people who don't want to do their homework, but will make sure that any video camera can detect any person on the planet. Which might not be people's utopia, but that's the infrastructure that's left. The technology works exactly the same, and all you need right now is photos of people with a name attached to them. Clearview AI was a thing a few years ago, but there's a whole bunch of these data stores already. Meta has a lot of data about people and knows how people look.
Now you can train this model and rent it out to whatever fascist government you want to rent it out to, because they super duper need to see: this person was at that demonstration. Let's see where they go and let's see who they meet with, and then let's send ICE there, or whoever you want to send there. I think that is an actual danger of this stuff being there, because at some point these data centers will need to make money. Someone will need to make money, and who has the most? Governments. That's why you see a lot of military tech being pushed towards AI, because that's a big pile of money that you can just hit with a stick and have some money fall out. And police also. Palantir will of course offer that stuff. Their software is a bit garbage, but this is the obvious next step to build, and it's not hard to build. It's costly, but it's not complex. With Anthropic at some point maybe no longer having money, OpenAI maybe no longer having money, these data centers need someone to rent them, and there's a very obvious target audience for that: the military and the police. I'm fun at parties, I know.
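As a concrete illustration of the expert-to-automation pipeline Tante describes here (a hypothetical sketch, not anything from the episode; the training rule, field names, and the 0.95 threshold are all invented): capture an expert's verdicts as labelled cases, train a domain-specific classifier, measure it against held-out cases, and switch to automation once it clears a quality bar.

```python
# Hypothetical sketch of "train on the expert, test, then automate".
# Everything here is a stand-in for illustration only.

from typing import Callable, List, Tuple

LabelledCase = Tuple[dict, bool]  # (process readings, expert's verdict)

def train_classifier(cases: List[LabelledCase]) -> Callable[[dict], bool]:
    """Stand-in for real training; learns a trivial threshold rule from data."""
    # A real system would fit a neural network here.
    cutoff = sum(c["temperature"] for c, _ in cases) / len(cases)
    return lambda c: c["temperature"] > cutoff

def accuracy(model: Callable[[dict], bool], held_out: List[LabelledCase]) -> float:
    """Domain-specific output means quality is mechanically measurable."""
    return sum(model(c) == verdict for c, verdict in held_out) / len(held_out)

expert_labelled = [
    ({"temperature": 71.0}, True),   # expert: "doesn't feel right"
    ({"temperature": 52.0}, False),  # expert: "fine"
    ({"temperature": 68.0}, True),
    ({"temperature": 49.0}, False),
]
model = train_classifier(expert_labelled[:2])
if accuracy(model, expert_labelled[2:]) >= 0.95:  # the "good enough" bar
    print("switch to automation")                 # ...and stop paying the expert
```

The testability is the whole point: unlike a generated essay, a yes/no verdict can be scored against the expert's own judgments until someone declares it good enough.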

MM

It's important stuff to think about, though. The potential outlook for the next few years involves a story where these data centers, which always could have been useful to governments looking to surveil large groups of people, were not built before because the case could not be made for them. Now they are being built under the promise of undercutting large amounts of labor. That's just not going to happen, because they're not quite capable of doing what they've been promised to do, but as you said, once the infrastructure is there, the people who hold the deed are going to have to make up for their losses in some way, shape or form, and that's one of the most ready customers out there. It's an interesting scenario where the government of the United States or Germany or any particular place may or may not, depending on the place, own that infrastructure itself, but they will in fact also be a renter in the equation, sending money to the people who hold the lease to the data centers themselves.

T

Exactly, because they also don't have the data. Building a data center, that is doable, but the word data is not in the term data center for shits and giggles. It's important for these tasks. You need structured data sets to do that kind of work, and who has that? Most governments don't, probably for good reason; legally, they maybe can't even build these kinds of databases, because there's data protection, privacy laws, whatever. Renting that is always a different ball game. You can always make certain cases, and even if it's not legal in the EU, for example, where I live, maybe an American service will provide it, and then you can always argue: oh, but it's for security, so we need to rent this thing because it's the best in the world, or whatever. We've seen that with the dot-com crash. We had this dot-com bubble where a bunch of companies grew really big and then imploded, and what was left was infrastructure: connections, cables that could route packets. What happened afterwards was Google and Facebook and the internet that we know today. That could only emerge because the infrastructure was there. I don't believe that this AI hype can keep going forever, even though it can go on for a long time, because Microsoft has more money than God, but at some point someone needs to make money, and I don't see an actual case there. If OpenAI, Anthropic, whatever, implode or reshape into becoming military contractors, what's left? Who can use that infrastructure? Who can pay for it, and who has use cases for it? Analyzing your holiday photos, that's not bringing in enough money to pay for these things. I'm hoping that someone else can come up with something very useful that we will put these things to, or we will just shut some of them down. But people invested in these data centers. They need a return on investment. That's, again, looking at technology in present tense. These data centers, some have put a lot of money into that, and they need to get a lot of money out of that. That's how the system is set up, and what's the path that they can take to get that money out? That's the only one I reasonably see. I'm not happy about that. I'm not happy about that, but-

MM

You mentioned that de-computing might be a preferable option. That's the dismantling of a data center rather than finding a heinous use for it?

T

De-computing is not just getting rid of all computers; it's more purposefully looking at how much compute we need, and whether more compute is always good, because that's the current modus operandi: more data centers is better, because more is more good. But the question is, is that true? Data centers have massive impacts on the world. They take a lot of power. They take a lot of water. They take up a lot of space and concrete, and they create e-waste and noise pollution and all these kinds of things that are harmful. Sometimes you still want to do harmful things. The ambulances here in Berlin, they don't run on electricity. They run on diesel and maybe fuel, but mostly diesel probably. That is bad for the environment, but still we pay that price because we want to save people's lives. That's a meaningful way to look at technology. We are aware of what it does. We hope we can do it better, but right now that's the best way we can handle it, and that's the way we should look at computing as well. What problems that computers can reasonably help us solve don't we have the compute for? Then we can evaluate the impact of that, the water needs, energy needs and whatever it has, and then we can make a decision: this is still meaningful and we need this. That's fine. That's a way of meaningfully looking at technology. That's not how we look at things today, at least not in the tech sector. It's always more. More is better. And that's not how we look at other things in the world. Maybe in capitalism we do look at a lot of things that way, more is better, more production, more whatever, but maybe in this limited ecosystem that we call our home, that's not the best approach to things. Again, I'm not saying dismantle all data centers everywhere, they all need to die. I think compute is important, and I think a lot of the communication infrastructure is very useful for us, and many processes can also be done in a good way digitally. There's a lot of value to be had there. But when is it too much, and what's the purpose of these things, and what is the effect of these things? That's the whole idea of de-computing, and currently you can't apply it without talking about AI, because that's just the gorilla in the room that we can't get around. But I think we should apply it more structurally. As I said, I like computers. I'm a computer scientist. They are cool, but how much of them do we need, and under which conditions do we want them produced? There's a lot of child labor in there. We don't know how to deal with the e-waste of these things. What are the actual costs of these hobbies, of digging around with computers? Are we actually willing to pay the price if we have to pay it ourselves, if we can't ship the shit off to Africa or parts of Asia to take our e-waste and poison their kids? If we actually make the calculation: what does this cost us as humanity, as earth? This has a cost for earth, and we need earth. That's our spaceship. It's the only spaceship we have. Can we keep this running, please? That's basically the whole train of thought behind de-computing. What are the effects of these things, again, in present tense? What does this do? Eric Schmidt recently said, and Sam Altman also said at some point: it's not a problem that AI takes so much energy, because it will solve the climate crisis in a few years. That's not present tense. That's not realistic.
How? Tell me how, because I just see it using a lot of energy and infrastructure and creating a lot of negative externalities. Can you justify that? Just pointing and saying it's magic is not a justification, at least not for grown-ups. But I think that is, in a way, again applying this lens to computing as a whole: what does this do for us? Is this really generating utopia? And this is not just specifically an AI thing. Is a world where everyone just learns to hate themselves because they look at Instagram all day, is that the world we want? That doesn't mean all social media is bad, but the way it's structured might be problematic right now, so how could it be better? The same applies to the technologies that are sometimes put under this AI umbrella. The idea that you can have cheap translation is super useful at times. I speak two languages, English and German, both mediocre, but I get by; with other languages I don't know shit. Being in another country and being able to have a conversation, or getting sent an article that's in Spanish or Chinese and at least getting an idea of what it is, is sometimes super helpful. There are use cases for these technologies, but can we balance them out with what they cost us? And not just what they cost us environmentally. It's also a case of: if we say you can generate great images, and no illustrator, no designer has work now, is that an acceptable cost, that people can no longer work in their trade? Is that fine, and what can we do about it? We could also just decide that everyone who's a designer now gets, I don't know, 20,000 bucks each month just to do whatever they want, and OpenAI pays for it. Let's go with that. But that's not what I'm seeing currently. I think that is a perspective shift that I see as one of the few paths forward that can get us anywhere where we might even increase people's happiness and wellbeing and save this planet.

MM

That's beautiful. I'm glad that you mentioned that, because I wanted to circle back to something you said way up at the top of the interview. You've been trying to formulate a more forward-looking Luddism. You started to speak to that already, but what else is on your mind? What other pieces of a forward-looking philosophy might there be? What's come to your mind so far?

T

A term that I've been thinking about for a long time now, aside from friction, is care. Often we see care work as taking care of the sick, the elderly, kids. That is all super meaningful work, but I think we also need to care for other things and look at what care does. We often look at care as a cost factor. You need to care for your car; it costs you every year, repairs and all that kind of shit that you need to put into it, or your house or whatever. Everything decays, and this long defeat against decay can only be fought by investing in maintenance and care for things. I think that's the wrong understanding of care, because I see care as productive. Working with objects in the world, you maintain the building you live in, your apartment, for you living in it, realizing what you need. It's not just: this thing is broken. It's: this isn't right. It can be because it's broken, but it can also be that it doesn't fit your needs anymore. Maybe your family situation changed, or your cat needs something extra. That's also caring, in a very small way, about your living environment. We can also apply that to our neighborhoods, to our cities, to our countries, to the world. How can we care for that? How can we take care of one another? What is the world that we want to live in? For me, that is a very Luddite perspective. The Luddites didn't just say: my life gets worse through these machines. They saw themselves, the term didn't exist back then, but as a class of people who were disenfranchised, realizing that we share this planet but we also share a lot of needs. Maybe it's good to have this conversation about what we need. I think that is a very forward-looking way of using Luddite thinking. For example, coming together in the neighborhood. The neighborhood I live in, in Berlin, has a problem with trash being on the streets everywhere. There are neighbors meeting every few weeks. We just go through the neighborhood and pick up the trash together. We don't really know each other. It's just a thing that gets announced on some WhatsApp group or whatever, and everyone comes who has time and just picks up trash. I didn't meet my friends there. These people are not my friends. They're my neighbors, but we do something together for all of us. That's good. We need more of that kind of communal doing something, taking care of the world together, because again, it creates friction. It takes away part of your day, and you need to touch dirty things and sometimes needles and all that kind of shit that's annoying, but it is good to do that. It's good for your soul to do that together. I think there is something there. I can't fully phrase it into a coherent ideology yet, but that's where my head is mostly. How can we build more of that, create more of that? A lot of this stuff, and especially with AI, is about the opposite. It's about isolation. It's about you being at home. Covid taught people that; it was the weirdest thing. I love going to concerts. I love being yelled at by people for an hour or two. These days, people often don't show up even if they have concert tickets, because: I'll stay at home, and then I'm going to look at the video that someone posts on YouTube later, whatever. We are losing this communal being, being together with other people who are not our friends or family but just people, and having to deal with them and being together with them in some way, in a concert or cleaning up your street or just in the subway.
Elon Musk hates public transportation because he has to be with other people. He has to acknowledge their existence. That's why he wants to put everyone alone in a Tesla. I think that's, for many reasons, the wrong path. The right path is to be together, to form community. That doesn't mean they are all friends. It just means you share space and you acknowledge that other people are there who have needs and wants, and together you figure out how to do the best for everyone. There is some utopia in there, I hope, because that's where I'm looking right now.

MM

That's wonderful, less vibe coding by ourselves and more picking up trash together. I love it.

T

Maybe coding together: someone can't code, but they can tell you if this makes sense. You can tell them what you want to do, and then they tell you: this is dumb shit. And maybe they are right, because you just look at it as a coder. I can build this. Should you? Maybe you shouldn't. Maybe this is dumb. Maybe you can use your time better in other ways. Creating spaces where these connections can happen, I think we've lost a lot of these. Sometimes they were called third spaces. People used to go to church, and everyone went to church. There were rich people there, there were poor people there. People met, even though it was still somewhat selective, but we lost that. If you meet people, it's at work, so they are your peers, maybe they come from the same background, whatever, or it's a selective thing: they like the same sport that you do or whatever. This thing where you meet people just because you share space and they are different, I think that is something we haven't replicated in online spaces, because online spaces are always built around giving you exactly what you need. You find exactly the people who are like you, your peer group. That has value. As someone who grew up in the countryside in some little village, I was the odd one out. It was great to have the internet and meet other people who were weird in the same way I was. But it's also important to be with people who are not like you, and to have that in your daily life, to realize: my life is not the norm. It's not how people live. It's how I live, how I choose to live, maybe. Other people also have lives and maybe different needs, different problems, and probably their problems are yours as well, even though you don't realize it.

MM

Very nice, you've written in one of your blog posts, everything that an LLM generates is a hallucination. Could you speak briefly on what you mean by that and then also dive into a bit of the impact of generative AI in particular on research and education?

T

We all know this industry term hallucination for when an LLM generates something that isn't true. That's often framed as a bug in the systems that hopefully they'll find a way to solve. I think that's the wrong way of looking at LLMs, because LLMs don't know about the world. They don't live in this world. They have no body. They have no experience of the world. They are stored patterns of language: after these three words, usually this word follows. That's basically what's in there. It's a bit more complicated, but not a lot. What these things generate is just plausible-sounding sentences given a prompt. They have nothing to do with the world. It's a misunderstanding to think that an LLM says something about the world. It generates a text. It generates a string that looks like something a human could have written. This is why I consider all of the things they say hallucinations. If we look at what the term hallucination actually means: you hallucinate if, for example, you eat a lot of mushrooms and then you have impressions, sense data, coming in through your body that don't correspond to anything in the real world. You see Bigfoot in front of you even though nothing is there, but you see him because your eyes tell you that, because some form of drug had that impact on you. It doesn't need to be drugs; other reasons for hallucinations also exist. That's exactly what LLMs do. They generate a thing that has no connection to actual data about the world, that has no connection to the world. Everything they do is a hallucination. Sometimes these hallucinations, because of the structure of human language, are factual. They sometimes generate a sentence that is actually true, but that is not what the LLM is about. It's not a tool built for that, because it has no connection to the world. It's all hallucinations, some of them true, a lot of them false. It's something that the philosopher Harry Frankfurt called bullshit. That's a philosophical term. In his book On Bullshit, he defined it like this: if you lie, you have an intent behind it. You lie to get something, or to trick someone, or whatever. There's purpose. You want to say something that is not true, for some goal. Bullshit is if you say something without any regard for whether it's true or false. You just say something, whatever. That's exactly what these systems do. They are the philosophical definition of bullshit generators, because they just generate something that looks plausible. Now put that into education, into research, into whatever; a lot of the pitch is, we need to teach kids to use it because blah, blah, blah. And what that does is, A, it promises you that you get a result without having to put in any work. We talked about that earlier. You can get an essay without having to put in any work, anything. And you also might get things generated that are not true. Of course, anything that these systems generate always comes with this asterisk that says: hey, you need to check anything we generate, because it might all be bullshit. Which is a weird thing to say. We don't accept that anywhere else. If I order milk at the supermarket and they deliver it to me with a sticker attached saying, this might be full of rat poison, better check, we say: what the fuck are you doing? With AI, that's fine. That's exactly what they do. And it's cool, because it's the future. We give that to kids, or even to researchers, whatever. Will they check it?
Will they have the ability to check it? If I have to write an essay on a book I haven't read, because it's annoying, it's hard, I need to challenge myself, I don't want to do that, can I check if what's in there is true? I can't. I literally can't. I don't have the skills, and that's the most obvious example. If the argument gets more complicated than what I'm used to, can I check if it's true? No idea, but I can hand it in. I can get a good grade, maybe, if no one checks or no one has a conversation with me about it. What we are giving people is a very quick way to generate something where they have no idea whether it's right. Studies also show that they don't retain any of the information. Studies have shown that if you have people generate essays with AI, they often can't say afterwards whether a sentence is from their own essay, which means they obviously didn't check it. And even if they checked it, even if you forced them to check it, they don't retain it, because it's not their work. They can't find it in the work. I think the effects on cognition of these systems, which are being pushed as, you need to use this for research because it's so much faster, you need to use this for learning because it's so much faster, the effects are massive; we see more and more studies supporting this, but it's also very obvious. With human skills, it's very simple. There's this term that most people have probably heard: use it or lose it. If you don't do something a lot, you will get worse at it. My handwriting is garbage because I don't do a lot of it. Now people can say, you don't need to write, you have a computer, it's fine, but my handwriting is worse. That's just an objective fact. It was never great, but it's horrible right now. Anything you don't do, you get worse at. Now you can say, we are also not good at hunting our own food anymore, and that's OK because we don't need to. We built a world where we don't need that, and that's true. We sometimes find technologies or processes that allow us to no longer need a certain skill. That is actual progress, sometimes. But if we apply that to our cognition, to the way we reason about the world, that's on a different level. Saying, oh, I no longer need to make my own shirts because I can just buy them, is on a wholly different categorical level than, I am no longer able to structure my thoughts. That is fundamentally different. In the abstract, you can say it's just a skill, but cognition is what we are, in many ways. This is how we are able to form political opinions. This is how we're able to be part of the world, to understand it, to figure out what we want, what we need. This allows us to even formulate the problem. What these systems create is what's often called epistemic injustice. You take away people's ability to even understand what's going on, to acquire knowledge, to acquire understanding, and to formulate what they need and want. Especially for democratic societies, that should be a no-go. If you see a technology doing that, you should say: this must never happen. And I know some CEOs might disagree, but we don't send kids to school to make them good workers. We send them to school so they are able to understand the world and understand what they want, to make political decisions later in life, to shape society.
A byproduct is that they might also be able to work, but the right to education comes from the understanding that you need certain mental tools and skills to be able to make sense of the world, and we need to give you those. What AI, in the way it's often presented, does is undermine exactly that. It's not about getting rid of a skill that no one needs. We're all going to type anyway, handwriting doesn't matter, the argument goes, even though studies show that if you write something by hand you retain it way better than if you type it. But even if we argue that we don't need handwriting, so it's fine if that dies, that doesn't apply to you being able to structure your thoughts, to put your beliefs onto paper, whether physical or digital. That skill is important not because you need to write an essay in school about a book you don't care about. It's important because it lets you tell others what you need, what you want, and why you're maybe not happy with the way the world is running right now. It enables you, to a certain degree, to be an actual participating citizen in society. It's not the only thing, and you don't just learn it in school, but it's an important part of enabling you. In this increasingly complex society, you can no longer parse the news if you don't have a certain degree of understanding of the world. That includes knowledge, and it includes a way of structuring your own thoughts, but also of analyzing other people's thoughts: why is that person saying that? What's their goal? All these kinds of mental skills are decaying because of the use of AI, and studies upon studies show that. Even Microsoft's own studies show that the more people use AI, the less critical of it they become; the more they just accept whatever is in there. So we see that the use of AI in research and in education undermines the very things we have those systems for. We have an education system to get people to understand the world. We have research because we need experts in certain fields, to generate [INAUDIBLE] that we can all use to make sense of the world. We have [INAUDIBLE] and physicists to understand climate change and explain it to us, and then we can have an opinion on whether we want more solar power or not, because we are informed. This is a division of labor, an important division of labor, but we can only meaningfully be part of it if we learn the skills. And these skills are hard to learn; they're annoying. Learning to write sucks, because all the things you write as a student suck, and that's not your fault. Everyone's work sucks; that's just how you learn. My work sucks. My work today sucks, and I hope it's better tomorrow, and the day after, and the day after that. That's just how things go: you keep trying, and hopefully you get better at some point. But especially in school, we need to give people the space to find their voice, to understand themselves, to understand their own way of thinking, to find the values they actually have. Not everyone believes in the same thing. Everybody values different things, and that's OK, unless it's [INAUDIBLE], in which case it's not OK. But whether you value your physical appearance more than something else, that's fine, that's your choice.
Maybe you're a sporty person, that's cool, but we need to give you the space to develop that kind of understanding of yourself, of the world, and of the interaction between you, the world, and the other people in it, and AI robs kids of that. That's tragic. That's tragic on a human level, on a political level, on a societal level; that's a catastrophe in itself. And while this is not my main argument, the main argument is about human rights, it's also absolutely toxic even for businesses. Because the dumb, boring, routine work, who does that? The junior developers, the junior designers, the people starting out. They do the boring work, and it takes them a long time, because they're just learning how things go. They need to do that work to learn their trade, to build up their skills, to build up a gut feeling for how things work, for how they can be efficient, for how they want to build things. We're taking that away these days: routine work is done by AI, only the hard work is done by the experts. What do you do in 10 years when the experts are all out of work, because they died, or they retired, or they no longer have to work? Who does that work then? Who had the time to develop the skills to make the complex decisions? Who could build a bridge anymore? No one. Even from a business perspective, it's not smart. In Germany and all over Europe there are a lot of complaints about a lack of skilled labor, a lack of educated people who can take on very complex tasks in engineering or wherever. With AI, we're pouring gasoline on that problem, because of the pipeline: no one is born an expert. You become an expert by doing the work for 10 years or whatever, and you can't really get faster than that, because you need to get that knowledge into your body. You need to make mistakes, and fuck up, and fuck up again, and then you're better, blah, blah, blah. You can't speed that up. We're not in The Matrix, where you plug in a chip and now you know kung fu. That's not how things work, not how humans work, at least so far. So even for a business, even if you look at society just as an economic system, it's dumb. It's dumb in every way. It's just not dumb for Microsoft; for them it's great, because everyone will depend on them in the future to make sense of anything. And we see that these days. People ask ChatGPT for anything: for life advice, for psychological help, but also to make sense of problems they have. ChatGPT, what should I do here? Maybe that's something you should know yourself. Maybe that's something where you should have an opinion, and not just have one generated based on old Reddit posts.
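To make Tante's "after these three words, usually this word follows" description concrete, here is a minimal sketch in Python: a toy bigram model that predicts the next word purely from frequency counts. The corpus and names are invented for illustration, and real LLMs are neural networks trained over vastly larger contexts, but the core point carries over: the model tracks which words tend to follow which, with no notion of whether the output is true.

```python
from collections import Counter, defaultdict

# Toy "language model": count which word tends to follow which word.
# Real LLMs learn these statistics with neural networks over huge contexts,
# but the basic job is the same: predict a plausible next token.
corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(word):
    # Pick the most frequent follower; frequency, not truth, decides.
    return follows[word].most_common(1)[0][0]

word = "the"
for _ in range(5):
    print(word, end=" ")
    word = next_word(word)
# Prints "the cat sat on the": fluent-looking output that
# says nothing about the world, only about the training text.
```

The sketch has no fact store to consult; it can only ever emit sequences that are statistically plausible given its corpus, which is the sense in which everything it produces is, in Tante's framing, a hallucination that is sometimes accidentally true.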

MM

When did the iPhone come out?

T

Well, you're way more dependent on other people now. Like even if you're not a political scientist or an expert in a certain field, you have a gut feeling, or a lot of decisions that need to be made, like I don't know, do you want a new nuclear power plant in your backyard or not? Like you have probably an instant feeling of, and maybe that's rationale or not, doesn't really matter, but you can probably explain to people why you think the way you think, and that will go away. You will just ask some system to give you what you maybe think, you don't know, because you don't do that thinking anymore. And this is not a new technology makes us stupid. Sometimes technology has made us, if not smarter at least sometimes more capable. But the way that Chat Bots, LLMs, are being presented and put into processes, that's not what's happening, that's 100% not happening. And yes, people these days often don't know how to read a map, or could get anywhere without Google Maps or whatever map application they use, and one can argue, we don't need that anymore, that's no longer a relevant skills. One can make that argument, I'm fine with making that argument. You have to understand what you are losing, and again, not being able to navigate the town you live in if your phone is down, that's kind of dumb, but not being able to navigate your own thoughts, and your relationship to the world, what do you do? And how do you get that back? Do you even know what you lost? The example you explained, you still have that experience, my son won't. Like he's five now, he'll only know map application navigation, and maybe he won't think that he missed anything, maybe he didn't. Maybe it's irrelevant that he didn't get to see his mom and dad fight in the front seat about where to turn. Maybe that's good even, but that's a whole different thing than no longer understanding whether you think and what you feel like, that's just tragic. I hope we can curb at least some of that. Because what we see in studies is that the decay, the skill decay that you see in people it takes way longer to pick back up than one might think, it's not instant. It's not like, OK, I'm going to stop using the AI now, now it's coming back. It takes a long time, and especially when it's with kids, who maybe won't even get the chance to learn how to learn, learn how to read a book, and how to get information from a book, which is again, it's a hard and annoying process. I'd love to be able to do that quicker than I can. But if you don't even get to learn that, how will you reacquire certain skills? It's kind of like my generation kicking away the ladder behind them. Like we had the ability to learn these skills, and now we can build AI and make money off of it, and out kids, yes, fuck that. And that's no way to run a society.

MM

Well, that's what we may be facing in the years ahead. You're doing some work to help us address the future that may be coming. You work with the Otherwise Network, and you work with an organization called, I believe, ART+COM.

T

That's my employer.

MM

That's your employer, great. Yes, tell us about some of your projects, and tell us where people can find you if they want to follow your work, and keep learning about all this.

T

Well, the Otherwise Network is mostly a German network of a handful of people who know each other and think about the digital and technology in some way, each coming from a different background. We have lawyers, and technologists like me, and sociologists, and so on. It's mostly a space for us to meet and have conversations about things. And yes, I work as the research director for ART+COM, which is a studio that builds media art installations, like interactive museums and that kind of stuff. We try to connect the things a museum wants to tell with the space they have, and to make engaging interactive experiences for people to learn from. That's my day job: I lead the research department in that company, trying to figure out which technologies are useful for getting people to learn something, which oftentimes [INAUDIBLE] are not. If you want to follow my work, my website is tante.cc. That's where I write most of the time, because I don't have the patience to send pieces elsewhere, where it takes months for them to come out. This way I can just type it, and it's full of typos, but it's out; I like that kind of dynamic. I'm also on certain social media, mostly Mastodon, where for the three people who use it my account is tldr.nettime.org/@tante, and I'm also on Bluesky, where my handle is @tante.cc. But my website is the best place; it has an RSS feed you can subscribe to if you're an old person like me who still uses those. That's where you can find me. Otherwise, if you just search for Tante, I mostly come up. It's not a common name.

MM

Well, Tante, thank you so much for an incredible talk about generative AI, other types of AI, some of the implications of this technology, and how we might approach it so we can do our best to maintain some of the things that make us unique and make humans wonderfully irrational in the ways that we are.

T

Thank you for the opportunity to ramble and be a bit of a party pooper.

MM

We'll have to do it again.
