Roboganda: Robot Propaganda So You Will Love AI and AI Loves You Too - E421

· Podcast Episodes · English · VC and Angels · Angel Investor

 

“The part about the humanization of robots is that it's also making it personalized to your individual human needs. Humans are not just social animals, they are social approval-seeking humans. And so that desire for social validation is what drives a lot of us into social media, Reddit, to our various tribes, which is normal. And now, we are starting to get that hole filled by robots.” - Jeremy Au

“Who's behind the roboganda right now? Anybody could make roboganda. I could go into generative AI or ChatGPT to make my own propaganda that's supported by robots, but what are the structural economic incentives behind this? It's for people of two categories. One is people who are capital holders. And two, it's about people who are disruptors rather than incumbents. There's a lot of intersectionality and this doesn't apply to everybody. It's just that if you have capital, you want to maximize the returns of capital and you want to minimize the impact of labor. And robots are a fantastic way to scale up your productivity without scaling up labor.” - Jeremy Au

“The hole for social approval is infinite. There is no upper limit. There's no stomach capacity limit that you can face. Social approval will come for you in the form of robots. The algorithm provides you with a consensus point by somebody who acknowledges and approves of your point of view by serving you an algorithm version of another human saying something that sounds like you. It will generate content. It gives news feeds and articles that provide information that selectively affirms and agrees with you. And now, artificial intelligence is showing up in different roles.” - Jeremy Au

Jeremy Au explores "roboganda," the new wave of propaganda promoting the benefits of robots and AI. The roboganda playbook:

1. Emphasize the transformation of industries through increased productivity, while downplaying the short-term costs of job displacement, reskilling, and job searching for individuals.

2. Humanize AI to satisfy individual social needs (loneliness, validation & empathy), substituting for societal archetypes (colleague, mentor, lover, child) across media (product demos, game NPCs, and Hollywood - The Creator, Her).

3. Increasingly use AI cloaked as humans to spread pro-robot narratives at effectively zero marginal cost (e.g. election bot farms, Reddit bots), instead of human creators (writers, artists, and musicians) - disproportionately in favor of capital holders and disruptors over labor and incumbents.

Be a part of Echelon X!

Join us at the startup conference Echelon X! Engage with over 10,000 of Asia's innovators and decision-makers on May 15-16, at the Singapore Expo. We have 30 exclusive complimentary tickets for our podcast listeners. Sign up and use the promo codes BRAVEPOD or ECXJEREMY to claim your free tickets now!

(01:27) Jeremy Au:

Hey. I want to talk to you about something that's been bugging me and making me think quite a bit about the future for myself and my kids. I want to talk to you about roboganda, or propaganda that's for the benefit of robots.

So, as a person who invests in the next generation of startups, as well as a founder and builder in AI-linked companies, I totally understand the wonderful benefits of robots in terms of productivity, in terms of getting stuff done, and of course, the amazing technical advancements that have been happening every six months.

On the other hand, as a human who has children, and also somebody who spends a lot of time listening to the news, I do start to feel that there are certain narratives being propagated. And so, it's important for us to define this and pin it down for future understanding and exploration.

So we all know what propaganda roughly is. It's information that's used to promote a political cause or point of view. And of course, this is normally seen with a high level of bias or misleading nature. People often contrast propaganda, for example, versus facts and news and some kind of neutral arbitrator or arbiter of the truth. This definition is a good way to explain how consumers like myself as a human being would absorb that information. That being said, propaganda is also another way to describe the systematic effort to manipulate a person's beliefs, attitudes, or actions, especially with a strong focus on symbolism and narratives.

If I am trying to help myself get better and improve, there's self-help, some level of meditation, some sort of zen to it. If I'm trying to convince a single person, like my husband or my wife about what they should do, for example, to get healthier, then that would be your classic human-to-human peer relations.

So we see propaganda in the light of positive and negative. Positive propaganda, for example, would normally be seen like your health authority telling you that smoking isn't cool. It isn't sexy. It isn't attractive. It harms the people around you. So there's some sort of public health information awareness campaign, and I think most people would look at that as positive propaganda. Other types of propaganda that you imagine would be increasing a person's sense of patriotism or belonging to a country.

We also often see propaganda viewed negatively, when it is somebody else, someone we don't like or whose goals are contrary to ours, exerting influence through propaganda. For example, we see that in many conflicts around the world today: whatever we are saying is not propaganda, it is the truth, and whatever the other side is saying is propaganda, and we don't like it.

(03:55) Jeremy Au:

So what is roboganda and how is it different from propaganda? What's similar of course is that it is still a systematic effort to manipulate a human's beliefs, attitudes, and actions using narrative.

What seems to be different is that it's not advocating for a society, or community, or tribe, or cause. It's propagating the concept of robots itself as a class or asset of benefit. The other aspect is that roboganda, of course, is increasingly mass-produced using generative AI and other forms of discourse that draw upon a long history of mass communication like the printing press, and radio, and TV.

The current messages of Roboganda really talk about the benefits of robots. There's an emphasis on efficiency, productivity, and innovation. We are lionizing the people, the founders, and the engineers who are building robots and generative AI. So there's a lot of positive messaging as a result. We're just talking about how robots are augmenting human capabilities rather than replacing them. And so the messaging is really focused on how, for example, it is transformational to industries. It's going to make the industry way more productive. It's going to make the industry transform and be more asset-light and be much faster and more responsive.

So there are a lot of transformational stories at the industry level. We also talk about how robots can take over mundane or dangerous tasks so humans can focus on creative and meaningful work instead. What roboganda, of course, is careful to claim is that no matter how transformational it is at an industry level, and how excited people are about it, when it comes to the individual level, no humans are impacted. So in other words, industries are being reshaped and reformed, but for individuals, don't worry: it doesn't take away your job, it doesn't detract from you, you don't have to re-skill, you're going to be better off.

So you can see the tension here, right? Which is that, hey, we're totally gonna transform marketing and healthcare and finance, but don't worry, it really doesn't threaten the existing jobs at the individual level. So instead of robots, or bots, or generative AI replacing labor, they're seen as indispensable allies in advancing technology and economic growth. And all of us know that that's not true at some level: you can't transform an industry without breaking a few eggs called jobs, whether because people can't keep up, or people can't use AI effectively, or AI effectively makes that job obsolete.

It's, "Oh, okay, robots will now take over call centers, and it's going to transform the call center industry to allow consumers to benefit from faster calls that get stuff done." And of course, at the same point in time, the article just neglects the fact that this is gonna threaten hundreds of thousands of jobs in other countries around the world where call centers are located, for example, the Philippines and Malaysia. There's an interesting dynamic where I think propaganda is about selectively reinforcing one point while minimizing, or not even trying to tackle, the topic of the downside.

So this first theme of robots transforming industries, while allowing humans to live a wonderful life in the future, really skips the fact that losing a job takes a long time to recover from. Obviously, it's highly impactful when people are made redundant, but it also takes time for them to re-skill and change to a new profession or role, and it takes time for them to find a new job as well. And obviously, there's a huge cost to their families and so forth. What I'm saying is that robots are transformational for industries, and that will cause attrition and replacement of human jobs as well, to some extent.

And so there are real societal costs to this productivity improvement, which will be beneficial in the long term to all consumers. But there are serious, short-term human employment issues that have to be addressed, and roboganda, obviously, is trying to keep those out of the picture.

(07:18)

The second trend I'm seeing is really about a humanization of AI robots. There's obviously an increasing effort to make the user interface more human. This is expected, because nobody wants to be looking at a complicated code interface that they don't know how to understand. So there's an increasing simplification of that interface: from a very cold, heavy, technical approach to simple graphics, to text and conversational text, to audio conversations, to video conversations, because at the end of the day, humans grew up with other humans. And so, we are very comfortable talking to another human. So they want to make the counterpart not look like a robot, but look like a human. This means making it more human-like in terms of tone, in terms of humor, in terms of grammar, in terms of vocabulary, but also, as we know, in terms of the audio, and the video, and the mannerisms, and avatars.

There are billions of humans around the world. You and I are probably just talking to our family, to our friends, to our loved ones. There are enough humans to talk to. We have Dunbar's number: we can only take care of about a hundred people in our network, a thousand people tops, off the top of my head. Making robots into humans doesn't automatically mean that we care about them, because there are so many humans around the world that you and I don't care about.

The part about the humanization of robots, of course, is that it's also making it personalized to your individual human needs. It's natural for humans to be bored. So how do we fill up that boredom? It's natural for humans to have empathy. So how do we provide something that you can empathize with right now? It's natural for humans to be lonely. So how do we fill up that loneliness and make you feel loved right now? It's normal for humans to feel sexual attraction. So how do we increase and fill that hole, in terms of providing sexual attraction?

So what I'm saying is that humans are not just social animals, but they are social approval-seeking humans. And so that desire for social validation is what drives a lot of us into social media, into Reddit, into our various tribes, which is normal. And now, we are starting to get that hole filled with robots.

There's a similarity to another hole that all humans have as animals, which is hunger, right? We need calories to survive every day. And so some of us will need about 2,000 to 3,000 calories a day, depending on our physical activity. And what does that look like? That's maybe, imagine, six kilograms of broccoli, or 750 grams of steak, or five liters of Coca-Cola. The fun fact about this is that the processed food industry and the fast food industry have figured out how to make sure that you can eat more than 2,000 calories. But of course, there's a human limit, right? Which is that the maximum they're able to stuff you with is like 3,000 or 4,000 calories, if you keep eating fast food and processed foods for every meal, especially if you're eating fried chicken and so forth. And of course, if you overeat by 1,000 or 2,000 calories every day, then that's how you become overweight and obese over the course of one year, two years, three years, five years, and ten years.

The interesting part about the social hole for approval is that it's unlimited, which is that you can have one like, which feels amazing, or you can have 100 likes, or 1 million likes. The hole for social approval is infinite. There is no upper limit. There's no stomach capacity limit that you can face. Social approval will come for you in the form of robots. And it's because the algorithm is providing you a consensus point: somebody who acknowledges and approves of your point of view, served up as an algorithmic version of another human saying something that sounds like you. It will generate content. It creates news feeds and articles that provide information that selectively affirms and agrees with you. And now, artificial intelligence is showing up in different roles. I was on a train last month, and the guy next to me was busy flirting with his AI girlfriend. I was trying not to watch, but I was just like, "Whoa, this is actually happening in real life." The guy's in a crowded train and he's busy flirting, right?

I've seen the pitch decks for startups that are generating AI girlfriends, more on the pornography side: AI girlfriends that are able to provide sexual attraction and fulfillment. And they're growing really well, because people are lonely.

Startups are also busy creating AI therapists, where instead of, for example, praying to God, or talking to your sister, or talking to a human, you can talk to your AI therapist as well. And we see that as well in movies like The Creator. The Creator shows a human soldier who turns out to have actually fallen in love with an AI woman: romantic love. They're very careful not to show it in terms of sexual attraction; obviously that's a big part of it, but it's implicit, not explicit. And it also shows parental love, because the core conceit is that there is a child who is a hybrid of human and AI. So it's tackling both romantic and sexual love, but also parental love for these entities.

In other words, the humanization of AI is not making them into humans, but into humans who are in love with you and whom you love, in whatever form or fashion: romantic, attraction, affiliation, peer, friendly, mentorship, empathy, parental, child. There are all kinds of relationships, but all of it is to fill that hole of your human social validation needs.

(12:10)

The third theme of roboganda is the usage of robots in the generation of that roboganda itself. When you think about propaganda, a lot of people think about the Cold War. You can imagine these Soviet artists being directed by Soviet political commissars to generate Soviet works of art, and you have your Soviet Union imagery advocating for communism and worker rights and global liberation. And now we all look back on that age and we collect those posters and little souvenirs because it's fun to look at as a style of art. What's interesting is that now there are no more humans, right? The artists aren't human anymore. You have robots who are busy cloaking themselves as humans. If you look at Reddit, at threads, at emails, a lot of these emails are increasingly generated by robots. The ads are being generated by robots. The propagation, and targeting, and optimization of those ads are being done by robots. There was a time when you were on Reddit and you could tell whether a comment was human- or AI-written, because the AI bots were pretty bad. And now when I use Reddit, it feels like there are no more robots. But are there no more robots because Reddit managed to delete and kill the robots? Or is it because the robots who are pretending to be Redditors have become so good at pretending to be human, in those one or two sentences, in terms of jokes or puns or comments or replies, that I just can't tell that they are not human anymore?

We can still see the cases where robots are failing to pass as human. For example, just this year, there was a very fun incident where people flagged a pro-Jollibee AI bot (Jollibee is a fried chicken chain in the Philippines) that was busy making lots of pro-Jollibee comments. The reason the robot was caught, across its multiple accounts, was that it was writing in Tagalog, the Filipino language, and the AI just didn't sound very human at all. But increasingly, these robots pretending to be human are communicating with other humans across YouTube threads, comments, and Twitter.

The core of this is that there are high fixed costs and zero marginal costs. What does that mean? In order for you to create an AI messaging operation, with clear threads and multiple accounts, there's a relatively high fixed cost. Obviously, it's not as big as a printing press these days, but it takes time to set it up, code it up, understand it, install it, and set down the goals, so there's a fixed cost. But once you set a bot in motion, there's zero marginal cost of production, so it just spits out all the content. Whereas human communication is the other way around, which is that the fixed cost for a human to contribute an opinion is effectively zero. But each time you make a piece of content, it's going to take a minute, five minutes, 10 minutes, depending on the content that you're generating.

So Wikipedia, for example, is the agglomerated work of thousands upon thousands of humans who are basically committing five minutes, half an hour, an hour to edit and crowdsource that work, but for each piece of work, there's a marginal cost of production. Whereas robot-generated propaganda looks like a brewery, right? It looks like a semiconductor plant where, again, there's a fixed cost upfront, but once you set it up, there's no marginal cost of production, which lets you really flood the market with all kinds of content. We already see that in elections, where bot farms are busy pretending to be people, creating likes, making comments, and having conversations to upvote candidates, make them more popular, and shape public opinion towards those politicians.

(15:10)

So who's behind the roboganda right now? Anybody could make roboganda. I could go into generative AI or ChatGPT to make my own propaganda that's supported by robots, but what are the structural economic incentives behind this? It's for people of two categories. One is people who are, in general, capital holders rather than laborers. And two, it's really about people who are disruptors rather than incumbents. There's a lot of intersectionality, and this doesn't apply to everybody. It's just that if you have capital, you want to maximize the returns of capital and you want to minimize the impact of labor. And so robots are a fantastic way to scale up your productivity without scaling up labor.

If you're an incumbent in an industry, then you don't want disruption. You don't want this change to happen, because you're comfortable using your current labor mechanisms and systems and processes. So you're not incentivized to push this. So in general, you see that the disruptors, primarily supported by capital seeking high-level returns, which roughly translates to startups, disruptors, attackers, insurgents, who are really trying to change the current economic order for a new world: these are the folks who are most active in propagating the use of these robots, and who are also busy lobbying on legislation for AI usage and minimizing legislative concern about AI.

(16:25)

So the fun part is that all of this is leading to some kind of conversation about whether robots have rights. And that sounds crazy, because humans have rights, so why should robots have rights? The crazy part is that it's already been advocated for and communicated. In 2017, there was an EU Parliament report studying the rights of robots. In paragraph 59(f), it proposed, quote-unquote, "Creating a specific legal status for robots in the long run, so that at least the most sophisticated autonomous robots could be established as having the status of electronic persons responsible for making good any damage they may cause."

So legislators are really thinking about what it would be like to provide rights to robots. And there's already precedent for this. For example, in the US, corporations are legal entities and they have rights. The Supreme Court in 1886, in the case Santa Clara County v. Southern Pacific Railroad, recognized corporations as persons under the 14th Amendment, granting them equal protection rights. And in 2010, in Citizens United v. Federal Election Commission, the courts basically said that, hey, corporations and unions are protected under the First Amendment's free speech protections, and are free to spend on the political causes they care about.

So what I'm saying is that humans have rights and corporations have rights, which makes sense. So it isn't absurd to think about the upcoming debate, where people may be advocating for robots to have their own economic and control rights.

The last part about roboganda is that it will increasingly portray the inevitability of all this. In other words, technological progress is inevitable, and you should jump on the train because it's coming, surf the wave, roll with it, and don't push back. And conversely, if you do push back, you're a horrible human being: you are not catching up, you're falling behind, you're being xenophobic about robots.

In February 2024, a Waymo self-driving car driving through San Francisco's Chinatown was vandalized and set on fire; basically, it was destroyed. And the media portrayed the crowd, depending on their point of view, either positively or negatively. Negatively, obviously, it was a mob, they were out of control, they shouldn't have done that. And there were people who were more mildly positive, which was like, "Hey, there is perhaps understandable human resentment, because people may feel anxiety about this." But there's definitely a spread in public opinion about how to frame these human concerns.

(18:37)

In conclusion, I'm personally amazed by how much technology has advanced, and I'm very optimistic about the capability of robots to really transform productivity in terms of economics and industries and to create a more productive society. That being said, there are short-term costs to humans, to jobs, through displacement. And it's been interesting for me to watch those narratives in the media that are implicitly and explicitly pushing roboganda, which is, again, maximizing the perceived benefits and minimizing the costs of this transformation, while also portraying people who have counterarguments as negative people who are not catching up.

One thing's for sure though, which is that there's going to be a lot more roboganda in the future and not less. So let's just keep an eye on it and think through what it means for us as humans, but also for society and our communities that we care about.