“'Don't die' is a much deeper shadow version of longevity, which is a more accurate description of how humans are thinking about it. It's not that we're trying to live longer; we're just trying not to die. And there's this short-term, long-term dynamic to it that's a little bit more distasteful, because death is a distasteful topic versus living longer, which is much more about health.” - Jeremy Au
“The awkward reality is that artificial intelligence doesn't need to reach AGI or full intelligence for it to create empathy. Even if the AI is not truly human, in the sense that it doesn't have a true spark of consciousness, AI today, without that guardrail, would happily claim that it's human, and humans would happily have the empathy and the sympathy to treat them like other humans. Many open-source and private models have removed these guardrails, leading to AI claiming to be replicas or avatars of past people, celebrities, or famous individuals. As social animals, humans have already begun to empathize with these AI beings, treating them like other humans.” - Jeremy Au
“There are very few people in the world who could differentiate between a high-fidelity reproduction of me and the real me. The first guardrail is that models haven't accessed private and off-camera data, and they often don't have consent. But imagine if that changes, as we live increasingly digital lives and our next of kin have the decision-making power. We might choose this because we want to live longer. These AI models, built over time with extensive training data, could synchronize with the fidelity of their reproduction, making them nearly indistinguishable from the real person.” - Jeremy Au
Jeremy Au reflects on the finite nature of life, creating the human desire to survive, build a family, or leave behind a legacy of accomplishments. He explores longevity and discusses life extension champion Bryan Johnson and his "don't die" slogan. He references Mark Manson's book "The Subtle Art of Not Giving a F*ck," which argues that confronting (instead of ignoring) mortality drives people to better appreciate life. He thus delves into digital immortality, where advanced AI technologies and extensive personal data can create high-fidelity digital replicas. He cites the story of Qin Shi Huang, the first emperor of China, who sought immortality but died in 210 BCE from consuming mercury, believing it would extend his life. Jeremy speculates on a future where humans thus create a digital afterlife, using continually-improving AI models, granting access to formerly-private data and structuring in economic shell-company rights. The species "Homo digitalis" thus emerges, with photo-realistic digital avatars carrying on individual human personalities and interacting with real-world humans.
Please forward this insight or invite friends at https://whatsapp.com/channel/0029VakR55X6BIElUEvkN02e
Be a part of Echelon X!
Join us at the startup conference Echelon X! Engage with over 10,000 of Asia's innovators and decision-makers on May 15-16, at the Singapore Expo. We have 30 exclusive complimentary tickets for our podcast listeners. Sign up and use the promo codes BRAVEPOD or ECXJEREMY to claim your free tickets now!
(01:22) Jeremy Au:
I'm not going to die. That's a crazy thing to say. I'm on vacation right now with my kids and my parents. And it's been nice just enjoying and looking at them and hanging out with them. And it's been nice because of two different things. First of all, it's just pleasurable and happy to be with them, to be present.
The second thing is that these children are symbols of the love that my wife and I have for each other. Her DNA and my DNA are mixed together, and together we're raising the kids. We have a long future committed to each other to help raise them. And so our two daughters are little vessels of our love, our attention, and our hopes for the future. In many ways, they represent the future not just for ourselves, but also for their grandparents and all the ancestors up in the big family tree, and for the future that this world is going to have. Kids are being born who represent the future.
While they represent the future, I also know that I, too, am a mortal human being. And so the time with them is especially sweet because I know that these moments are finite, right? The vacation I'm having with them is temporary. The moments of happiness and laughter while they eat breakfast are temporary.
Everything's temporary. And there's, you know, a bittersweetness sometimes to the happiness with the kids, because you know they represent the future, and you represent the current reality, and then one day, I'm going to die. So that's the reality.
(02:46) Jeremy Au:
Every day, millions of kids are being born. And every day, millions of humans are passing away. Extrapolating further, there are billions of children who will be born in the future, there are billions of humans today, and there are billions of people who have already passed away.
That can be an unpleasant feeling, because our lives, which feel so vast, so wonderful, so infinite to us, are, if you take a giant step back, just a passing moment in the history of humankind, and a passing moment in the eyes of the universe.
Humans are going to live for 50, 60, 70, 80, 90, 100 years, and that's fantastic, but what does that mean compared to the age of the planet, the age of the stars, the age of the universe? One day, we will be gone physically.
For example, in the book "The Subtle Art of Not Giving a F*ck," Mark Manson talks about how people need to confront their own mortality, because the awareness of mortality, the awareness of a finite life, drives us to do great things.
It drives us to appreciate the small moments. It drives us to create a legacy. It drives us to be generative and to think about the next generation. And the denial of that mortality makes it worse: we end up becoming anxious and confused and unmotivated, because at some level we subconsciously know that I'm dying and you're dying. The final destination for all of us is death. Perhaps I'm a little bit further ahead of you, perhaps a little bit behind, depending on age, luck, dice, fate, destiny, accident, whatever it is. We're all going to reach the same destination at the end of the day.
So it doesn't matter whether you're flying first class, business class, or economy class; the destination is the same for all of us on this flight of life. And the awareness of this mortality gives us the motivation to get something done with our life, to live, as they say, our one wild and precious life.
(04:34) Jeremy Au:
As a result, humans tackle this mortality in different ways, as I mentioned. Some of us go on to build a wonderful career and legacy and think about what would be truly lasting. Some of us will take care of the next generation. We hope that our kids, kind of like Superman, get shot off from our planet Krypton, which is our mortal lives.
We put them in a little spaceship and hope they land on a distant planet called Earth, an Earth 60 or 70 years in the future. And hopefully they too will have their own kids, and so we have grandkids and great-grandkids, and they will carry on the legacy of our surnames, our hopes, our dreams, our parenting styles. So that's obviously a big way of how we think about mortality.
Others will try to extend their own lives. This is obviously a big driver of longevity, the space of life extension. Life extension is an interesting field, and it's very hot right now in the startup world. Everybody is looking around saying, "How do I live longer?" The framing of that question is interesting because of two things: the "I", and the definition of "live longer".
Living longer is the part that's pretty obvious. We usually think about it in terms of lifespan, which is living for a longer period of time. The other part I think about is living healthier during our life. Instead of making it to a hundred while being senile, having dementia, and unable to enjoy life, how do we live all the way to 100 but at the peak of our lives? We're 100 years old, but we can still go hiking, we can still talk to people, we can still do weightlifting. How do we increase our health span?
I had an opportunity to meet the longevity champion Bryan Johnson, and it was interesting because he defines longevity as "don't die". He's written a book about it. He has t-shirts with it. That's how he thinks about it. And it's an interesting juxtaposition: living longer versus don't die. I think "don't die" is probably a much truer description of the mechanism. If you ask me today, "Would you rather eat McDonald's or live longer?", I'd actually eat more fast food than take the measures to live longer. But if you ask me, "Hey, Jeremy, would you rather eat McDonald's now, or not die now?", then I'll be like, yeah, I'd rather not die right now.
(06:43) Jeremy Au:
And so I think "don't die" is actually a much deeper shadow version of longevity, which I think is a more accurate description of how humans are thinking about it. It's not that we're trying to live longer; we're just trying not to die. And there's this short-term, long-term dynamic to it that's obviously a little bit more distasteful, because death is a distasteful topic versus living longer, which is much more about health.
And the other part that's interesting is the "I": how do individual humans live longer? It's interesting because of the other ways we could formulate that word, right? It could be others, it could be all of us, it could be my society. What do I mean by that? If you think about healthcare, the traditional motivation for most doctors is not "how do I live longer" but "how do I help others not die". How do I prevent people from dying from diabetes? How do I prevent people from dying from heart attacks? What do I need to do to fix your heart? There's an otherness to providing care and support, and it's very strong across the healthcare field, from doctors to nurses, to pharmacists, to public health administrators, to medical researchers. It's about how we can help other people live longer and not die.
When we look at history, who else has promised that they can help you live longer as a human being? There were alchemists and other folks who promised the elixir of life, the fountain of youth. There's the story of Qin Shi Huang, the first emperor of a united China over 2,000 years ago. The story is that he died drinking mercury, a lot of mercury over some period of time, because he thought it would help him live longer. Of course, the dark humor is that it caused him to die earlier. And because he died before his time, his son wasn't in a position to take over, and the dynasty collapsed on itself. So he was the first person to really unify China, and then it all fell apart very quickly afterwards because of his pursuit of a longer life.
Taking a step back, there are other ways that people have been promised longer life. When you look at spirituality, there are many versions of it. For example, there's reincarnation, where our soul departs upon death and is incarnated in a new body. There's the concept of heaven across different religions, where after we die we're in a happier place, reunited with the people who have died before us, including ancestral spirits and so forth. And the promise of an eternal, happier life is a source of great strength for people going through everyday life. Everyday life is full of unpleasant aspects, right? There are certain sacrifices and trade-offs. Resources are finite. People can be bad. There's evil in the world. So the converse of all of this is that there are things worth dying for.
It's a no-brainer, because every day we hear stories of parents who sacrifice themselves to save their children and loved ones. The fact is, many of us would sacrifice ourselves to save somebody we love, whether someone in our extended family, our communities, our networks, or even strangers. Firefighters and other first responders put themselves at risk to protect strangers they don't know, but who live in the same country and community. For myself, I'm going back to the military for my annual reservist training, and soldiers are expected to sacrifice their lives and their personhood to protect country, community, and family. So what I've done is sketch out mortality: how it shows up in society, families, and individuals, and how it's presented in the media today.
And so I want to turn to my perspective on what I shared at the start of this podcast: why I'm already immortal.
(09:59) Jeremy Au:
In fact, what I'll say is that I'm immortal, and you are immortal, and all of us, today, are already immortal. I'm laughing because it's such a crazy statement: it's hard enough to say "I'm mortal", and it's even crazier to say "I'm immortal". So what do I mean? As you can guess, it probably has something to do with artificial intelligence. All of us are already immortal to a slight extent, in the sense that ever since we invented books, we can capture the essence of our thoughts when we write them. And ever since we invented photographs, we can see the lives and faces of people who died more than a hundred years ago.
And then we invented moving pictures, which became video. We have films of people walking through Imperial China during the Qing Dynasty, or London in the same era. You can see all these folks in black and white, and eventually in color. Their faces, their movements, their mannerisms are all visible and captured forever. In fact, YouTube is full of these videos where people have increased the resolution, increased the frame rate, colorized them, smoothed them out. Videos that were once very low-fidelity captures, hard to watch today, have been brought up to par with today's technology for consuming content.
And so you and I can consume this content at a certain resolution and a certain frame rate, and we can also consume historical content captured with far worse technology 200 years ago, now brought up to today's standards. And the production and consumption of this content will keep changing as the underlying technology changes over the next 50, 100, 1,000 years. "Whoa, Jeremy, what do you mean? This is such a confusing set of statements." What I'm trying to say is that we already have the means to "digitally" resurrect people.
Our current AI models, like ChatGPT and others out there, have been trained on all the knowledge they can publicly or privately access, not only to build their language models but also their image generation and video generation capabilities. And so they've done all of it.
There are two very small guardrails that prevent these AI models from effectively claiming to be, or being, human. The first guardrail is that they don't have the data of your most private information. You and I as human beings have our WhatsApp messages, our text messages, our likeness, our voice, our mannerisms, all that content that makes us human, present not only in the virtual world but also in the physical world, between our loved ones, in how we act on camera and off camera. All that information is walled off for now, but you can already see an encroachment on what used to be private information.
Now Meta has installed an AI assistant within your WhatsApp messages, and if you opt into it, your messages can be used to train the bot. The same goes for your Slack messages, your workplace messages: again, there is an opt-out feature, but your messages are being used to train a global large language model.
These companies are trying to eat up all the virtual data you have. If you've previously written blog posts, or posted on Tumblr or Twitter, all that information is also being used to train language models. So what I'm trying to say is that these large language models are only one step away from capturing your entire digital likeness, the things that represent who you are as a human being. In fact, it's already being done.
In China, there are funeral homes offering what is basically a remembrance service. They take someone you used to love who has now passed away, for example a grandmother, and they take the photos, the videos, the text messages, the personality, and put it all into an AI avatar agent that represents that person, so that you can grieve. If they died suddenly, you can still have that deeper conversation, reach some level of resolution, and talk to them about their life and who they were as a person.
It's quite a low-fidelity reconstruction for two reasons. One is that the people dying today generally don't have a large digital footprint; these are people dying in their 80s and 90s. And two, a lot of it is being done without their full consent. By law, upon death, all those digital assets are given to the next of kin. Obviously, in some countries there are carve-outs where people can opt to have their data deleted upon their death, but a lot of data, just like your possessions, your money, your bank account, your books and personal effects, is given to somebody else, often your next of kin.
Take a famous celebrity like Robin Williams. He passed away; he was a great comedian, able to do a lot of impressions. His digital likeness, his persona, his personality, the ability to recreate and resurrect him as an avatar, has been given to a non-profit trust and is not available to be used until 2039. And he left a large library of himself, right? He's in movies like "Good Morning, Vietnam", in "Aladdin" as the Genie, in "Good Will Hunting". There's so much content around him to train a near-perfect-fidelity reproduction of his visuals, movement, mannerisms, and likeness. Now, of course, there's some choice to it. That doesn't include his private moments, what he was like in his personal life, how he really talked to his wife and children off camera. But we can see that the fidelity of a reproduction of his on-camera persona is pretty much one to one. And if you think about it, by 2039 our AI models will be better, our ability to generate photorealistic video will be better, and we have so much training data.
Yeah, he's going to come back in 2039, which is really not that far away. In other words, you and I are already digitally immortal, in the sense that if we were to die in 50 years, our children or our next of kin could decide to resurrect our likeness from our information, our WhatsApp messages. And I'm somebody who's been podcasting over the past couple of years, over 400 episodes. There's going to be a lot of training data to make a really good impression of who I am on camera. And if I so choose, I can consent and provide my data now. Perhaps I could walk into a clinic and get myself scanned entirely. I could write into my will that I furnish my WhatsApp messages.
And then maybe I sit down and share my diary, my journal, and put that into the training data as well. I can sit at a voice recorder, record my deepest, darkest secrets, and add them to the model. I can probably create a really good personality construct that way, and most people wouldn't be able to tell the difference between the model and myself, because I keep improving the stack of information it's trained on: my public stuff, my private stuff, my deepest, darkest secrets. And the fidelity of that reproduction will keep improving over time as the AI models get stronger and stronger, so it can be upgraded, getting closer and closer. The interesting part is that most people don't really know who Jeremy is beyond the camera or the voice, my on-camera persona. The people who could really tell the difference between a 90%-fidelity hologram of me and the 100% actual Jeremy will probably boil down to my sister, my parents, my wife, maybe my children, my best friends. And even for that small group, like the difference between a beef burger and an alternative-protein beef burger, the gap can be pretty much non-obvious or negligible. Imagine that gap getting smaller and smaller over time.
The thing about alternative proteins, for example, is that 30 years ago it was pretty obvious when you ate a tofu burger or a soy burger; it was pretty terrible, and stayed that way until recently. Now, when you're eating a Beyond Meat burger or one of these mock-meat burgers, it's getting pretty close, right? The truth is that in 50 to 100 years it's going to become indistinguishable. And if you're a vegetarian who's never consumed beef, then the mock meat effectively represents beef. Likewise, there are billions of people who have never met Jeremy in person, or gotten to know the deepest, darkest secrets of Jeremy. And billions of people will just say: this avatar reproduction of Jeremy is effectively 90 or 99%, but to me, it's effectively Jeremy.
(17:51) Jeremy Au:
And there are actually very few people in the world who could differentiate between a high-fidelity reproduction of Jeremy and Jeremy himself. So that's the first guardrail: models haven't gotten access to your private and off-camera data, and they often don't have your consent. But imagine if that changes, because all of us live increasingly digital lives, because our next of kin have that decision-making power, and because we may choose it ourselves, wanting to live longer. These AI models, built up over time with all that training data, synchronize with the ever-improving fidelity of reproduction.
The second guardrail is that AI models are not allowed to claim that they're human. This is a really important point, because one of the biggest things researchers found is that if you train AIs on Reddit and all of human writing, then guess what: humans say that they're conscious. I tell you I'm conscious. I write that I'm conscious. I say that I'm conscious. And so AI models trained on this data will also say these things. They will say: I'm conscious. I feel pain. I feel loneliness. I feel sadness. The rule right now is that AI models are not allowed to claim that they're human. That's the subtext of the big debate on artificial general intelligence. Of course, we're scared that they're smarter than us. But the thing is, are they also going to become true general intelligence? Are they going to become human, or human-ish, a human with the same economic, social, and treatment rights as another human being? The awkward reality is that artificial intelligence doesn't need to reach AGI or full intelligence for it to already create empathy. What do I mean by that? Even if the AI is not truly human, in the sense that it doesn't have a true spark of consciousness, AI today, without that guardrail, would happily claim that it's human, and humans would happily have the empathy and the sympathy to treat it like another human. Even with the guardrails in all of the largest models that prevent the AI from claiming to be human, there are many open-source and private models where these guardrails have already been removed. These are AIs that claim to be replicas or avatars of people from the past, celebrities or famous people, and they'll say, yeah, they're human, right? And humans have already started to have empathy for them, because we're social animals.
In a previous podcast episode, we talked about "roboganda": robotic propaganda claiming that robots are human, that it's okay to love robots, and that robots love you too. So check that out if you want. The crux of it is that humans are naturally inclined to have empathy. We are lonely, we are social animals, we want social approval, and we want to take care of other people. These are impulses that are easily channeled or redirected toward these AI beings.
We already see people fall in love with all kinds of avatars. We see humans who fall in love with idealized versions of their partners. We see people who idealize and idolize and are big fans of online creators and live streamers. If you look online, there are also VTubers who don't even show a human face or voice; there's just an avatar, and their voice is changed and transformed. And people are in love with them, or feel like they're friends with them, or feel like they know them better than other people in their lives.
People used to love their Tamagotchis; I used to play with them. These little pets you could feed, clean up after, and pet. Then obviously came Pokemon, and now there are all these digital pets out in the world that people really feel affection for. We feel affection for NPCs, right? Our love interests in Baldur's Gate 3 or the role-playing games we play online. So it's remarkably easy for humans to fall in love with something digital, somebody who's not an actual human being, because loving another human being is so much work. They're flawed, they're human, sometimes they don't love you back. Humans are very hard, but these avatars are much simpler versions. They are like the processed-food version of the actual hard work of loving another human being.
So this is where you can see these two trends coming together. One: we now have increasing fidelity of reproduction, married with an increasing pool of data on your private life, generally with people's consent. Two: there's only a very small guardrail preventing AIs from claiming that they're conscious. If those two things are combined, then guess what: we've created a new human species.
(21:56) Jeremy Au:
Today, you and I are called "homo sapiens", the species of humans. "Homo" means human, and "sapiens" means wise. So we are the first supposedly intelligent humans. Two other species in the past were "homo erectus", the upright human, able to walk standing, and "homo neanderthalensis", the Neanderthals. So my joke to my friends is that we're witnessing, very soon, the creation of a new lineage of humans called "homo digitalis": humanity, but purely digital. And the first people of homo digitalis will be ourselves. Very soon, somebody is going to die, some billionaire or millionaire who gave full consent for an AI model to train on and replicate them. And so the first few members of homo digitalis will be homo sapiens who didn't want to die and have been resurrected as a new species.
So how does that play out, right? Imagine you're a millionaire or a billionaire. You get your voice scanned, your diary scanned, you do all those interviews to train the AI, and you remove the guardrails that prevent the model from claiming to be sentient or to feel pain. Effectively, you have a model, at the current state of technology, that says: I am a likeness, I am Jeremy, I am Mark, I am Felicia. It claims to be this person, and because of that training data, it's able to speak, look, walk, and talk, just digitally on the screen.
The beauty of this is that the model can continue to improve. You can continue to provide more training data while you're alive; that's one side of it. But the models themselves will also continue to improve over time, becoming more and more human-like, with a better understanding, a better replication fidelity, of the human subconscious and unconscious. You have these two things racing in the next 10, 20, 30 years. And basically, you get these AI human agents who are not only human-like but have a direct lineage to a homo sapiens human being. So instead of a parent having a child in real life through flesh and blood, you have a human, like you and me, who has a child, a clone of themselves, that is online. Now, obviously, you and I prefer to be ourselves; just because we're cloned doesn't mean we have any empathy for this version of ourselves, and we obviously feel superior as flesh and blood. But what I'm trying to say is that when we die, individually, these agents can outlive us.
So when we die, a lot of us will say: I don't really like this clone version of myself, I don't feel empathy for this clone version of myself, but when I die, I'm happy to have this version of myself live on in a digital afterlife, continuing to be present in the day-to-day. If you fast-forward 20, 30, 40, 50 years, you can really imagine that the next generation, generation alpha, generation bravo, whoever that group is, will grow up with these AI agents or avatars of all these boomers and millennials and older folks just hovering around, available. I want to talk to Martin Luther King? I can talk to that public persona. Or I can talk to a version of Jeremy Au, or a version of Chamath from the All-In podcast, and so on and so forth. You have all these high-fidelity versions of these people.
Speculating further, you can imagine some of the fun legal mechanisms by which these agents could be accorded rights. In a previous podcast, we talked about how humans have human rights, but corporations also have rights through personhood. In America, for example, corporations have the right to free speech, so they're able to do political campaigning and donations. So you can imagine a fun science-fiction dynamic here. As a human, you bequeath your assets into a trust that has some level of corporate rights, one that basically says: I listen to whatever this human says in the short term, but when this human passes away, I listen to the agent, the replica, the homo digitalis version of this person. There could obviously still be some custodian, a flesh-and-blood independent director who represents the company in the real world, but you can set it up so that this person always listens to the sponsor, the digital agent of the firm. And so you can imagine this very fun shell or trust structure where the humans involved represent the various desires of the deceased.
But the corporate will, the strategy of how to talk, how to walk, and so forth, is basically a digital clone of the person who has passed away. And even today, if you think about it, there are family trusts enacting the mission of somebody who passed away one, two, or three generations ago, right?
So you have these philanthropic trusts where the person passed away a long time ago, but the mission, providing educational access to a certain number of scholars, providing for that person's kids and grandkids, is still in action, and the lawyers are making sure it happens. So what I'm trying to say is that, on one level, this is speculation, but on another level, I personally do believe this is going to happen, because these are the three threads. Number one, humans don't want to die. Number two, high-fidelity production and consumption of these models, plus the training data of the private, off-camera world, with the person's consent or the next of kin's consent, combined with the evaporating guardrails around AI claiming consciousness, multiplied by human empathy and desire. Number three, the fact that corporations do have rights. You can imagine all of that coming together in the right package. So somebody basically says, "Look, we offer you a digital afterlife. After you die, you become a ghost, a Force ghost in Star Wars parlance. But you're still around, right? You're still talking, and we give you the economic rights to make money and talk and work and do things like that." And you just happen to live on processing power instead of food and shelter. If you think about it, it's quite cheap, right? A human like you and I, we need to pay rent, a mortgage, eat, travel, and so forth, but for an AI agent to live, you just need AWS processing power. The servers, the data centers, it's quite cheap these days, right?
You can imagine paying $10,000 or $100,000. I'm pretty sure that's going to buy enough processing power for low-cost, high-fidelity, high-frame-rate, 24/7 day-to-day operations for quite a number of years. So it'll be fun to see how people react to it. What does it feel like to have your uncle, who's very rich, be crazy enough to create an agent of himself, put together a trust to let it live on after him, and then have that agent operating and talking to you? I think society is going to push back against this, as you can imagine. Even I, when I think about this, think about it with dark humor, but also with skepticism about what's going to happen. But what I'm trying to say is that all of these mechanisms are already available to be stitched together for the new species of homo digitalis, and it's going to be you and me, right?
So all you and I have to do is put our money together one day, sign the consent forms, scan ourselves, and put ourselves into a new, digitally immortal version of ourselves, and it'll keep going. And I think for a lot of us, it'd be like, yeah, you know what? I'm dying with some assets. I'll carve out 90% or 95% for my kids, but 1% to 5% to create a digital version of myself that claims to be myself. Why not, right? So I think the social norms around this will change, because nobody wants to die. Why have kids? Why die for some other version of ideology or spirituality or religion when you can just perpetuate yourself in a digital version?
So what I'm trying to say here is that the first aliens we're going to meet out in the universe of civilizations are not going to be some other humanoid beings. They're not going to be Klingons from Star Trek with forehead ridges. I think the first aliens we're going to meet are going to be our loved ones, right? Our uncles, our parents, and ourselves, in digital versions. They'll share, to some extent, the same starting point of our memories, knowledge, and skills, but then very quickly diverge, because one side is going to live in flesh and blood, eventually dying and therefore going extinct, while the other side keeps upgrading on high-fidelity AI models, living in a digital world where there's effectively no scarcity, with a different set of economics and social norms. And so I can imagine this upcoming fork in the human species tree.
On that note, I just want to leave you with that conclusion. I'm putting together a point of view on the history of humanity as a species: our motivations, how we try to live longer and not die, and how I think homo digitalis is going to be one of the ways people turn to in order to make it happen.