Eugene Cheah: Generative AI Boom, Societal Hallucinations & UI Testing Automation - E232


 

“In the industry, we call it hallucinations. It’s that very smart kid who doesn’t have the answer, who maybe took a beer, is half drunk, and isn’t a hundred percent sure. That’s what AI does a lot. The problem is that because it’s a really smart kid, it can be hard to tell, unless you’re an expert in the field and have really studied it. If you’re not, it’s almost impossible to tell. It’s very scary; we need to put a warning label on this technology.” - Eugene Cheah

Eugene Cheah is the founder of UIlicious - a low-code tool that helps development teams automate UI testing at scale, with AI assistance. He is a software engineer who dived early into the startup scene, spent time in the enterprise financial space, and returned to the startup world as a founder, all while working on his own open-source projects behind the scenes.
 
 
Jeremy Au: (00:29)
 
Hey, Eugene, really excited to have you on the show. You are the co-founder of an incredible startup, you have been playing around with generative AI, and it's been great brainstorming with you. This is a very timely topic that so many folks are excited, confused, and curious about. So please introduce yourself.
 
 
 
Eugene Cheah: (00:48)
 
Hi, I'm Eugene. I'm the CTO of UIlicious, and our startup does automated UI testing. More recently, we also integrated generative AI to automate the generation of UI testing scripts. This is something that has been in the works since the startup was founded: we have been building up the required data and datasets, and with the very recent changes in AI, as everyone has experienced in the past month, things have moved very rapidly. We were finally able to get our test generator out to the public, and we are starting to get feedback on it. It's a very exciting time, not just for us as a company, because we thought we would need at least two or three more years to reach this goal, and we suddenly reached it.
 
But it's also a very interesting time for me personally, because I've been working in this generative AI field for the past few years. In the past month alone, I have read more papers about AI models and generation than in the past few years combined. It's all because of the explosion. Everyone is rushing things out, and things are moving so fast. People think it's very fast when they see OpenAI and ChatGPT; things are even faster on the paper side, because a lot of people are saying: we have done this theoretically, trust us, if you have the computing power you can replicate it. It's very interesting seeing these things.
 
 
Jeremy Au: (02:11)
 
Thanks so much for sharing. I want to ask about how you're using artificial intelligence in the product at UIlicious, and what exactly the learnings have been from there.
 
 
 
Eugene Cheah: (02:24)
 
One of the things I realized from bringing artificial intelligence into the product is that one of the difficulties is figuring out the use case. It's something a lot of people stumble on. This is a completely new technology, and a lot of us do not know what we are going to use it for. I was quite nervous about this, and someone told me: hey, why don't you come up with a list of questions, so you can better prepare in your head for what could happen? And I didn't know what the list of questions was, because I had no idea, and I was stuck on it for a week.
 
Then that friend eventually said, maybe just ask ChatGPT. And I was like, oh yeah, crap. The topic we wanted to talk about was AI, I'd been staring at it for the entire week, and I could have just asked it. It's that very obvious thing that you don't know until you realize it. That's where I feel a lot of people need to start thinking: hey, how can we apply it? One of the most practical framings I use is to think of the AI as an intern. You never expect an intern to be a hundred percent right; you expect them to get things wrong, and you expect to need to fix their work. An intern is also someone who, when they come in, needs some basic pointers and training. You need to prepare instructions, the same way you do for the AI bot. And an intern is someone who, if you properly set up the process around them, can bring a lot of value to your company. As long as we use that framing, I believe a lot of founders can find ways to use AI that are not so obvious.
 
So for us, it was test generation. We are also exploring other uses of AI, for example as our first frontline in customer support, and we are building additional tooling into the product. Our end goal for UIlicious is to say: all you need to do is tell UIlicious, I want to test this website at this URL, and we come up with a test plan. This part is not done yet. Like: hey, I'll test this, is this good for you? Then you say yes, and it generates all the test scripts.
 
Okay, these are the tests, I ran them, these are the results. We want it to be that seamless, and it's all about connecting the dots with different AIs along the way. On the technical side, there are three big major techniques. The first is what everyone has been doing with ChatGPT: providing it some instructions. That's basically prompt engineering, and the model does the rest. The second is embeddings, which basically use the AI for search, and which can be combined with the first. The third is fine-tuning, which is training the AI on custom data. That's the most complicated, but it can also be the most useful for, let's say, the medical field, because you can train the model on medical data specifically and make it a medical expert.
 
Once you start understanding that these are the three basic building blocks, and treat them like functions you can link together, you start to realize there are a lot of different combinations of use cases. It's just a matter of whether you have the time, and that's what I'm doing for the UI testing industry. I think a lot of people can do this for other industries, industries I have no relation to at all. That's why I encourage you to explore, or at least to think in those terms, like the intern. I use the intern model a lot because interns need to get paid. That's why I like to remind people that AI is not cheap, so treat it like an intern.
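To make those three building blocks concrete, here is a minimal sketch that treats them as composable functions, in the spirit Eugene describes. The embed() and llm() functions are toy stand-ins, not a real model API: embed() is a bag-of-words counter so the example runs on its own, and llm() just echoes its prompt. A real system would swap in an embedding model and a chat model, and fine-tuning would replace the generic llm() with one trained on custom data.

```python
# Minimal sketch: the three building blocks as composable functions.
# embed() and llm() are hypothetical stand-ins for real model APIs.
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": word counts. A real system would call an embedding model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def search(query: str, docs: list[str]) -> str:
    # Building block 2: embedding-based search over your own documents.
    q = embed(query)
    return max(docs, key=lambda d: cosine(q, embed(d)))

def llm(prompt: str) -> str:
    # Stand-in for a chat-model call. Building block 3, fine-tuning, would
    # swap this generic model for one trained on custom (e.g. medical) data.
    return f"[model response to: {prompt[:60]}...]"

def answer(question: str, docs: list[str]) -> str:
    # Building block 1 (prompt engineering) combined with block 2 (search):
    # retrieve the most relevant document, then instruct the model with it.
    context = search(question, docs)
    return llm(f"Using this context:\n{context}\n\nAnswer the question: {question}")

docs = ["UIlicious runs automated browser UI tests.",
        "Embeddings power semantic search."]
print(answer("How do I run a UI test?", docs))
```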
 
 
Jeremy Au: (5:58)
 
We're gonna dive into that for sure. Could you share a little bit more about how you moved from being a developer to being a co-founder and CTO? Obviously, part of that journey was being an entrepreneur first, a path both of us took as well. So I wanted to hear a little bit about that journey, and the decision to be a founder.
 
 
 
Eugene Cheah: (06:15)
 
For me, my story is that I've been coding since secondary school, since the very early Flash days. So I apologize for all the Flash popup banners I made; I'm quite sure they annoyed a large percentage of the internet, but I got paid. The reason I did that during secondary school was that I wanted to play my Xbox and my PlayStation, and my parents refused to pay anything for them.
 
So I needed to get the money one way or another, and that's how I started in programming and development. Somehow, through some strange twists, starting with working for one of my ex-teachers, I went from one project to another. I started being passed around from one company to another, and the ask was always: hey, we need to do something like this, do you know how? And I'd be like, I have not done this in this programming language, but I can learn. And it's like, okay, sure. And that's what I did. By the end of JC, before university, I already had a resume and job offers that beat computer science graduate pay at the time. And that's how I got into programming, because I basically decided:

Since I could get a job at any time anyway, I made the most questionable decision: you know what, let's just go straight into the startup scene. From then on, I worked at one startup after another. This was before startups were even hot in Singapore, so one of the startups I worked at was doing SMS broadcasts. Remember the Singapore Idol era and the voting systems? Yes, I was working on those. Somehow I liked the startup scene during that time. I enjoyed it, even though it didn't pay that well. Subsequently I was like, okay, I need to stabilize things, so I went into the enterprise scene. I ended up doing software for all the major insurance companies in Singapore except for Aviva. I want to cross that off my list at some point.
 
I hope Aviva at some point buys UIlicious, because it's a personal thing. I subsequently got tired of enterprise work and got very frustrated with the testing problem, where basically a lot of things were tested manually. I was working on a mobile app project where the CEO misread the mobile app launch date during the AGM, and it was on me and my team to rush the product out one month earlier. Because we were in the financial industry, no one wanted to cut short the testing, so the development time was cut by a month instead. I was pissed off. I was like, why can't we automate this? And after that I was like, I'm going to do this. So I went back to the startup scene with my co-founder, and here we are.
 
 
Jeremy Au: (09:05)
 
There are a lot of noobs out there, right? They understand there's engineering, and then there's product, and then there's this thing called UI testing, which is what you're automating. So why is UI testing important, for those who are fresh to the scene or who see it as just part of the process?
 
 
Eugene Cheah: (09:23)
 
I think a lot of people already have direct experience of why it's needed, especially when it's not there. A very common frustration is trying to book an airline ticket, only for it to throw an error, or trying to book a taxi,
 
only for the booking to not actually go through. It's just an error, and that's very frustrating for the user. And companies lose money if they fail to handle these errors. Even more notoriously, a few years back, one of the major financial institutions had a software error where basically a zero was missing.
 
A lot of money went haywire there. Things like that cost money in the literal sense, either on a big scale for major errors, or sometimes on a much smaller scale. There was one famous case study of a burger chain in America that had a very low error rate in a certain checkout process, but because it's literally a major American chain, when someone fixed that one error, they gained something like a million dollars in revenue.
 
Yeah. Testing is important because you want to make sure your users are happy, and, for business owners, that everything runs smoothly. Especially in this era where everything's on the internet, there are so many browsers (Chrome, Firefox, IE), and mobile devices are an entire rat's nest of their own. Testing itself is a pretty hard challenge: not only is the internet unstable, you have a lot of low-end Android devices with various screen sizes, which makes for a very complicated form of testing as well. So it's about reducing these errors to make sure things go smoothly and well. And if your business is worth billions of dollars, every fix is worth something.
 
 
Jeremy Au: (11:11)
 
So how does UI testing happen today? Is it a bunch of engineers who load up their iPhones, check that everything works, and call it good to go? Is it outsourced?
 
 
Eugene Cheah: (11:24)
 
Outsourcing is one common approach, especially in Singapore, but in general 60 to 80% of the industry still does things very manually. We do have some forms of automation here and there with companies like UIlicious, but most people are still doing it manually, mostly because engineering manpower is expensive. A lot of existing testing tools, for example one of the major ones, Selenium, effectively require you to know HTML and CSS in order to write the test, and that has nothing to do with how things on the screen actually look. On top of that, engineers have become incredibly expensive, a rising cost these days, so a lot of companies simply opt to either take a calculated risk or do it manually.
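For readers new to the tooling, here is a minimal sketch of the kind of Selenium test Eugene is describing. The page URL and CSS selectors are hypothetical, and running it requires the selenium package plus a Chrome driver; the point is that the test is written against HTML and CSS structure, not against what a user actually sees on screen.

```python
# Sketch of a Selenium UI test. Note how every step is tied to DOM structure:
# if a designer renames a CSS class tomorrow, the test breaks even though the
# page still looks and works the same to a human.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com/login")  # hypothetical login page

driver.find_element(By.CSS_SELECTOR, "input#email").send_keys("user@example.com")
driver.find_element(By.CSS_SELECTOR, "input#password").send_keys("secret")
driver.find_element(By.CSS_SELECTOR, "button.btn-primary.login-submit").click()

# A human tester would just check "am I logged in?"; the script checks the URL.
assert "dashboard" in driver.current_url
driver.quit()
```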
 
In fact, in my experience with a lot of enterprises, what commonly happens is that before the launch of a major website or app, they conscript company members internally, the sales team and so on, and basically you have a bunch of people with a hundred different devices, all just pressing buttons on the screen manually.
 
I find that ridiculous because, hey, we have super advanced things: UI frameworks, AI technologies, serverless, cloud, whatever. You can throw in all the high-tech buzzwords. And when we put everything together, how do we check that it works? A hundred people pressing buttons. Traditionally, it seems like only a rare few companies can afford to build the technical teams to fully automate all of this, and that's what we are trying to change. Big names like Google, Microsoft, and Facebook are able to fully automate, but not everyone is Google, Facebook, or Microsoft. Hence this weird situation where, despite our progressing technology, we test manually for the most part. We want to change that, and we believe it can be changed.
 
 
Jeremy Au: (13:14)
 
You mentioned how you feel it's ridiculous that humans are doing this. You're obviously automating to some extent, and yet you're also experimenting with AI. Could you share a little about what that transition looks like from your perspective, for UIlicious?
 
 
Eugene Cheah: (13:28)
 
AI has always been the end goal for us. When we launched UIlicious, there were quite a number of testing tools that tried to incorporate AI early, but we felt that was misaligned, mostly because one of the most common problems in test automation is that it's very brittle. For example, say your test is too specific: I test clicking this button on this part of the screen, and tomorrow the UI designer moves or restyles that button. One of the reasons test automation wasn't easily adopted is that it breaks at these very brittle breaking points where humans don't break. Humans adapt automatically.
 
One of the things AI was used for very early in testing was: when these breakages happen, maybe we use AI to automatically fix the test scripts. This sounds perfectly fine in theory, except that in practice what we realized, at least at the time, was that if you start automatic fixing, a lot of testers and project managers complain that some things that were supposed to be flagged as bugs got 'fixed' into passes.
 
At that point, if everything is just automatically fixed to pass, what is the test for? That was one of the struggles in the early generation of integrating AI into testing. What we tried to do differently was to look at it fundamentally: no, the problem is not fixing the test scripts after they have run. We felt that if your test scripts are designed in a way that is stable and reliable, you shouldn't need to constantly fix them.
 
So that's what we did first, without AI, ironically. We wanted to make the test script system reliable, in a way that any tester can view the report and say: hey, this makes sense, I understand it, this passed. That was very critical for us, and it's what we did during the first few years. After that, we wanted to tackle the second part of the problem, which is writing the tests.

A lot of people have very loosely defined test scripts. Look at the manual testing world: how do you write a test script? You write, hey, go to this website, click on these buttons, check everything. You probably write it in English; even though there is a testing language, you probably won't write in our syntax. That's where the next step makes sense, because a lot of people already have their test scripts for manual testing written in plain English, passed around as notes, Excel spreadsheets, and Google Docs. We wanted to let you copy and paste that over and translate it into a more formal testing programming language, which can then be verified (hey, these are all the correct steps), tweaked, and then run.
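As a rough illustration of that translation step, here is a minimal sketch, assuming a plain-English test plan and a stubbed llm() call in place of a real model. The generated commands are illustrative UIlicious-style steps, not an exact reproduction of the product's generator.

```python
# Sketch: translate plain-English manual test steps into formal test commands.
def llm(prompt: str) -> str:
    # Stand-in: a real implementation would call a language model here.
    return ('I.goTo("https://example.com")\n'
            'I.fill("Email", "user@example.com")\n'
            'I.click("Login")\n'
            'I.see("Welcome")')

manual_steps = """
1. Go to example.com
2. Enter the email address
3. Press the login button
4. Check that the welcome page appears
"""

prompt = ("Translate these manual test steps into formal test commands, "
          "one command per line:\n" + manual_steps)

script = llm(prompt)
print(script)
# A tester then reviews the generated steps, tweaks them, and runs the test,
# keeping a human in the loop instead of auto-passing anything.
```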
 
That was something we built up over the years with the dataset, and something we're glad to have finally reached, at least in a first working version that we felt the public could start using. That was an exciting thing for us.
 
 
Jeremy Au: (16:30)
 
That brings us to the news of the year, which is generative AI, ChatGPT, OpenAI. So what do you think happened? It felt like AI was on this linear path: it's always coming, it's always coming, but it's not here. And then suddenly, over the past year, it did a bit of a hockey stick, in terms of real-life applications but also in awareness, at least among the tech ecosystem. So what do you think happened there, for those who are new to this?
 
 
Eugene Cheah: (17:01)
 
One of the things I found very amusing during the past few months, and I think it's a very 'in hindsight' moment for the industry: one of the trends in neural networks was to keep expanding the data. Models seemed to get somewhat better, but it wasn't hockey-stick better, it was more like linearly better. And the very interesting thing that happened recently is that what we were doing previously, at least with some of these very large-scale AI models, was basically training them on a very broad set of skills at a very low level.
 
Think of it in terms of school grading. You could have a university-level AI in a topic, or a primary-school-level AI in a topic. We were basically training the AI to be primary school level at everything under the sun, which people felt was not that practical or useful. But it laid the foundation, because when primary school became secondary school, and secondary school became JC, how many of us can say we have secondary school or JC knowledge of every topic under the sun, in every language under the sun? For example, my chemistry is terrible, my biology is terrible.
 
Honestly, I don't think I'm any better than the AI in those topics. And the other major thing we realized was: okay, why don't we take one of these big models and narrowly train it to be really good at one thing, which is following instructions. This is what OpenAI did. They call it reinforcement learning from human feedback; that's the more technical term, which you can search up if you're interested, and the result was InstructGPT. But basically, the key thing is that we taught it how to obey instructions, like how we teach a primary school kid to obey instructions from a parent, all the way through secondary school to JC.
 
Overnight, these two factors clicked, because now you have trained something that is able to follow instructions, and it also has very broad knowledge of almost everything. As flawed as it is, it's not going to beat any university graduate in their topic. But these two things clicked together, the way they do in education, and it started working.
 
And it's very funny, because we built neural networks using techniques we understood from the biology of the human mind, and who would have thought that a neural network modeled after a human brain would benefit from training and education the same way a human does? It's one of those things that is very obvious now that we know it, but it was not obvious previously.
 
So now, when we apply the same technique to smaller models, we realize that the smaller models we built a few years ago can actually do much better. That's why you're seeing a lot of sudden jumps. Even though OpenAI is at the forefront of it, they have shared what they've done, and people have been taking it and saying, okay, why don't we try this on these other AI models? And it actually works.
 
It makes things much better, and all of a sudden a lot of previously not-useful AI models become very useful in various cases. That was literally the case for us too. We had an AI model that was spitting out rubbish test code that was not usable, we made the change, and it was like, oh crap, it's usable. It made that big a difference.
 
 
Jeremy Au: (20:19)
 
That's interesting, right, that neural networks are modeled on humans. I recently had two daughters, one a two-year-old and one a seven-month-old, and the joke I always tell people is that I also created two neural networks, and I'm just watching them slowly engage, explore, and realize that something is hot: the food is too hot, so don't eat it, and then slowly learn how to blow on the food to make it cooler. And there's a bit of that hockey stick too. Over the past month, there's been a bit of an explosion in words, right? A hockey-stick shape is starting to come up.
 
Previously she learned maybe one word a month, and now she's learning, I don't know, ten words a week. So, a little bit of a hockey stick. How else do you think training humans is similar to training neural networks or AI? What wider parallels do you see from your perspective?
 
 
Eugene Cheah: (21:11)
 
I would say there is a very accurate parallel, besides the timing: observing AIs making mistakes in the very same ways we can observe in humans, or animals, especially in the growing, formative years. One of the things I noticed recently, for example, is from training my cats to sit down before eating their food.
 
I always ring the bell, tell them to sit, and only feed them after they sit. That worked for the most part. But one of the common mistakes in AI, and we use the term overtraining, is that when the training scenario is too exact, the model may end up learning things too specifically. I assumed they were learning to sit when I rang the bell. It turns out that one day, when I didn't lay out their food mat and rang the bell, they didn't sit.
 
It was the food mat that made them sit, not the bell. So I was like, oh, overtraining. I had always been training with both a bell and a food mat, and I never noticed; I always thought it was the bell. These are mistakes that we will see AI make, and things we will need to fix. But one thing that is very different from real-life humans, or cats for that matter, is what happens once we notice that they have learned something wrongly.
 
With humans, we just spend the next few years making the change, continue educating them, and make incremental improvements, and we move on from there. For AI models, one of the big differences is that if we notice we made some fundamental mistake in the training process, sometimes the answer is to go back to fundamentals, because we can rewind the years of learning we built up: okay, from this year to this year, this training that we did, let's just take it out.
 
Then, with all the data, in a way that is throwing money at the problem, we throw enough GPUs at it to compress that whole five years of progress into one month, burning a lot of electricity in the process. That is something very different that we can't do with real-life human beings: we cannot go back, reset the process, and change things accordingly. If only it were that easy to remove our bad habits.
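Here is a toy, self-contained sketch of the food-mat-versus-bell mistake. The one-rule learner is deliberately simple: because the spurious cue (the mat) and the real cue (the bell) always co-occur in the training data, the learner cannot tell them apart, and it happens to latch onto the wrong one.

```python
# Overtraining on a spurious cue: the mat always accompanies the bell in
# training, so a naive learner can "learn" the mat instead of the bell.

def train_one_rule(examples):
    """Return the first feature that perfectly predicts the label in training."""
    for name in examples[0][0]:
        if all(x[name] == y for x, y in examples):
            return name  # looks perfect on the training data
    return None

# Every training example has bell and mat together (both 1) or neither (both 0).
training = [({"mat": 1, "bell": 1}, 1),   # bell rung on the mat -> cat sits
            ({"mat": 0, "bell": 0}, 0)] * 100

rule = train_one_rule(training)
print("learned cue:", rule)          # "mat": the spurious feature came first

# Deployment: ring the bell WITHOUT laying out the mat.
test = {"mat": 0, "bell": 1}
print("predicted sit:", test[rule])  # 0 -> the cat doesn't sit
```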
 
 
Jeremy Au: (23:28)
 
I think there's actually a point there, right? I was just on a WhatsApp group with a whole bunch of VCs and founders, and everybody plays Dota. What was interesting was the realization that we could ask ChatGPT about Dota strategies. So everyone was like: if we have a Triance and a Naga, what are the odds of us winning? And the answer was actually pretty good. After that we asked: what's the difference between a high skill rating of 600 versus a high skill rating of 700, in terms of play style and approach?
 
Who's more likely to win? And it was a really good answer. And the awkward reality, as I was reading it, was that I actually can't tell whether it's a good answer or a bad answer, because I'm an amateur at Dota. Compare that with the topics I have been plugging into it myself, which have been more like: hey, tell me about the ethics of AI, or predictions about AI in startups. Because I'm an expert on those, it's more obvious to me when the errors are there, or what needs to be unlearned, relearned, corrected, or edited.
 
I thought it was an interesting dynamic, because like you said, there are so many topics, chemistry, biology, that I'm just bad at. On topics where I'm on the production side, I can tell when there are errors and how it could be improved. But from a consumption perspective, in an area where I'm a non-expert, it's actually very difficult for me to tell if it's wrong, or even inaccurate. So it feels like there's quite an asymmetry there, from a human perspective, in this dynamic of understanding AI, and I'm curious what you think.
 
 
Eugene Cheah: (25:13)
 
Yeah, that is something I flag very heavily. It's probably one of the biggest dangers of this current generation of AI; even the OpenAI founders themselves flag it. Unless it's instructed specifically not to, it will try to, well, lie might be an extreme word, more like guess and make its best shot at an answer. In the industry, we call it hallucinations. Basically, it's that very smart kid who doesn't have the answer, who maybe took a beer and is half drunk, who isn't a hundred percent sure but doesn't say so, and just says: I think it's this. That's what the AI really does a lot. The problem is that because it's a really smart kid, it can be really hard to tell. If you're an expert in the field and have really studied it, okay, fine. But on the flip side, if you're not an expert, it's nearly impossible to tell. And that's a very scary thing; we actually need to put a giant warning label on this technology.
 
For example, I got it to generate a few recipes just for the fun of it, and they looked very genuine, but I honestly had no idea whether they were good. So I brought them to a few people who actually know cooking, unlike me, who might set the kitchen on fire. One of them said that this recipe would overcook the dish until it was charcoal. I did not know that, and if I had actually followed it, I might have started a fire. So that is a very genuine danger of this on its own. It's like 50 to 80%: sometimes it gets things right, but you actually need someone to fix the other 20%. It boils down to whether I actually have the expertise and knowledge for it. That's the line we have to be careful about, especially as we start using this, because there's a lot of talk about using this in education, about how it can replace teachers, it can replace doctors, and all that.

I'm like, someone needs to be there to make sure people don't learn the wrong things. For the teacher, the first thing you need to teach a student before they're allowed to use this tool is: you need to be able to fact-check it. That's the first thing you need to learn. From then on, it becomes super useful, because having it right 80% of the time is already a first step.
 
It's just about asking it 10, 20, 30 times more questions, scrolling quickly through the answers, iterating through them. So yeah, that's my take on it. It's a strange world we live in, where we have an answer to everything, but it might not be correct.
 
 
Jeremy Au: (27:42)
The tricky part, like you said, reminds me of the phrase: a lie can run around the world before the truth can wake up and get its boots on. This is exactly what's happening, because you can generate so much more content, while verification, by Snopes and all these fact-checking organizations, is still done by hand. Absolutely bonkers, right? And I think the commercial incentive is the part that's really unfair.
 
If it were a pure marketplace of ideas, where every idea is debated on its merit and honesty, then of course the truthful would rise up and out. But that's not true, right? In a marketplace of ideas weighted by commercial incentive, weighted by, I don't know, the ability to spam and get as much SEO for your webpage as possible, I feel the war is weighted away from honesty and away from truth, like you said. It's basically the rich kid going up to the smart kid who doesn't really understand and saying: hey, I'm not asking you to lie, but I need you to generate as much BS as possible so that you and I can make money together. It's a beautiful tag team, and I don't know what the end state of that world is going to be.
 
 
Eugene Cheah: (28:50)
 
Oh yeah, that's one of the things that keeps me up at night. Pardon me if this ends up being too political, but we already had this problem before AI: people were already coming up with bullshit and peddling bullshit to make money, literally for views and to peddle their own set of products.
 
This has been an ongoing problem for the past few years, especially in the US in particular, and even Singapore is not immune to it. A lot of us have had the experience of trying to push back against fake news from our uncles, from our grandmas, and so on, trying to correct them. It's very difficult because of that veneer of credibility. One of the most common tactics is that groups of people end up citing other people: hey, I did my research, here's my citation. But they cite each other in a circle. This forms those bubbles of effectively fake news, and people were already doing that at effectively $10 or $20 per hour, or even less when you contract it out to the Philippines or India.
 
People were already doing that. Now we have just changed that $5-an-hour bullshit generator into a 5-cents-an-hour one. To me, this is a topic that's not just within AI, because we already had the problem; now it's just going to be amplified. In a way, we've just sped up the roadmap, and I'm like, how do we deal with this?
 
I think the biggest irony, from what I'm observing, and I'm living this myself, is that in this topic space there are now a lot of random people talking about random things, speculating about everything from the end of the world to the next Renaissance. I find myself following more and more specific people, rather than aggregation sites: people who are experts in their own right. Some AI experts and also some engineering experts; a couple of examples are Kelsey Hightower for Kubernetes, and Shawn, who is currently more focused on AI, because they will actually filter through the noise of the news and present it to their audience.
 
In a way, we have gone full circle into subscribing to expert magazines, though they're not magazines; they're the newsletters I'm subscribing to, to filter this information for us. Which raises a question for me right now: is this going to be the next phase of how content is created, where you basically have to have trusted and verified content creators?
 
Or is the flood of AI content and fake content going to be a norm that we have to get used to? I'm not so sure which direction we're heading. I just hope that maybe we go the full-circle route, because at least we have a happy ending on that route.
 
 
Jeremy Au: (32:00)
 
I think it's gonna be both. I remember this phrase I learned about two years ago: flood the zone with shit. It's a tactic that was describing politics: when something contrary to what you want is being said, you just flood the zone with more and more news, and all the shit basically overwhelms the thing. So you have a hundred shit pieces and one really good article.
 
That's how echo chambers happen, and everything else. So I think it's gonna be both. The internet is just gonna be full of shit, like a world full of Oreos and processed food, where some people are fasting to avoid the processed food. Most people will just consume everything, and then a very small section of people are gonna do like we said: only follow human-verified creators, no AI inside.
 
 
Eugene Cheah: (32:43)

So these are gonna be the information vegans.
 
 
Jeremy Au: (32:49)
 
What's the solution? Do we need to create truth-seeking robots? Do we need to create AI that automatically labels content as AI? I think I've started to see that on Reddit, with the bots that say, oh, this content came from somewhere else, like a source finder. Maybe there should be hunter-killer robots: AI that tells you how much AI content is in a text.
 
 
 
Eugene Cheah: (33:12)
 
Actually, that might not be a bad idea. There's one approach people are trying right now, which is to create an AI that detects AI. But honestly, I feel this is a cat-and-mouse game, because the moment one came up, in less than one day someone had already figured out how to tell the AI to generate text in another way that bypasses it. But the other one you suggested, where it basically does the fact-check, might be a necessary step forward as a potential way to counteract this. Whenever someone posts something, it does the research on the topic and says: hey, this was said by so-and-so, check the citation.
 
Or: actually, it was this person who said something else instead. Is that possible? I believe so. But is it practical? It takes a lot of computational power just to do that, but it might be the necessary evil, or the future we head towards. Another thing I sometimes find a bit intimidating about this progress is a potential danger right now. OpenAI, for example, states that its AI has been trained on data up to 2021. One of the things we have to be careful of is that, from this day onwards, if we keep training on the whole of the internet and don't filter out the things people are making up out of thin air, it's all going to be fed into the data, which is then going to be fed to the next AI.
 
Basically, we are gonna have AI generating crap that the next AI learns from, rinse and repeat, and that is something we will need to counteract, maybe with, let's say, this citation AI or fact-checking AI and so on. Whether this is the best answer, I honestly don't know, because look at how people reacted, in very violent or very objecting ways, to fact-checking systems on Twitter, for example.
 
They bill it as a form of censorship. Yeah, it's a question we need to ask as a society: is this what we need to move forward? And honestly, I have no idea. I do hope we can just find a better solution, but this might be a necessary evil.

Jeremy Au: (35:26)
 
You suddenly reminded me of eutrophication. I'm not sure you remember that: the feedback cycle for ponds of water with ammonia. There's a certain level of ammonia, and the ecosystem normally has a feedback loop, a homeostasis; it stays in balance. But sometimes, if you add a whole bunch of fertilizer, the algae just go out of control and eat up everything in the pool. They block out the sunlight, and the whole pool just dies. So you suddenly made me remember: this AI thing can just swamp the internet with so much shit that the internet may effectively become unusable for certain purposes. Like facts.
 
And honesty, right? It just kills the thing, and then everybody retreats to private messaging channels. That's a good point you raised, about how the internet can actually kill itself through AI iterating on previous AI-generated information.
 
I guess one question I have, when you think about that, is: how do you think engineers and CTOs should build responsibly, while being self-aware that there is a bit of an arms race and that someone else is going to do it eventually anyway, to some extent? How do you think about that in terms of a code of conduct, or stewardship?
 
 
 
Eugene Cheah: (36:44)
 
I think this is one of the hot topics when it comes to AI in particular, but I am firmly in the camp that we should start splitting up, very clearly, the data collection phase and the AI training phase. We should learn to recognize data collection as its own phase, with its own rules for how we do it.
 
And I would argue that during the data collection phase, we should handle the copyright of the material in a very respectful and friendly way. If a site says no scraping, that should mean something. One of the issues happening right now in the AI space, for example, is the argument being made that, since the AI is like a human, even though you have a no-scraping rule, the AI is just like a human who visited your website, saw it, and learned from it.
 
That is the argument being made when they merge the two phases together: the AI does the scraping and learns from it directly. And they further reinforce it with, hey, this is fair use because this AI is going to be open-sourced to the public, only for that very same company to run a commercial business that uses that open-source model and sells it. I feel we may need to start taking better responsibility for the data we input, be it for copyright, for fake news, or for false content. That is something companies should strive for, to bring more transparency to the process.
 
Subsequently, the next step will be the training side. We have already reached the point where AI models are learning from more text and literature than a hundred humans will ever read in their lifetimes. One of the problems for AI, I would say, is that it's very inefficient at learning compared to a human. For example, say you read one textbook: you learn your physics or chemistry or math to a good degree, maybe with a good teacher as well. For an AI model, maybe one textbook is not enough; maybe it needs to read 300 different textbooks to finally understand. So that is something we can improve on the training side, because one professor, jokingly, just for the fun of it, did the projection: at the rate we are growing the amount of data we need to feed the AI,
 
we will run out of words on Earth within the next 30 years. We've been scaling up on the data side to improve the AI, when we should probably start looking into improving the efficiency of training the model, so that we don't need the whole of the internet to train an AI model. We probably need something closer to what a human would experience in their lifetime. From then onwards, if the data demands aren't as huge, a lot of the ethical and copyright concerns can be reduced, because the data can start being shrunk into a smaller set that is usable and more verifiable for all parties. So I think that's a direction we need to work towards, though I'm not so sure how long it'll take us. Maybe we'll need to read the whole internet for 30 years first before we actually shrink it.
 
 
Jeremy Au: (39:54)
 
Yeah, I think that's spot-on about data ingestion. Right now I can label my webpage 'do not crawl' so it should not be indexed by a search engine (see the sketch below). But I'm pretty sure all my past Blogspot posts, blogs, and Tumblr posts have already been ingested into ChatGPT, for example. I feel like that's already done, and I don't know how you can unscramble an omelet and put it back into the eggs. So, looking forward a little bit: we see generative content happening in text, we see it happening for art, definitely, and I think we see it in video as well, where video filters are doing a tremendous job. I was at a live-streaming event by Bigo Live, and it was interesting because I could see the live streamer in real life, exactly what they looked like, and I could see on the app that they looked quite different: the complexion, chin, shape, nose, dimensions. It was kind of crazy to see those differences side by side. Obviously there's going to be more and more generative video as well, right?
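For reference, this is roughly how the 'do not crawl' label works with Python's standard library: a site publishes robots.txt, and a well-behaved crawler checks it before fetching. The domain and bot name here are placeholders; the catch, as noted above, is that compliance is voluntary, and nothing un-ingests content a model has already been trained on.

```python
# Checking a site's crawl rules before fetching, using only the stdlib.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.set_url("https://example.com/robots.txt")  # placeholder domain
rp.read()  # fetch and parse the site's crawl rules

# Given a rule like "User-agent: *" / "Disallow: /private/", this prints
# False for the private page and True for the public one.
print(rp.can_fetch("MyCrawlerBot", "https://example.com/private/page"))
print(rp.can_fetch("MyCrawlerBot", "https://example.com/public/page"))
```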
 
So what do you think of that world? I was reading a comic, by Penny Arcade, that was very depressed about it: maybe the end state of this is that we end up living in a less human world, a world full of AI avatars and role models and things like that. What do you think about that?
 
 
Eugene Cheah: (41:21)
 
Yeah, it's pretty much any generative content. Even music is another one; there have been a few models out there. People don't think they're as good as pop music yet, but let's just give it time, that's what I'll say. But okay, I'll first give the most positive take, since I've been rather negative for the most part. The most positive take would be what Shawn, one of the tech influencers I follow, proposes: generative AI could potentially be portals to new worlds. I experimented with this on another podcast as well, with a fellow entrepreneur, where we basically used the AI to say, hey, let's do an adventure in this world, and you can just imagine it along those lines.
 
For example, take a Star Wars chapter from this episode to that episode, maybe even a specific book, or the Harry Potter universe, or the Star Trek universe on this planet. Since the literature was there and the AI learned from it, you can actually generate stories in that world, in that flavor; this is the kind of thing the AI has already learned. Moving forward, we could potentially see, in a positive sense, artists or game designers, effectively world builders at this point, design these worlds by instructing the AI: hey, instead of this, I prefer it this way. Just create this virtual world for people.
 
We are gonna see very different roles accordingly, whether that's through a metaverse headset, maybe, maybe not, or maybe just an LCD screen. That would be interesting, and that's another question. But the most positive thing is that we will get to see a new way to experience entertainment, in the form of video games but also a lot more. And even without going the whole nine yards, where the AI generates the whole world and the stories and the characters and everything, even on a much smaller scale, I'm actually looking forward to the next two to three years in indie gaming, because one of the most common problems for indie game creators, at least the smaller studios, was: hey, I cannot create the story I have in mind.
 
I can make the game interactions in the game engine, and I do play some indie games. But now that they effectively have an AI art generator, or even, let's say, a conversation generator, these indie games have the tools to create something very new and very vast at their fingertips, like conversations, even random NPC conversations. The reason it's NPCs in particular and not player characters, I realized, is that players like to come up with things to mess up the system. Players are like reverse rebels, because we like to do terrible things like destroy the shop. But NPC-to-NPC interactions can be a lot more lifelike in this world, and that is not using new technology.
 
It's literally using what we have now. People have done it for fun already: get two AI bots to talk to each other, have them pretend to be two different characters, and you can just watch the conversation. It was a really interesting thing to see, and I'm like, man, this will come into games. So that is what I think from the positive angle.
 
 
Jeremy Au: (44:33)
 
Wrapping things up here, could you share with us a time that you personally have been brave?
 
 
Eugene Cheah: (44:39)
 
I think I touched on this. My parents are Asian, and one of the hardest things I had to decide early on was not to go to university, and to literally just embark on the startup path instead. This was many years ago, before startups were even a thing, so it was a very strange thing for my parents to grapple with, because, like a lot of Asian parents, they grew up with the fundamentals of getting into university and getting a job.
 
I decided to choose otherwise, and it may not sound like one of the bravest things in terms of adversity, but going against your parents was personally difficult, and it was a challenge for me, because being an entrepreneur, or subsequently doubling down on startups, is not a common thing. Wanting to be a startup employee is one thing; to be a startup founder and go down that route is another.
 
Until more recent years, your whole family assumed failure by default, and that was difficult. A lot of your friends did as well. That was something I found personally challenging. I'm glad that in more recent years this has changed very drastically, extremely drastically, because startups are now more accepted in the mainstream as a potential career path.
 
Quite frankly, we have started to see successes like Grab, so people are starting to recognize these things, and we have started moving away from the MNC mindset. Back then, I think there was only one family member who was really supportive of me, and it was that feeling of isolation that was very difficult. Things have gotten much better with my parents naturally over the years, as people became more receptive to startups. Yeah, that was something I struggled with.
 
 
Jeremy Au: (46:44)
 
Great. Thank you so much for coming on the show. I'd like to paraphrase the three big themes I got from this. First, thank you for your excitement and passion about the implications of the generative AI boom: seeing how it's blooming and growing, how it was initially designed from a data perspective, the key innovations and understandings that unlocked the explosion of its capabilities, and how it will continue advancing in the future. The second theme that was really interesting was the exploration of what's going to happen to society, from the easy parts, like indie games with AI NPCs, all the way to fake news, and some good news as well.
 
And I think we talked about how the AI isn't lying, at least not intentionally yet, but it does make things up, and those hallucinations are going to have real-life consequences for how we live, operate, and interact with one another. Lastly, we got to dive into UI testing, from why it's important, to how it's currently done, to how you're looking to automate it, and also drive and enrich it with AI as well. Thank you so much, Eugene, for coming on the show.
 
 
Eugene Cheah: (48:04)

Thank you for having me here. It was a blast for me.