<?xml version="1.0" encoding="utf-8"?>
<oembed>
  <version>1.0</version>
  <type>rich</type>
  <provider_name>Libsyn</provider_name>
  <provider_url>https://www.libsyn.com</provider_url>
  <height>90</height>
  <width>600</width>
  <title>Series 4 - An AI Update</title>
  <description>In the first episode of this series, Dan and Lee look at proposed changes to Asimov's laws of robotics and the Rome Call for AI Ethics, and introduce the series to come. ________________________________________ TRANSCRIPT For this episode of The AI in Education Podcast Series: 4 Episode: 1 This transcript was auto-generated. If you spot any important errors, do feel free to email the podcast hosts for corrections. Hi, welcome to the AI podcast. This is a new series. Lee, welcome back. Hey Dan, thanks for having me back. I wasn't sure after season 3 if I'd get a ticket to the show this year, but I'm back again and I'm excited to be here. Fantastic. What's going on in your world at the minute, Lee? Oh, Dan, it feels like it's been a couple of months since we were last together on the podcast airwaves, and so much has happened. We had Christmas, we've had lockdowns, we've had unlockdowns, we've had the craziest, strangest Christmas and New Year's we've ever had. The quietest one, I must admit. Yeah. But it's great, and we've been back at work now for a good month or two. It feels like it's been a day, but it's actually been a few months; it's gone so quickly this year. Absolutely. And it's an interesting one for me, working in the education sector, where the schools are away but all the IT teams are beavering away on their projects: sorting out the users, sorting out all the data projects they're doing. They've got a bit of space to do things in January and the beginning of February, so there have been some really busy and big projects happening, especially in the data area, which is fantastic. So it's been pretty busy for me. It's good. I think I forget that because I've got kids. I just assume when they're at home, not at school, that nothing's happening at school. But, as you say, it's not. 
It carries on. Yeah, that's absolutely right. And the news has been carrying on as well, hasn't it? There's been a lot happening recently. Has anything caught your eye? There's been a lot going on, hasn't there? We should probably do a bit of a roll call on the news, shouldn't we? Look, let me start with one and then I'll let you throw some things in there. One of the things that stood out to me, and obviously I'm not throwing stones at our friends at Google in any way, has been following the story of Google and their ethics teams. They've "fired", and I say that in air quotes, two of their ethical AI leaders over the last four months or so. That leads to some interesting questions being asked around the tech industry and ethics, but what got a bit buried in there was the accusation that research papers, and a lot of companies, like Microsoft as well, release research papers, were being vetted by Google's legal teams before they were released. It raises this really interesting question about the sanctity of research: research should be pure, but at the same time tech companies have access to vast amounts of information, and there's often a commercial end in mind. So, as I said, I'm not questioning it at all, but it's an interesting conversation about research and economic outcomes. The same thing happens in big pharma, where large companies are doing research with drugs, and the research that comes out is sometimes debatable, or needs to be taken through a lot of rigour, which is very difficult in new fields. So many of the tech companies are getting together and trying to work out their own 
responses and their own thought processes around this. I suppose it follows on from our recent series around ethics and all that. One that jumped out at me in terms of the news: you know I support some of the Catholic systems in Australia, and yesterday I was looking at a video that came out which is an update on the Rome Call for AI Ethics. Basically, what happened there was that about a year ago the Vatican came up with their own principled ethical approach to using AI and technology. The Vatican's stance on that is quite interesting because it laid out six broad principles, very similar to the Microsoft ones, and I know Microsoft and IBM have aligned to these. It's around transparency, inclusion, responsibility, impartiality, reliability, and security and privacy. Those are the principles I'm very quickly reading out from the reports, but that's been in the domain for a year now, I suppose, and yesterday there was an update where Brad Smith and one of the leaders from IBM also talked about their approach, linking in with that, because we've seen a lot of this technology being used, especially through COVID. So it's about realigning and recommitting to those principles as well, which is quite interesting. That's interesting. It's great, and I never think about it from that point of view. That's another interesting ethical intersection of the church and science research, but it's great that there's good alignment, and it makes sense. So, that's fantastic. Awesome. Well, look, there's another thing I noticed in my research prior to getting back on the airwaves. You and I have talked a lot about some of the old sci-fi of AI, and yeah, we're big fans of Isaac Asimov and the I, Robot series. 
So, I found that just this last December a professor decided to rewrite those original laws of robotics that we hold so dear, the three laws of robotics that Asimov wrote. So there are now four new laws that are more aligned to AI. What are they? What are they? Tell us. So excited. Well, look, we should put some details in the show notes, but essentially these are the four laws. New law number one: AI should complement professionals, not replace them. I think we can all largely agree with this; it's akin to the idea that we don't want AI to just replace our lives, we want it to work with us. And the professional bit is aligned to, you know, human pursuits of great importance. How do we align to those? So, I can go with that one. Yeah. New law number two: robotic systems and AI should not counterfeit humanity. So, this is the uncanny valley, you know, the Stepford Wives thing. This idea that technology is not there to replace us. If you think back, and we're back on Google, sorry Google if you're listening, do you remember when Google did the phone call to the hairdresser to book an appointment, and it was very uncanny valley? They got a lot of pushback on that idea because it was pretending to be human. So I think this is where deep fakes would land as well, I suppose. Oh yeah, we should get to deep fakes. There's some other news on that one. But yes, you're absolutely right, that's where deep fakes fall. Law number three: robotic systems and AI should not intensify zero-sum arms races. Now, this is a nice fancy way of basically saying we don't want robots and AI to start wars that we don't have any control over, you know. And this is another great example: WarGames. We've talked about this in our chats. Do you remember? In WarGames, the WOPR was working on the principle that basically war is pointless. 
Yes, you know, it's like tic-tac-toe: there's never a winner. And this is the same argument. A zero-sum game is a no-winner game, and we don't want AI fighting just because it logically tells itself it can win, even when it can't. So again, I think that makes sense. And the last one, law four: robotic systems and AI must always indicate the identity of their creator, controller, or owner. So you know, transparency, accountability, just those core principles. Now that I think about it, there's nothing too much there that I'd say is bad. I don't know if we really needed to change the laws of robotics; I think the three original ones were pretty good. What do you think, Dan? Yeah, I really like that. I think it's always worth looking back at these things in context and learning from history, and then trying to subtly change the wording of some of these to encapsulate new technologies as they come along. It's good to get a law that will move forward through the ages, but it's always good to look back and revise some of these. I think that fits really nicely, especially with some of the things around being a bit more specific about war. That is a good one, because it is really timely, I think; we've got lots of technology that's moving very quickly in that area, around drones and things. And the identity of creators and controllers, that's the transparency of AI that we've been talking about in the past. That's great. I like those. And what's the story? Asimov's original laws, I suppose, are just something that's gone down in folklore. They're not law as it were; they were principles. And I suppose this is from a professor, so it'll be good to see if these are adopted or utilized, or people refer to them. Yeah. 
Look, I will put the details in the show notes, but it's a guy called Professor Frank Pasquale, and it's a book he wrote called New Laws of Robotics: Defending Human Expertise in the Age of AI. So I think what we've got here is this great intersection of robotics, which is almost an antiquated idea now, that sort of 1950s idea of robots as physical entities, and the AI bit, because the robots of our lives are quickly becoming these artificial intelligences. So yeah. Yeah, I think you're right. It's modernizing the language to fit with where we're actually at today, versus changing it. Law two, where you talked about deep fakes, goes on to another piece of news you found, doesn't it? About deep fake technology. Well, actually, this really hit my inbox today. I was scrolling through Twitter, as I do in the morning just to see what's going on, and something caught my eye. It was a company, we'll put the name in the show notes, called MyHeritage, and they've got this technology where you can send them a photo of an old, you know, a dearly departed loved one, and they will essentially use deep fake GAN (generative adversarial network) modelling to create a living, and I say living again in air quotes, version of that person. So you can see them animate and move. Now of course, the initial thought is, god, wouldn't it be great to see grandma telling a story again? But it comes back to that issue of: it's not grandma. It's a computer interpretation of how grandma might have behaved, based on everything you might have given them, which in this case is just a photo. So again, it's that sort of dichotomy. Is it a good use of AI? Is it creating a sense of connection and purpose? 
Or are we just creating something that's simply an uncomfortable feeling? I don't know. What do you think? Yeah. No, I agree. It's something that would be very emotive for people, seeing a relative who has passed away. But on the other side of it, there's a degree of humanity there, in that you can preserve people and their thoughts, and maybe some of their comments, for the future. You know, we've all thought about it; there are lots of books and things around these days where, for my kids say, you can get your grandparents or parents to write down where they first met and things like that. Yeah. And it really adds that degree of personality, I suppose, to relatives in the past. So wow, it makes you really think, doesn't it? And I suppose what we're going to do going forward in this series is to really uncover people who are working in this technology, so they can tell us their thoughts on that. I know we've got people lined up who are working in exactly that space, and, without spoiling it too much, I know we've got an interview hopefully coming up with Jade, who's doing a lot with that technology and indigenous aunties and storytelling. So that'll really come to life there, and we can explore what her thoughts are around that entire process, I suppose. Yeah. No, actually, that's a really good point. We should definitely talk about that with Michaela, because as you say, it's a real parallel: you're bringing something back to life, but is it really the thing that you're bringing back? And who's it for? Is it for them or for you? As you say, let's get some other, smarter people to talk about it this season, Dan. I think that's where we need to go. Yeah, absolutely. 
So, there's one other thing I wanted to share with you, Dan, because we've talked about movies as well in the past, and I'm always on the lookout for interesting sci-fi movies. I'm sure, like many of our listeners, I just love a good bit of sci-fi. There was a movie I watched a little while ago, from 2017, called Marjorie Prime, and it's essentially this idea, coming back to this point about deep fakes, of bringing back a loved one in a virtual, holographic way. It's obviously very sci-fi thinking: in order to help people who are dealing with Alzheimer's or other diseases, who struggle to remember things currently but obviously have great memories of people from the past, you can bring back these people and help them rehabilitate and reconnect with the current world. So it's an interesting story, but it got me onto a movie that's just been released, and these days, with the COVID situation, not a lot of movies are getting released. There's this fascinating one called The Trouble with Being Born. I'll give a heads-up to everyone who's listening: it's an R18 movie, definitely not one for the kids, because it deals with some very delicate subjects. But it's the whole idea, if you remember the movie we've talked about before with Haley Joel Osment in it, where he's the boy David, it's called A.I. of course. Yes, I remember the movie. Yes. Similar kind of construct. There's an android that comes back, and it's built in the shadow of, or as a model of, a guy's daughter who was lost, presumed, and who knows what happened. I won't give too much away there, but it's this idea. And what's really fascinating is that it's not CGI. 
It's actually an actress, a young girl, wearing a silicone suit to look like an android, to look robotic. But of course, it's a human being. It's really interesting. I've only watched the trailers of it so far, I haven't really watched the whole thing yet, but it looks to be a really interesting insight into the importance, or the impact, of true artificial intelligence and that human experience. So, one to watch. And one of my favorite Black Mirror episodes, and I know I keep referring back to that show, is one which we'll put the links to in the show notes again, which is about, I suppose, something almost like an Amazon Alexa assistant that connects with a girl, and it takes that right to the end consequences. I won't spoil it for anybody, but that's a really interesting episode as well, to think about the connection between these Alexa-style assistants and people; it also borders on addiction and connection, and how people interact with technologies. It's really, really interesting. I love the Black Mirror episodes because Charlie Brooker and the writers there really push it to the limit. Yeah, very interesting. I should put that one down, because if people watch those two, it'll really stretch your mind and take your thought process to the next level, I suppose, to really think about, well, what are the consequences of these things? So I'll definitely put that on my watch list. Definitely. Yeah. Look, and that's what we want to do with these podcasts, I think. Thinking forward, we want to get people on who are going to stretch your thinking, give you new ways to look at the world, people who have a different perspective, certainly from you and me, and hopefully from some of our listeners. So I think it's going to be an interesting season, Dan. Looking forward to it. Yeah, absolutely. 
Okay then, Lee. So, let's go out and interview these really cool people. Let's do it, Dan. We'll see you soon. Cheers. </description>
  <author_name>AI in Education Podcast</author_name>
  <author_url>http://aipodcast.education</author_url>
  <html>&lt;iframe title="Libsyn Player" style="border: none" src="//html5-player.libsyn.com/embed/episode/id/18244121/height/90/theme/custom/thumbnail/yes/direction/forward/render-playlist/no/custom-color/88AA3C/" height="90" width="600" scrolling="no"  allowfullscreen webkitallowfullscreen mozallowfullscreen oallowfullscreen msallowfullscreen&gt;&lt;/iframe&gt;</html>
  <thumbnail_url>https://assets.libsyn.com/secure/content/97850993</thumbnail_url>
</oembed>
