Dr. Chris Hazard is a unique figure in the world of artificial intelligence. He combines experience in software development, psychology, physics, economics, hypnosis, robotics, and privacy law.
In this episode of the UpTech Report podcast, Dr. Hazard offers some startling revelations on how AI and machine learning can inadvertently expose sensitive and personal information, even when that information was not willingly offered.
More information: https://www.hazardoussoftware.com
TRANSCRIPT
DISCLAIMER: Below is an AI-generated transcript. There could be a few typos, but it should be at least 90% accurate. Watch the video or listen to the podcast for the full experience!
Alexander Ferguson 0:00
Welcome to UpTech Report. We hear from Dr. Chris Hazard, a unique figure in the world of artificial intelligence, drawing from experience in software development, psychology, physics, economics, hypnosis, robotics, and privacy law, and known for leading the development of the game Achron. Dr. Hazard is a renowned, award-winning researcher of advanced technology applications, as well as an entrepreneur and public speaker. In this interview, Dr. Hazard offers some startling revelations on how artificial intelligence and machine learning can inadvertently expose sensitive and personal information, even when that information was not willingly offered. He begins by telling us how important it is to fully understand this new technology.
Chris Hazard 0:44
AI right now, it’s very easy to overhype it, and also very easy to dismiss it. The right path is somewhere in between: to understand it, understand how it can be used in your industry, how it can be applied, and what results you’re likely to see, and to make sure that you understand why the decisions are being made. I don’t think it’s such a clear path that we’ve got right now. I think it’ll take a little while to get these systems in place, to get them debugged and tuned, and to understand all the different facets that they will interact with.
Alexander Ferguson 1:14
Dr. Hazard gives us an example of how the application of AI is not always fully considered.
Chris Hazard 1:21
WeBank gave a talk recently at Troy, looking at how they can merge together all these different models from different customers. And it’s great, it’s really powerful in a lot of ways. But at the same time, all that data is being pushed together in ways where we don’t know where it is. If your data is in a decision tree, or let’s say your data influenced the decision tree in some way, in this random forest or in a neural network, it approximated you based on some part of the function. Maybe it approximated you more than somebody else, because you were a more influential data point for some specific example. It’s really hard to tell. There are influence functions, there are ways to tease that out, but it’s just not very tractable, and it’s not what other people are doing right now. So helping to manage this, and helping to use data for good, is one of my driving forces.
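To make the idea of a single record’s influence concrete, here is a minimal, hypothetical sketch in Python. It is not Dr. Hazard’s method, and not how influence functions are computed at scale; it simply estimates how much one training record matters by refitting a simple least-squares model without that record and comparing predictions. All names and data below are made up for illustration.

    import numpy as np

    def leave_one_out_influence(X, y, query_point):
        """Estimate each training record's influence on the prediction for
        query_point by retraining a least-squares model without that record.
        This brute-force approach is a stand-in for proper influence functions."""
        def fit_predict(Xs, ys):
            # Ordinary least squares with a bias term.
            A = np.column_stack([Xs, np.ones(len(Xs))])
            coef, *_ = np.linalg.lstsq(A, ys, rcond=None)
            return np.append(query_point, 1.0) @ coef

        full_pred = fit_predict(X, y)
        influences = []
        for i in range(len(X)):
            mask = np.arange(len(X)) != i
            pred_without_i = fit_predict(X[mask], y[mask])
            # A large change in the prediction means record i was influential.
            influences.append(full_pred - pred_without_i)
        return np.array(influences)

    # Tiny synthetic example: one record is an outlier and dominates the fit.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(20, 2))
    y = X @ np.array([1.5, -2.0]) + rng.normal(scale=0.1, size=20)
    y[3] += 10.0  # make record 3 unusually influential
    print(leave_one_out_influence(X, y, query_point=np.array([0.5, 0.5])))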
Alexander Ferguson 2:15
Dr. Hazard tells us that despite the great promises we’ve been told about the future of AI, the applications of this technology still face major hurdles.
Chris Hazard 2:25
If you train a self-driving car on a million miles of driving on a highway, and you’ve had no accidents, nothing unusual, great, it can drive on a highway. But what happens when there’s a snowstorm? What happens when you’re in a city driving up a hill, and there’s a road that is half cobbled, half not, because you’re in Boston or Pittsburgh or some old city, and it’s snowy and icy, and all of a sudden the car in front of you has its brakes on and slides into you, and you could have easily avoided it? It’s New Year’s, it’s like two in the morning, and people are flooding across the street, not behaving in the ways that they would normally behave. Or it’s Halloween, and all of a sudden there’s a new costume, and all the kids are dressed as something that looks like a statue on the side of the road, and it’s fooling self-driving cars.
Alexander Ferguson 3:07
And Dr. Hazard cautions us that it’s precisely because there’s so much work left to be done with this technology that it’s so important we understand its distinctions, including the difference between AI and machine learning.
Chris Hazard 3:19
I prefer to take a little bit different approach to defining AI and machine learning, and I tend to define it on two axes. We’ve got the classic exploration versus exploitation trade-off in AI. The trade-off is this: if you don’t know something, and there are a lot of unknowns or unknown unknowns, you have to go find out the answers to all of those. That’s exploration. Exploitation is when you know some things, and you know that if you just do this a few more times, you’ll get an expected result that might be very good. So where do you draw the line between those two, and how do you trade them off? There have been thousands and thousands of papers examining that, and it goes by other names as well; it’s closely related to the bias-variance trade-off in statistics for models, and to the multi-armed bandit problem in game theory. There’s a lot of related work. So that’s one axis. The other axis I would define it on is: are you trying to achieve goals, or are you trying to achieve accuracy? The difference there is whether you’re working with data or working with rules and causality. Accuracy isn’t the only thing; you also need to know why a decision was made.
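As a concrete illustration of the exploration versus exploitation trade-off Dr. Hazard describes, here is a minimal epsilon-greedy multi-armed bandit sketch in Python. The arm payouts and the epsilon value are made up; the point is only the trade-off between trying a random arm (exploring) and pulling the arm that currently looks best (exploiting).

    import random

    def epsilon_greedy_bandit(true_means, steps=1000, epsilon=0.1):
        """Epsilon-greedy: with probability epsilon explore a random arm,
        otherwise exploit the arm with the best estimated mean so far."""
        n_arms = len(true_means)
        counts = [0] * n_arms
        estimates = [0.0] * n_arms
        total_reward = 0.0
        for _ in range(steps):
            if random.random() < epsilon:
                arm = random.randrange(n_arms)  # explore
            else:
                arm = max(range(n_arms), key=lambda a: estimates[a])  # exploit
            reward = random.gauss(true_means[arm], 1.0)  # noisy payout
            counts[arm] += 1
            # Incremental update of the running mean for the chosen arm.
            estimates[arm] += (reward - estimates[arm]) / counts[arm]
            total_reward += reward
        return estimates, total_reward

    # Three hypothetical arms; the bandit should learn to favor the last one.
    print(epsilon_greedy_bandit([0.2, 0.5, 0.9]))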
Alexander Ferguson 4:38
Finally, Dr. Hazard describes some shocking ways in which very personal and private information can be discovered and exploited, all from just playing a video game.
Chris Hazard 4:48
If you are undervaluing or overvaluing positive utility events, what that means is, if there’s a positive outcome, it’s like, oh, I got this reward, I got this treasure, that’s really awesome, I really value that. But if I lose this thing, or if I gain this one coin or whatever, it’s not that much. It turns out that, according to one study, depressed people more accurately value positive utility events than non-depressed people. So think about that for a second. Imagine that you wrote an indie game. It’s on a bunch of people’s phones, they play the game a bunch, and it wasn’t that successful; maybe a couple tens of thousands of people played the game, and you didn’t make that much money. All of a sudden there’s a company that’s slurping up all this game data and says, hey, I’ll buy your game for, you know, $10,000. You’re like, okay, sure, that’s fine. And now this company applies a whole bunch of machine learning techniques, extracts from that data, and can now determine, basically, sensitive information about whether you’re depressed or not, whether you were dieting, all these sorts of things that you didn’t think were exposed in your data. But when it’s aggregated in just the right way, even if you apply differential privacy or other privacy techniques to some parts of the data, there’s always this sort of information leak, and good models, good AI, can tease that out.
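Differential privacy, which Dr. Hazard mentions, works by adding calibrated noise to query answers. Below is a minimal sketch of the standard Laplace mechanism for a counting query in Python; the epsilon value and the data are hypothetical. It protects a single query, which is exactly the caveat above: noise on individual answers does not rule out every inference that can be drawn from aggregated model outputs.

    import numpy as np

    def private_count(values, predicate, epsilon=0.5):
        """Laplace mechanism: a counting query has sensitivity 1, so adding
        Laplace(1/epsilon) noise to the true count gives epsilon-differential
        privacy for that single query."""
        true_count = sum(1 for v in values if predicate(v))
        noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
        return true_count + noise

    # Hypothetical example: count players whose average session is over an hour.
    session_minutes = [12, 75, 90, 30, 61, 45, 120]
    print(private_count(session_minutes, lambda m: m > 60))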
Alexander Ferguson 6:13
That concludes the audio version of this episode. To see the original and more, visit our UpTech Report YouTube channel. If you know a tech company we should interview, you can nominate them at UpTechReport.com. Or if you just prefer to listen, make sure you’re subscribed to this series on Apple Podcasts, Spotify, or your favorite podcasting app.
SUBSCRIBE
YouTube | LinkedIn | Twitter | Podcast