Plenary: Artificial Intelligence and the Future of ...
Video Transcription
Please welcome Program Planning Committee Chair, Dr. Sarah Wong. Welcome, everyone, and happy Physiatry Day to each of you. Today is our Spirit Day, an opportunity to pause, reflect, and encourage each other. Be sure to join in the fun as we celebrate and unite the specialty in its work, because together we are powerful. I hope to see you tonight at the PM&R party. Let's celebrate together. On behalf of the Program Planning Committee, I'm thrilled that you are joining us for today's plenary session to hear from Dr. Jennifer Golbeck. Dr. Golbeck is a professor in the College of Information Studies and Director of the Social Intelligence Lab at the University of Maryland, College Park. She began studying social media from the moment it emerged on the web, and she is one of the world's foremost experts in the field. Her research has influenced industry, government, and the military. She's a pioneer in the field of social data analytics and discovering people's hidden attributes from their online behavior and is a leader in creating human-friendly security and privacy systems. Professor Golbeck is the author of several print and online publications, including the books Analyzing the Social Web, Online Harassment, Introduction to Social Media Investigation: A Hands-On Approach, and Computing with Social Trust. She is a frequent contributor on NPR, and her TED Talk on What Social Media Likes Can Reveal was featured in TED's 2014 Year in Ideas. Please join me in welcoming Dr. Jennifer Golbeck as she discusses artificial intelligence and the future of health care.

Hi, everyone. Good morning. I am so happy to be here with you today. Our plan for the day is basically to freak you out, kind of like you've gone to a horror movie, but hopefully, in addition, have some ideas about artificial intelligence and its future in health care. Now, when people talk about AI now, there's a subset of them that talk about it and warn you that AI is going to take over and dominate human civilization, this kind of Matrix or Terminator-style situation. And I want to promise you first that that's not a thing that you have to worry about. Artificial intelligence is very bad at a lot of things, like telling the difference between chihuahuas and blueberry muffins or labradoodles and fried chicken. Somebody asked AI to come up with names for paint colors. These are some actual ones that you get, like, on those strips at Home Depot. So they fed all the colors and the paint names into an algorithm and asked it to produce some new ones. This is what it came up with. They're, like, really good band names, but not maybe a living room color. And then someone also took those motivational posters that have, like, a picture of the skydivers, and it says, like, there's no I in team or something, and it gave all those to an algorithm to learn from and then asked it to generate new ones. There was actually a website, so this is the one I made for myself. I kind of still think about framing it and putting it in my office sometimes. So this is all to say that AI is actually quite dumb, despite the fact that intelligence is in the name, so we really don't need to worry about humanity being taken over. But there are a lot of things to worry about, and, in fact, the people who are pushing you to worry about this Matrix-style situation are often doing it to distract you from the real issues that exist right now with AI.
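The paint-name bit she describes is a small text generator trained on examples. As a rough illustration only, here is a minimal character-level sketch in Python; the training names below are invented stand-ins, and the original experiment presumably used a more sophisticated model than this.

```python
# A toy character-level generator in the spirit of the paint-name experiment:
# learn which character tends to follow each short context, then sample new
# names. This is a minimal Markov-chain sketch, not the model actually used.
import random
from collections import defaultdict

def train(names, order=2):
    """Record which characters follow each `order`-length context."""
    model = defaultdict(list)
    for name in names:
        padded = "^" * order + name.lower() + "$"   # ^ marks start, $ marks end
        for i in range(len(padded) - order):
            model[padded[i:i + order]].append(padded[i + order])
    return model

def generate(model, order=2, max_len=20):
    """Sample one new name, character by character."""
    context, out = "^" * order, []
    while len(out) < max_len:
        next_char = random.choice(model[context])
        if next_char == "$":
            break
        out.append(next_char)
        context = context[1:] + next_char
    return "".join(out).title()

if __name__ == "__main__":
    # Hypothetical stand-ins for the real training data (actual paint names).
    training_names = ["dusty rose", "sea mist", "burnt sienna", "stone gray",
                      "desert sand", "misty dawn", "rose dust", "sandy stone"]
    model = train(training_names)
    for _ in range(5):
        print(generate(model))
```

The generator only knows which letters tend to follow which, so its output sounds plausible without meaning anything, which is exactly why the results read more like band names than living room colors.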
So what I want to start with is to take us through some examples of the power of what AI can do and how it gets all of the data that it works on, and then we're also going to take a chunk of time at the end to talk about generative AI, like ChatGPT, which I think will be a little less scary, but it's just a thing we're all trying to figure out right now. And we will have time for questions, so if there's ideas that you're coming up with and things that you want to know about while I'm talking, we'll have about 15 minutes at the end to do that, too. So let's start by thinking about what AI can do. I think we all know that, like, it's really powerful and we're being observed in a lot of ways, but it's way worse than you think it is, actually. So let's go back 10 years to research that was new at that point. There's a story that I love to tell from researchers at Cambridge who were looking at your Facebook likes and what we could predict about you from that. So is there anyone in here who doesn't have a Facebook account? Okay. Okay, there's a good number of you. You all have Facebook accounts. I will come back to those in about 10 minutes. For the rest of you who know about your Facebook accounts, you know that you can like things like books and movies and bands and sports teams, whatever you like. Those likes are really interesting because you can't make them private. Even if your whole account is private, your likes are always public. So as computer scientists, that's really interesting to us because even though it's a narrow slice of what you do, it's a slice that we always have access to. So we tried to build artificial intelligence to use those likes to find things out. And the researchers at Cambridge did this and they were able to find out tons of things, race, religion, gender, sexual orientation, behavioral things like if you're a drinker or a smoker or a drug user, they could tell what your political leanings were. If your parents divorced before you were 21, which still is amazing to me. And you may come back at me and say, well, sometimes your likes are clearly going to telegraph those traits. Like if I like the Budweiser page on Facebook, you know I drink alcohol. You don't need any fancy AI to figure that out. But it turns out the algorithms don't really rely on those obvious connections. And as one example, one thing that researchers were able to predict was your score on an IQ test by analyzing your likes. And they actually went back into the data to find out what are the four likes that are the strongest indicators of a high IQ. So, in no particular order, those were liking the page for science, which maybe makes sense, smart people like science, thunderstorms, The Colbert Report, and curly fries. So curly fries are one of the strongest indicators of high IQ. Why is that? The answer is we don't know. And that's really important in artificial intelligence. As a field, as computer scientists, the core question at the heart of our field is, can you compute X? Fill in the blank. Can you compute someone's IQ score from this data? We sure can. And all you asked us for was the correct answer, and that is all we give you. We don't give you any insight. We don't give you any explanation. All you get is the right answer. And most of the time, if you're trying to do business, the right answer is all you care about and not the insight. We don't get any insight, and we don't know how it works.
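To make the shape of that pipeline concrete, here is a minimal sketch of the kind of model being described: a classifier trained on a users-by-pages matrix of likes to predict some hidden trait. The data is randomly generated stand-in data rather than anything from the Cambridge study, and the particular model is my choice; the point is that the pipeline hands back an accuracy number and nothing resembling an explanation.

```python
# A minimal sketch: predict a hidden trait from a 0/1 matrix of page likes.
# All data here is synthetic; a real study would plug in real like data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_users, n_pages = 2000, 500
likes = rng.integers(0, 2, size=(n_users, n_pages))       # 1 = user liked page

# Pretend a handful of pages happen to correlate with the trait we care about.
hidden_weights = np.zeros(n_pages)
hidden_weights[rng.choice(n_pages, size=10, replace=False)] = 1.0
trait = (likes @ hidden_weights + rng.normal(0, 1, n_users) > 5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    likes, trait, test_size=0.25, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
# The model can report which likes carry weight (model.coef_), but nothing in
# this pipeline says *why* curly fries, or anything else, should relate to it.
```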
Since the 1990s, we've been trying to explain how artificial intelligence comes up with the things it relies on, and we're still trying. We haven't figured it out yet. That's important, and it has a lot of implications. But I can tell you generally why we might see this IQ curly fry connection. And it's because if you're on social media, whatever platform it is, what you do there when you like something is very rarely that you go search for the thing and then like it. Almost all of the time, it's that the thing is shown to you because someone you know interacted with it, or now an algorithm thinks you might like it, and then you like it. So it's a very social process to like things online. So imagine that we have a smart person with a high IQ who also likes curly fries, and she decides to make a page for curly fries on Facebook. So she makes that page. It gets shown to some of her friends. Some of them like it. It gets shown to their friends, and it starts to spread the way these things do. There's two things that we know about how that process works that are useful here. One is that things like likes, but also viral videos, fads, rumors, and fake news, those spread online the same way that diseases spread in offline social networks. We actually take models from the CDC on the spread of flu or HIV or now COVID, and we can put those on top of the Facebook network and exactly model the way that information is going to spread. The other thing is that we borrow this idea from sociology called homophily, which is a great, fun word that just means that we're friends with people who are like us. I do this in my social networking class with my students. I visualize my social network, and I've got a bunch of really distinct social circles. They all have different attributes that they share with me. So it's not that every single one of your friends has the same traits as you. It's that your traits are more common in your social circle than they are in the population as a whole. So that means if you're friends with a person who's rich, an algorithm might think that you're also rich, and if you have 100 friends who have a lot of money, the algorithm's going to be more sure about that. If we put those two things together, we have a smart person who makes this curly fries page, and her friends see it, but they're also probably going to be smart because smart people are friends with each other, and the same will be true of their friends. So we get curly fries spreading like a virus through this smart layer of Facebook. So in the end, if you come along and like the page, the algorithm doesn't know what IQ is, and it doesn't know what curly fries are. It just knows that you did a thing that other high-scoring people did, so you probably also get a high score. This is not stable, by the way. So I did this example in my 2014 TED Talk, and then, like, overnight, the curly fries page got, like, 500,000 new likes because people were like, I'm smart, so I'm going to go like that curly fries page. So now if you like it, it probably just means, like, you've heard me talk at you, which is okay, too. So we can find out a lot of things. I've done a lot of my own research in this space, and I will say in my lab, every single attribute that we have tried to find out about people, we have successfully been able to build an algorithm to predict from whatever data that we had at hand. 
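Here is that homophily argument reduced to a toy sketch: given only a friendship graph and a few known labels, you can already guess the unknown ones from neighbors. The graph and labels are invented; real systems do the same thing at enormous scale with much fancier math.

```python
# Guess an unlabeled user's trait from the most common trait among friends.
# The network and the partial labels below are made up for illustration.
friends = {
    "alice": ["bob", "carol", "dan"],
    "bob":   ["alice", "carol"],
    "carol": ["alice", "bob", "erin"],
    "dan":   ["alice", "erin"],
    "erin":  ["carol", "dan"],
}
known_trait = {"bob": "high", "carol": "high", "erin": "low"}   # partial labels

def guess_trait(user):
    """Majority vote over the labeled friends of `user`; None if no votes."""
    votes = [known_trait[f] for f in friends[user] if f in known_trait]
    return max(set(votes), key=votes.count) if votes else None

for user in friends:
    if user not in known_trait:
        print(user, "->", guess_trait(user))    # alice -> high, dan -> low
```

The sketch also shows why such predictions get more confident as more of your friends are labeled: more votes, less noise.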
And as we moved past individual attribute prediction, we in my lab started looking at future behavior prediction, and a lot of people have looked at this. So one example that we did was looking at addiction recovery. We went on to Twitter. We looked at everybody who tweeted about going to their first Alcoholics Anonymous meeting. This, of course, takes the anonymous out of it. I don't know how that messed up our data. That's a problem for somebody else to solve. So we found all these people. We filtered out the jokes or, like, people who were in training to, say, be counselors who had to go. So people who legitimately were saying, I'm going to my first Alcoholics Anonymous meeting tomorrow. And then we followed what they did after that to see if they made it 90 days sober, which is a good, like, early addiction recovery marker. We made sure they said explicitly. So it could be a week later they complained they were hung over at work. We knew that they were drinking again. It could be a year later they celebrated their one year of sobriety. So we know they also made it 90 days. And then what we did was build an algorithm that would look at everything they did on Twitter up until they announced they were going to that first Alcoholics Anonymous meeting and predict on that day if they would make it the 90 days sober. Our algorithm is right 85% of the time. So is this good or bad? I will say that we published this. We published a few scientific papers on it because the science is really interesting. We did not make a website with this tool where you could type in your Twitter username and get a prediction. On one hand, we did try to build this to actually give some insight. Most of these algorithms, like I said, we don't know how they work. But we built this one using information from the psychiatric literature on factors that impact recovery. So we could tell you, like, hey, your social circle looks like people are drinking a lot. That makes it harder. Or maybe this kind of therapy would help. But I always have to think about what's the worst thing that people will do with the technology that I build. I have always underestimated how bad it can be. And in this case, I used to be embarrassed to tell this story of what my worst worry was. Which is, you know, in some jurisdictions, if you get a DUI, you are given the option of going to treatment or going to jail. You could imagine a judge or a local legislator saying, well, let's just run the algorithm. And if the algorithm says it'll work, they can go to AA. Otherwise, they just go right to jail. You don't even ask them. So I used to think that was a little hyperbolic. And then in talks like this, I've had lots of people come up to me afterwards and be like, I think that's a really good idea and we should do that. And they come from a good place. You know, they're coming from a public safety perspective where they say, look, if this person's out driving drunk, we really want to keep them off the roads and anything that'll make that better is good. The problem is, one, our algorithm is only right 85% of the time. The 15% of the time it's wrong, it tends to be pessimistic. So these are people who actually will get sober, but the algorithm is incorrect about that. We also have biased data. I have no idea how it's biased, but these are people on Twitter. They're probably tending younger than the average population and they're tweeting publicly about going to Alcoholics Anonymous, so who knows? 
The third thing is that artificial intelligence has this veneer of objectivity to it because it's math and data, so how can it be biased? But it's biased all the time. And so I used to be afraid to say that example. And it turns out way worse stuff is actually happening kind of in the same space. So there is a corporation right now that has an algorithm that doesn't predict alcoholism recovery. It predicts recidivism risk. And this is an algorithm that's out there. It's commercial. Counties around the country are using it to predict what's the risk of someone reoffending, and those predictions are used in determining if people are given bail or probation. Now, that algorithm has been analyzed, and it is profoundly racially biased. Black people are considered much higher risk for reoffending, and it's not just that overall. If you dig into the data, black people are incorrectly classified as high risk twice as often as white people. And the reverse is true. White people are incorrectly classified as low risk twice as often as black people. So this isn't even something that you maybe could explain by differences in the populations. It's just incorrect in biased ways. And it's still being used to make decisions about people's liberty. So these are the kinds of things that we really want to keep in mind. These are, of course, problems that we want to solve in the abstract, but once these algorithms get out there and are in use, they have real-world impacts on people. The last example I want to talk about is going back a little bit, again, about ten years, a story that some of you may have heard that was in The New York Times that begins with this amazing anecdote of this dad going into his local Target. He's got a flyer that came to his daughter in the mail, and he calls the manager over, and the dad is in a rage. He's like, you sent my daughter this flyer. It's full of coupons for maternity clothes and baby bottles and diapers. Do you want her to get pregnant? Why would you send this? The manager has no idea what he's talking about, because, like, Target headquarters sends the flyers and not the stores, but he promises to look into it. A couple weeks later, calls the dad back just to apologize, and the dad says, you know, I actually owe you the apology because my 15-year-old is pregnant, but Target found out before she told us. So how does that happen? Why does that happen is an interesting question, too. Target computes what they call a pregnancy score for all of their female customers that not only can tell if they're pregnant but also what their due date is, and they do this by analyzing purchase history, because Target has a lot of that data. They not only know how much of every single thing has been bought at Target, but per person, because if you use credit cards or debit cards or loyalty cards, it's very easy to piece everything together into individual histories. So it's really easy to tell from purchase history when somebody had a baby, because there's a big spike in, you know, wipees and infant diapers and all of the, like, early baby stuff. So what you can do is just line up all of those customers and then go back six or seven months and look at what they were buying then, and that might give you some purchases that are indicative of someone early in their pregnancy. Are there any guesses what is the purchase that's most indicative of that to Target? Pregnancy test is the family feud-winning answer. It is the most popular answer I get when I ask this admittedly trick question. 
Some of us have bought those hoping that they're negative, so it's not always a great predictor. But it turns out they don't actually look at individual purchases. They look at combinations. And the Times interviewed a guy in the analytics group, and he said, you know, right now, one of the strongest predictors is if a woman buys an extra bottle of lotion, a large handbag, and brightly colored rugs. So, like, you're in line, and the lady in front of you has, like, cocoa butter and a big purse and a blue rug, and you're like, oh, she's totally pregnant. Right? This kind of comes back to the idea, like, why those things? We don't know. We don't care, right? All they want is the right answer. They can then send the flyers out and get some new customers. But what it really has implications for is privacy. This 15-year-old didn't want her family to find out, and they may have eventually. But there's all kinds of things about us that we want to keep private that artificial intelligence is able to find out through analyzing data that we wouldn't think has anything to do with that. Why is adding, like, a rug to your purchase when you're six or seven weeks pregnant going to indicate that to Target? We don't know. We don't get any insight. It probably changes month to month. But if you want to keep that private, you don't know what to do in order to keep it to yourself. Now, all of this artificial intelligence only works if we have access to a lot of data about you. So where does that data come from? All over the place, and I know we all know this, and again, it's way worse than you think it is. This also gives me an opportunity to put my dogs on the slide. So this is my dog, Chief Brody. We rescue special needs golden retrievers. We have five right now. Chief Brody came to us at the beginning of the pandemic. He was nine years old at the time. He's 13 now. He had really serious allergies. His previous people kept him in the cone for, like, five years, and he was, like, 50 pounds overweight. So we had a really tough time. And we got him and gave him allergy shots and put him on a diet, and he's fine now, except he has trauma. And for his trauma, he takes Prozac, and it has made his life so much better. And I get Chief Brody's Prozac filled at CVS, and I got this text from CVS in May. I was so excited to get this so I could put it in my talk. Because this is a marketing text based on prescription history. To my dog, but still. I knew CVS knows my prescription history. I did not know that it was crossing that barrier into marketing messages. And I will say, Chief Brody doesn't really care about this, but my trust in CVS dropped substantially when I got this. Because even though this is an internal CVS marketing message, right, they're not sending this out, I had no idea that this really sensitive medical information about me was then translating into marketing efforts. And what are they doing with my prescriptions? If I get a migraine marketing email, I don't care, but man, there's some other ones, like, I don't need anybody marketing to me about that. It's private, and I don't want it crossing that line. This, I think, is a great example, though, to show once corporations get access to data, they want to use it, and they're going to use it in all kinds of ways. I've got three hours of material on this, and so I'll give you some of my favorites. 
How many of you have had the experience of, like, your phone is just, like, sitting on the table, you haven't searched for anything, and you're like, maybe for spring break next year, let's go to Costa Rica, and then you forget about it, you don't look for it, but the next day you have ads for, like, Costa Rican tourism showing up on your phone. Yeah, like, we all have this, and then, like, Facebook, and everybody's like, no, no, no, we totally, like, don't listen in, like, you must be crazy, like, you probably saw something, and that's why you got the ads for it. And my friends, I am here to tell you that they are lying, your phone absolutely is spying on you, just like you thought it was. And some companies have been caught doing this. So La Liga is the professional football league in Spain, they got in a whole bunch of unrelated trouble this summer, but in the previous Women's World Cup, they got in trouble for this, because they had an app that would show you what are the time of the matches, what are the scores, all the normal stuff that you would expect in a sports app. And what they also did was track users' location, like most apps try to do, because that's extremely valuable data, and if you walked into a bar, which is easy to tell if you have location data, because there's databases that just say, if you're inside this set of coordinates, you are in this bar, and here's who they are, they would turn on the microphone and passively listen in in the background to see if the game was on the TV in the bar. And if it was, they would cross reference that bar against the database of bars who had paid the licensing fee to show the game, and if they hadn't, La Liga would fine them. So you're like narcing on your neighborhood bar just by walking in there with the app on your phone. Now, this was in Europe, where there's like actual privacy laws, and so La Liga got in a whole bunch of trouble for doing this. But it's very common. Now, do we know, is Facebook actually listening in on the app on your phone? We don't know. I will tell you that they have patents for it. So it's not like they're like, man, we would never listen in on you on your phone, because they've patented all kinds of ways to use that exact kind of technology. Facebook's really easy to pick on here. They're not the only ones doing this at all. But I'm gonna keep picking on them. But I just want you to know it's not just them. Raise your hand again if you thought you didn't have a Facebook account. It's always funny, there's way fewer hands when I re-ask this question at this point in the talk. Okay, so for those of you who thought you don't have Facebook accounts, you do. Facebook has testified to Congress about this. They call them shadow profiles. They are profiles for people who have not yet submitted and given in and signed up for Facebook. We don't know exactly how they work, but if I were put in charge of that project, here's how I would do it. And I think this is, you know, that they're certainly using these things. So you do get an out. I will give you an out. You might not have a Facebook account if you don't have a smartphone. Is there anyone? Sometimes there's one old guy in the back who proudly holds up his flip phone. You also have to have no friends, and no one has ever said yes to that when I ask. All right, so you all have at least friends.
So when your friends who do have Facebook accounts install the app on their phone, which I have not done, so take that as a point to consider, Facebook takes a copy of your contact list on your phone, sometimes even if you tell them not to. And in the contact list, for you, the person who doesn't have a Facebook account, is your contact information because this is your friend. So it has certainly at least one phone number for you, maybe an email address, your name, maybe a picture, so your face shows up when you call them. And this is true not just of that one friend, but all of your friends who have Facebook on their phone, their contact list is now in the hands of Facebook, and you're in it. So Facebook is able to use your email addresses and your phone numbers as unique identifiers, and piece those together and say, okay, these aren't associated with any account that we know of, so let's start collecting all this information. So anytime some contact information comes in, say with your phone number, it gets added on to the shadow profile. After they do that with everybody who knows you, they're likely to have all of your phone numbers and email addresses, probably a street address, maybe a couple, a few photos of you, and all the variations of your name. They also have your social network. So some of your friends who know you in real life are also friends on Facebook, and they're likely to form clusters. So there will be people you work with, people you went to high school with, people at your kid's school. And they're gonna know each other, and they may even be in groups together. So they're on the PTA Facebook group for your kid's school. Now, we can go back to that homophily thing that I talked about, and say, well, you're friends with people who are like you, so if you're friends with 10 people in a group that all are kind of connected to each other and all do the same thing, well, you probably do that thing too. So if you give in and finally sign up for Facebook, what you're gonna see is a welcome message that says, hey, thanks for signing up for Facebook. Here's 200 people that you might know. Would you like to add them? Here's a bunch of groups that you should join, like this is when you graduated from high school, this is your hometown, this is where your kids go to school, this is your church. Also, we use our fancy facial recognition algorithm, and here are all the photos of you that are on Facebook but weren't tagged. Would you like to add them to your profile? So conveniently, you're all set up and ready to go. All kinds of places have these profiles on all of us. Facebook is really visible, because a lot of us know what that is. But lots of data brokers and companies collect thousands, sometimes millions of points of data on each of us and sell that around to people in the background. So we're all being monitored and having our data collected this way. Let's talk about some other ways. I could go on about Facebook forever, but let's talk about some other kinds of tracking. Okay, location tracking. Location data is super valuable. Some people like to push back on me and say, well, I use this setting that says people can't track my location. And one lesson I want you to take away from today is that settings don't matter. We can get that information whenever we want it. 
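As a rough sketch of the record-linkage idea being described, the snippet below merges contact-list entries uploaded by different users into a single profile for someone with no account, keyed on phone numbers and email addresses. How any real company actually implements shadow profiles is not public; the contacts here are invented.

```python
# Merge uploaded contact entries into profiles keyed by phone/email identifiers.
uploaded_contacts = [
    # (uploader, name as saved, phone, email) -- all invented
    ("friend_a", "Sam K.",       "+1-555-0101", None),
    ("friend_b", "Sammy",        "+1-555-0101", "sam@example.com"),
    ("friend_c", "Sam Kowalski", None,          "sam@example.com"),
]

index = {}      # phone or email -> profile dict
profiles = []   # all distinct merged profiles

for uploader, name, phone, email in uploaded_contacts:
    ids = [i for i in (phone, email) if i]
    # Reuse an existing profile if any identifier has been seen before.
    profile = next((index[i] for i in ids if i in index), None)
    if profile is None:
        profile = {"names": set(), "identifiers": set(), "known_by": set()}
        profiles.append(profile)
    profile["names"].add(name)
    profile["identifiers"].update(ids)
    profile["known_by"].add(uploader)    # a slice of the social graph, for free
    for i in ids:
        index[i] = profile

for p in profiles:
    print(p["names"], p["identifiers"], "known by", p["known_by"])
```

Even this naive version ends up with the person's names, all of their identifiers, and a list of who knows them; a real system would additionally have to merge profiles that only later turn out to be the same person. And none of it depends on any setting the person themselves controls, which is the point about settings not mattering.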
So if you turn off location tracking, for example, we can look at your phone when it's connected to Wi-Fi, and every Wi-Fi router, which is the little thing that, you know, the signal that you connect to, happens to be mapped to a GPS location. There's a lot of ways that they can do this. So Google did this when they were driving around doing street view. They would just find all the routers. They didn't even need to connect, right? They could just see the addresses and then map that to the GPS location. So all the home ones are done that way. But also if someone in this room is using an app with location enabled, so you're using Google Maps, Google can see the Wi-Fi signal, and it knows because you have location on where you are. So other people in here who have turned off location access but are on the Wi-Fi, well, Google knows where you are because they know the other people are in here who are connected to the same Wi-Fi. So we can use the Wi-Fi signals very easily to tell where your phone is. So it doesn't matter that you turn off location tracking. That just turns off the GPS and we can get around that. This is also true of the microphone, right? So people are like, well, my phone doesn't listen to me because I used the setting to turn off the microphone. So one, they don't care. Like sometimes they get around that. But two, there's all kinds of interesting research that you don't need that. So say you are like, no one's gonna listen to me. I am gonna open up my phone and rip out the microphone, rendering it no longer a phone, but no one can listen in on me then. So your phone has a thing called an accelerometer, which is a thing that can kind of tell which way it's tilting. It's used in the compass. If you play games where you tilt it, that uses the accelerometer. And the accelerometer is so sensitive that it can pick up vibrations from you speaking next to your phone. And we've shown with artificial intelligence that you can convert the vibrations that the accelerometer picks up back into speech, essentially turning it into a microphone. Your Roomba can also do this. Roomba, as far as I know, does not do this. But my colleagues at the University of Maryland did an experiment because Roomba uses LIDAR, which allows it to not smash into your furniture, the sort of forward-looking. And when you talk in a room, your voice also vibrates the furniture a little bit, and the LIDAR is sensitive enough to pick that up. So my colleagues hacked a Roomba, and then were able to convert the furniture vibrations back into speech as well. So we can do whatever we want. Use the settings. It's not a bad idea to use the settings, but you're not gonna get total protection from that. And then the last point that I wanna hit on here is just that we can go very deep and into, at best, ethically questionable areas with the way this kind of data is collected. So in the medical space, one of my favorite examples here was a story about a man in Texas who uses a CPAP machine. And he was getting, the technician was out servicing the CPAP, and you guys can correct me if I get the fine details of this wrong, but basically there was a kind of cartridge in the CPAP that he could bring into his doctor, and his doctor could review it, and kind of see how much he was using it, and what the patterns were, and that was a way that they exchanged data. 
So when the technician was servicing the machine, he said, you know, we've got a new thing that I can put in to replace the cartridge where just your data automatically gets sent to your doctor so you don't have to bring this in. And the guy says, yeah, that sounds like a great idea. So they upgraded that. And when he went to refill the prescription for the tubes and the bits that you have to replace, his insurance denied it because they said he wasn't using it enough. There's a minimum amount of usage that you have to hit in order to be eligible, to basically be compliant with the use, and he wasn't using it enough. So they denied his refill. And they knew that because that thing that got installed to send the data to the doctor also sent the data to the manufacturer and to the insurance company directly. So it doesn't matter what his doctor said. The insurance company could review his usage and then actually like deny his access to healthcare because of that. And he didn't even know that it was being sent. I think this is a terrible kind of way that we're monitoring data. I can see why people think it's useful, but it shows that a lot of really sensitive decisions about people's wellbeing can be made by this, you know, anonymous and hidden way that we're able to access data about people. I have so much more to say on this. So if you have questions like, are they actually doing X? I promise you the answer is yeah, they totally are. If you just take like capitalism plus data, their answer is yes. You know, they're gonna do it if they can make any money on it. But if there's specific things, ask me questions when we get there. Okay, so let's talk about AI in healthcare, specifically in some of the spaces and issues that we see this showing up. So we've talked about a couple of these examples. There are actually really interesting ways that it can be used to help with compliance. One really cool example that I saw, and then we're gonna shift a little more positive and less horror movie here. One really cool example I saw of this is looking at something called the care gap, which you guys maybe know about better than me, but these are people who like need to get some kind of treatment and aren't getting it. I am a hypochondriac. I see a therapist for it. I've got very serious health anxiety, and I suck at this sometimes. So I'm supposed to get annual MRIs to check on a kind of benign condition. And just thinking about getting the MRI like really ruins my day. When I finally got it this year, it had been three years since I got one. I'm fine, but I'm not super compliant because I get very anxious about it. So how do we address that kind of care gap? There's all sorts of people for all sorts of reasons that have this gap. Who's gonna be responsive to messaging? So if community radiology, which is where I get my things, sends me a text message that says, hey Jen, you're overdue for your MRI, is that going to get me to be more compliant? It's absolutely not. It is going to get me to schedule an extra session with my therapist and maybe block their number, but I am absolutely now six months further behind in getting the MRI. I'm a sort of extreme case, but there's like plenty of people like me. I absolutely will not be responsive to that, but I have no anxiety about going to the dentist, and when the dentist is like, hey, you're late for your cleaning, I call them right away and schedule it. 
So how do we figure out, who are the people who will be like me with the dentist who are like, oh yeah, better get that taken care of, thanks for the reminder, and who are the people who are gonna be like me with the MRI that's like, go away and don't talk to me again? There's really interesting research from the recommender systems community who are like the people who help recommend you Netflix shows or like curate your social media feeds where I do a lot of my research on exactly identifying data from people on their public interactions with messaging to be able to identify who is this gonna be helpful to and who isn't it, so you are targeting the right people with the right messaging, and then thinking about other ways to reach the mes with the MRI to like try to make something happen. There's also obviously areas that we need to be concerned about here. This is a kind of hopeful example that illustrates a lot of points. So there's a major hospital system in the US across the country. They have a program for people with a lot of complicated comorbidities, so think diabetes, heart disease, high blood pressure, and people would get referred into this program where they would have access to specialists and therapists and dieticians who could help manage all aspects of their condition. The program is very effective, but it's also quite expensive, and so the hospital didn't wanna be referring people into it unless they really needed it. And what they did was build an artificial intelligence algorithm that would analyze the electronic patient records that they had access to and determine if they should be referred. So this system was in effect, and some researchers then came in to try to understand how well it was working and particularly to look for bias, and what they found is that the algorithm was profoundly racially biased. If you were a black patient, you had to be much, much sicker than a white patient in order to get referred into the program. Some of the statistics on this, the referral rate for black patients was about 8% when they were run through the algorithm. If they were allowed into the program at the threshold of sickness that allowed white patients in, that would jump from 8% to 42%. So there's a major racial gap. So they dug into the algorithm and looked at what was happening, and what they found is that the algorithm was very accurately predicting future healthcare costs. But black patients and white patients tend to access the healthcare system in very different ways. White patients see more specialists, they have more surgeries, they cost more money, and so because black patients have less access to care or receive less care, they are less expensive, and thus they weren't hitting the threshold regardless of how sick they were. This is a really common kind of bias problem that we see across all applications of artificial intelligence working on people's data, whether it's in the healthcare space or otherwise. I like this story though, because a lot of times I feel like I just leave you with everything's horrible, have a great day is a message, but this actually has a good turn to it. There's not many of these. So these researchers who identified the, I mean, I am gonna be honest, there's not many of these, but this one gives me hope. 
So the researchers went in, they were like, okay, this is what it's doing, and instead of changing the existing algorithm, they let it keep predicting future healthcare costs, but they added on a component that predicted future healthcare risks, which you also can do if you have access to patient records to see what happens after this point, who's gonna get sicker. And when they did that, like 90% of the racial bias was eliminated from the algorithm. So they essentially were able to fix the problem by having people come in, not just who were computer scientists who know how to build an algorithm, but who were experts in the domain and could understand what was going on there. And I think this really highlights a space where in the healthcare field, as AI comes out, we need to be looking for these kinds of bias and then also thinking about what are the ways that we can correct that. So what's coming next? How many of you have seen this picture of the Pope in a puffy coat? It was like my favorite picture. So this is an AI-generated image. Unfortunately, I would like the Pope to have this coat. He looks great. This came out, I think, over the summer. Do you remember when those protests were happening in Paris? They were raising the pension age, and so there were a bunch of protests in Paris. So these images started showing up on social media. And I was like, France is a really different place because I have been to some protests, and there is not nearly this much hugging. And it's like we've got three conventionally attractive women hugging three riot cops. And again, it is not my experience at these kinds of events. But they look good, right? They look real until you look closely because that lady's ear is messed up. And that cop has six fingers. So these are also AI-generated images. And then, of course, ChatGPT. So how many of you have played around with ChatGPT at all? A good number of you. Yeah, so this isn't a scary part of the talk. This is just like let's give you a little bit of background on what's coming with this kind of generative AI technology. There's all kinds of spaces that we're likely to see this and warnings that we also want to take into account. So we are seeing across fields, but especially in healthcare, chatbots that are leveraging tools like ChatGPT, or we call these LLMs, large language models. So ChatGPT is like a brand name version of one, but you can install one. You know, a good IT person can get one up and running for you on your own system that's not commercially connected in a week. So we see it being used for chatbots, for example. So if patients were to have questions, they can come to the chatbot. And traditional chatbots are more kind of like the phone system where you'd like press one and then press seven, and it's just a little texty. But these can actually understand patient questions and give like pretty good answers based on all kinds of documentation that you have. We're seeing a ton of like industry push in this space. It's likely to be one of the first rollouts of this kind of tech in the commercial space that you're likely to encounter, which is great. All kinds of people are also using it for all sorts of reasons. I will tell you, as a professor, students are absolutely using it to write their papers, poorly. I have a colleague who teaches health informatics, and she gave an assignment in the spring that just said, write a summary of a published article about anything in health informatics. And she's a better professor than me. 
She took those papers, and then she would go look for the articles that the kids wrote about. Like I would just read the essay and see how it was. But she actually went to get the articles. And like a bunch of them, she couldn't find the articles. And then she emailed the students, and she's like, hey, I can't find your article. And they're like, wow, it must have been taken off of Google in the last few days. Would you mind if I redo it? Because they just straight up like said, ChatGPT, write an essay about a health informatics article. And it just makes stuff up. Because what tools like ChatGPT do is put together words in a way that sounds human. That's basically the entirety of what they do. They are not like accessing a database. They're not trained to be accurate. They just say things like humans say. So if I were to hold a gun to your head and be like, tell me about an article in health informatics, and you don't know, you could fake it, right? You could make up a name and a title and like tell me what it was about. And it'd sound pretty good probably. That's what ChatGPT does. The technical term that we use for this in AI is a BS generator. That's what it is. The technical term for when it makes up things that don't actually exist is called artificial intelligence hallucination, which is a great and formal technical term that I encourage you all to use, because it's really fun. This has gotten a lot of people in trouble. So there was a rodeo professor in Texas. I didn't know there were rodeo professors, but there are, at least in Texas. He failed basically his whole graduating class in the spring for cheating on their final papers. And the way he determined that was to take their essays, paste them into ChatGPT, and then say, ChatGPT, did you write this essay? And for most of them, ChatGPT said, yep. Because that sounds like a human answer. It doesn't know. Like it doesn't keep a list of what it's written to check against. It's just like, yeah, I totally wrote that. And then he failed the students for that. Allegedly, they were sorting it out. I have tried desperately to find a follow-up story, and there was nothing on that. There's a great story that I do have a conclusion to, about these lawyers. There was a lawsuit going on in federal court. It was pretty boring. It was like a personal injury thing. A guy got hit by one of the rolly carts on a plane, sued the airline, and I think the argument was over like statute of limitations. The legal part's super boring. What's interesting is that the lawyers for the guy used ChatGPT to write their brief that they submitted to federal court, arguing that the statute of limitations didn't apply, and citing all of the relevant case law to make their argument. And the lawyers for the airlines got that, and they tried to look for the cases, and they couldn't find them, and they went back to the judge, and they're like, we've checked the databases. We've checked the docket numbers. None of this stuff is where it's supposed to be. And the judge goes back to the lawyers who filed the brief, and is like, produce these cases that you cited. And the lawyers went to ChatGPT and were like, give me the decision in this case. And ChatGPT's like, zoop, here you go. And they printed them out and sent them to the federal judge. And the lawyers on the other side were like, these totally don't exist.
And then the federal judge put in an order that I read online that says, everyone will be in my courtroom on June 8th for a discussion of this issue, which is very angry for a federal judge. I was so excited about this, I tuned into the court Zoom to watch them get yelled at by the judge. And it's great because you don't lie to a federal judge. So the judge not only made them admit like, yeah, they used ChatGPT, and no, they didn't even bother reading the cases that it cited. He also got one to admit that he had lied about going on vacation when he asked for an extension because he can't lie anymore, you know? The conclusion to this was that the judge fined them $5,000, which is not that bad, but he also made them write apology letters to all of the judges who allegedly wrote briefs that they cited, which is like a real middle school shame kind of punishment. And I sort of wonder if they used ChatGPT to write the letters. But there's real implications that come from this too. So I think Iowa had passed one of these laws about not having sexual content in books in schools. And there was a school there that's like, how are we gonna go through all of these books and figure out like what is sexual content that meets the law? So they just like said, ChatGPT, is there sexual content in this book? And then they put the title for all the titles. And then if ChatGPT said yes, they took the book off the shelf. ChatGPT doesn't know. And the school kind of knew that, but they're like, there's no way for us to do this. And so you'll probably believe it if ChatGPT says so. So we don't really know how to use these tools in the right way right now. People are getting themselves in various kinds of trouble, but there's all kinds of beautiful, productive ways to do this. I use ChatGPT all the time. I live in Florida most of the time, so I'm angry about a lot of things. And I sometimes sit down to write to my legislators and all I can do is come up with like one really angry sentence. And it's good, but it's not an effective letter. So I said, ChatGPT, please write a letter to my representatives that say, and then I put the angry sentence, and then it makes like a really sane sounding letter, and I proofread it, and then I email it off, and I feel okay for the day. It's great for that, where like the quality of the writing or the interestingness of the writing doesn't really matter, but it makes the point. And there's a lot of that kind of writing that we all have to do. One space, you don't need to read this tiny text, but just as an example, one space that we see this in healthcare is with summarization. The legal field is also using a ton of this, where you could take like a radiology report. I took this one, I literally like Googled a sample report, copied and pasted this text into chat GPT. And I was like, hey, summarize it. And it gives me this nice thing. I've talked about this with other physicians who say it did a pretty good job summarizing this. I don't know. But that's pretty cool, right, that you can actually use it to generate summaries of large bodies of text. So we are seeing it come into other spaces where it's going to be useful. I'm not here to warn you off of using chat GPT. But it kind of takes us to this point of thinking of, what are the challenges that come along with all of the opportunities, not just for generative AI, but for AI in general? One, absolutely, whether it's using it to analyze personal data or patient records or using generative AI are issues of literacy. 
What is this tool actually able to do? What are its capabilities? What are its limitations? As a user, you're very rarely told what that is. I know what ChatGPT can do, because I'm a computer scientist. And I have colleagues who have been building it for decades. But if you don't know that, you just get this tool that sounds really confident in what it's doing. It's hard to figure out, what are the boundaries of that? Bias is another critical issue. I have a whole hour talk just on dealing with bias in AI. We're working on it. I think it's likely to be one of the early spaces that we start seeing regulation. Cities like New York and Washington, DC, are considering laws that require auditing for bias in AI algorithms, and I think New York may have already implemented one. That auditing is very difficult to do. We're not sure how to do that yet. But if we're using a tool, it absolutely needs to be a thing that you consider: how are we checking for bias there? Keeping humans in the loop, not just allowing AI to make decisions as though it's some oracle of truth, because it gets stuff wrong. And a lot of times, it's stuff that humans would actually notice. They need to be part of that process. And then keeping that creepiness factor in mind, because especially in the health care space, having trust is really critical. And as I mentioned with CVS, my trust dropped way off when I saw the way that they were using Chief Brody's data to do marketing. So thinking about, when are you doing something that's actually going to seem kind of creepy or make people worry about their privacy, or the care with which you are handling their information, is a really important factor to have in mind. So with that, we have time for questions. There are three microphones set up in the room. And I am happy to talk about anything, whether it's stuff in the talk or you're worried about your 15-year-old on TikTok. I totally know about TikTok. I can talk about that, too. Anything you want in the AI space. I cannot see super far back. So you guys are microphone one. And this is two. And that is three. And I'll just kind of go through. Maybe we'll start at three, since there's somebody up there. There's a four, too. Oh, is there a four? I have glasses. Oh, yeah, there's three. OK. OK, let's start at four, which is over there. Quick question about the black and white in health care. Is it because they place the race as an item in that algorithm? Or is it because the susceptibility for some diseases is higher in one race versus the other? So weird to hear myself. I know. It's very distracting, isn't it? This is a great question. So typically, when we look at racial bias in algorithms, very rarely is the race actually an input. But race often maps to differences in behavior or access to information. So another quick example in this space. Wells Fargo got in a ton of trouble for a really similar problem in the lead up to the mortgage crisis. So they had an algorithm that was assigning people either to traditional or subprime mortgages. And the city of Los Angeles did an investigation into them. And they found that if you were black, you were sent to a subprime mortgage one and a half times as often as a white person with the same income and credit history. If you were Hispanic, it was twice as often. Now, the mortgage industry is regulated. Race absolutely cannot be a consideration. And it was not included as a consideration in the algorithm that Wells Fargo put together. But zip code was an input.
And zip code, especially in urban areas, often maps very closely onto race. And the way that AI works is that it just kind of takes examples of what people have done in the past and learns to replicate those examples. So if we know, which we do know, in the past that mortgages have been given out with some racial bias against non-whites in terms of being given access to better terms of the mortgage, the algorithm will have seen those examples. And so it sees two people, and it's like their income is the same. Their credit history is the same. But this one seems like it should go into a high-risk group, and this one shouldn't. And the only thing that seems to be different is zip code. It's going to learn to replicate our biases on that. And they got a big, giant fine and had to throw out their algorithm. It is not the only problem that they've had, but it's a good example. Amazon had a similar problem. They decided to use AI to build a hiring algorithm for software engineers, which is a terrible idea. Women are a huge minority in software engineering, sometimes 5%. There's studies that show if you send in two resumes that are identical and you just change from stereotypically male to female names, the females almost never get called back, while the males do. So they're like, we're going to build an algorithm to solve this problem. The algorithm was so sexist that they had to throw the whole thing out. Like, if you went to a women's college, they didn't tell the algorithm which ones those were. It just learned, because it's like, these don't look like people we want who went to those places. If you played on the women's lacrosse team, they wouldn't hire you. And then they're like, well, we'll just take the word woman out. But it's really easy to find signals of gender. And it knew we don't hire women, based on all the examples it had seen, and so it wouldn't do it. So very rarely are the things that the bias is on included as inputs, but we leave all of these signals of our characteristics in the rest of the data, so it's really easy to have the discrimination come through that way. Number three. Hi. Hi. So I have noticed that if I take my kids to a trampoline park, or even when I sign them up for Sunday school, there's a line in all of those liability releases that is like, we own your image and likeness, and we can use it for whatever we want, for however long we want. I will typically add something that's like, no photos. But what are you seeing in terms of the bad from that? Because that's obviously what I worry about. But what are some of the real world examples of how capitalism plus data equals yes? Yes. Thank you. I'm glad that line stuck. Yeah, so part of this is lawyering. My husband's an attorney, and he writes terms and conditions. And so he'd be like, oh, we've got to put that in there in case we want to put a picture up on the Facebook page, or whatever. But there are definitely risks with that that are going to vary by organization. Because if they can do that, for example, with facial recognition, have you guys heard of Clearview AI? It's a company that may be in your consciousness. Clearview AI is a facial recognition company. They sell their facial recognition algorithm to lots of places, including police forces. And all of their data is just scraped off the web, off of social media. And so what does it mean if your kid's picture is in there? I don't know.
I don't actually think that technology should be legal to use for law enforcement at this point because of all of the major problems that it has. But that's sort of the thing, right? Like, is your kid's daycare, or camp, or trampoline park going to sell their image? Like, probably not. But is it going to get harvested if they put it up somewhere? Like, almost certainly, yes, because capitalism plus data, right? I don't want to freak out all the parents in the room. So I'm going to, though. I'm sorry. Part of my work is studying really bad stuff people do online. And so I go to the places that you don't want to go. And kids' images get co-opted in lots of ways in sort of pedophilic communities online. Obviously, there's child porn. And that's very clearly banned by the law. And man, are tech companies good at filtering that. You never accidentally see child porn. I'm just trying to filter this to not upset too many people. But there's communities that collect images of children, say, in certain positions or doing certain things, and then circulate them and put them in videos. If your kids make YouTube videos, there's a whole community that loves kids' gymnastic videos and sends them around. That's not a capitalism plus data thing. But that's a, like, man, these things are out there and happening. And is that something that you really want for your kid's image to end up in that collection? I don't have any kids, but I would certainly say no. And so that's a kind of thing to keep in mind as well. So I think your absolutely no photos is, like, a great rule. I don't know if that's legally enforceable. You can talk to a lawyer about that. But yeah, there's a group of risks that come with that. And I think the kind of personal ones are just as bad as the capitalism ones in that case. Sorry to take that to, like, such a bummer place. Number two. Hi, Dr. Goldberg. Thanks so much for this presentation. I had two questions. One of them is, like, the most important thing that I've kind of noticed in AI in health care is the HIPAA and kind of upholding that privacy of the patients and making sure that patients are in control of their data. And so how can AI, like, kind of moving forward with these new applications? I just saw a software that can write your subjective note just from the data that's being inputted. How do we protect this data from continually being harvested and kind of like Chief Brody in that kind of way? I know that the legislation is much more advanced in Europe or in China that's kind of, like, spearheading this. And it's kind of lacking in the US. So how can we get policymakers to kind of really get by with this? And then secondly, with the AI hallucination and the deep fakes, how can we help AI make better predictions or help patients when you say, like, I have these symptoms. What do I have? And it doesn't say, like, cancer. And instead, it actually points you to the right direction. I have so much answer to that long but very good question. All right, so some practical advice first. If you're using a tool like ChatGPT, it can do exactly what you said, right? It could probably write those notes for you. But ChatGPT is, and any tool, right? This is not to pick on them. Any tool that's online like that will absolutely take the input that you give it and use that to train itself to do better in the future. So if you put sensitive information in there, it becomes part of what ChatGPT learns from and could make its way into other spaces. 
So there is a practical solution for that, which hopefully your IT team knows about but which is a question you should be asking, and that is to implement your own local versions. You can get large language models online and install them on servers in your hospital that are just on your intranet. They don't access the web at all for anything, so all of the data that goes into them stays within your organization. We will likely see some kind of regulatory guidance on large language models and generative AI with respect to HIPAA that's going to say exactly this. It's a best practice; I think everybody's kind of figuring that out. But it's been less than a year, right? ChatGPT was released to the public in December, so it's all really fresh. But that's a really practical tip: whenever you're using an AI tool, if there's patient data touching it, it should be internal only. It should not be something from a third party, because who knows what's going to happen to it. And they change their policies, and they do messed up stuff all the time, including CVS, right? They're bound by HIPAA on prescriptions, and that didn't stop them from crossing those boundaries. So the corporate side is hard.

In terms of the deepfakes and ChatGPT, I did a talk for MD Anderson. They had a big conference, and I went in and gave them basically the same talk. And one of the people in the little meeting beforehand was saying that the Washington Post ran this story of a mom who had a kid, and the kid was sick. They had been to, like, 17 different specialists who were all treating individual symptoms but couldn't figure out what was wrong with him. And in a moment of frustration, she just took all of his records, put them into ChatGPT, and asked, what's the diagnosis? And ChatGPT said, he has spina bifida. He didn't present as traditional for spina bifida, but ChatGPT said that's what he had. And she took him in, and they evaluated him and concluded that, yeah, that's actually what it was. And I thought, this is terrible. I'm very happy for this mom. But what we don't want is hypochondriacs like me switching from WebMD to ChatGPT, saying, here's all of my symptoms, what's wrong with me, and it tells me something I'm convinced of because it sounds really good. This is going to be a huge risk to manage. I don't have a great solution for this one. It's one of a wide range of information and tech literacy problems that we have to solve. I think that comes down to education, starting really young, right? We've got to start doing that in elementary school. But it's going to be hard to make it stick. To ChatGPT's credit, they are kind of recognizing this. They're in a really tenuous legal position now; they've got all kinds of lawsuits coming in all spaces, and they're trying to put guardrails up. So if you were to ask that question, I don't know what it would say now. It might say something like, as a large language model, I am not capable of diagnosing this. Sometimes it just won't answer the question. Sometimes it will say, if I had to make up a hypothetical answer, it would be this. But there are a lot of ways to get around that with how you phrase the question, so the risk is still there. So I think it's really concerning. We want legislators to help on all of these problems, privacy and managing AI, but their track record is not good.
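[Editor's note: a minimal sketch of the "keep it internal" practice described above, assuming an open-weight model whose files have already been downloaded onto a server inside the hospital network. The model path and prompt are placeholders; the offline environment variables are one way to make sure the Hugging Face libraries never reach out to the internet.]

```python
# Running a language model entirely on local infrastructure so that no
# patient text is sent to a third-party API. Model path and prompt are placeholders.
import os

# Refuse to contact the internet at all; the weights must already be on disk.
os.environ["HF_HUB_OFFLINE"] = "1"
os.environ["TRANSFORMERS_OFFLINE"] = "1"

from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="/opt/models/local-llm",  # hypothetical path to pre-downloaded weights
)

note_context = "De-identified or internal-only clinical text goes here."
draft = generator(
    "Summarize the following encounter as a subjective note:\n" + note_context,
    max_new_tokens=200,
)[0]["generated_text"]
print(draft)
```

The point is architectural: if the weights run on your intranet, the note text never leaves your organization, which is the opposite of typing it into a third-party web tool.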
I mean, you've seen the hearings where the tech executives go in and the legislators end up asking them for help with their email, right? They just don't know what's going on. So I don't know what the right solution is there. I expect that, in the next five years, the regulatory approaches we're likely to get are extensions of existing protected-class rules on bias, and maybe some auditing requirements, into the AI space, and we probably won't get much more than that. Maybe there will be big changes after the next round of elections and we'll see something, but I'm not hopeful in that space, unfortunately. Number one, is there somebody over there?

Yes. That touched on my question, but I guess just to follow up a little bit: is it possible to program those safeguards, and whatever legislative protections for patient data come along, into these tools without the tools then figuring out a way to get around them based on what the user puts in? The big thing I struggle with now is the eight-second denial by UnitedHealthcare for a procedure that I want to do. Now that they have access to all of this shadow data that Facebook is selling them without our consent, how much more quickly can they just deny patient care unless there are safeguards in that space?

Yeah, this is a great question, especially when it comes to insurance. This is a problem that's solvable. So, surprising as it may be, full credit to George W. Bush. His administration worked with Congress and passed a law [the Genetic Information Nondiscrimination Act] that prevents insurance companies from discriminating in terms of care, or even considering genetic predispositions. This was the early days of human genome mapping, and they said, nope, insurance can't use this. That was a very forward-thinking law that has had a really big impact, I think, in preventing us from going down this capitalism-plus-data path of mapping the genome and using it to make insurance denials. But insurance companies are absolutely trying to use all the rest of this data to do exactly what you're saying. We know that they're using fitness tracker data and that it's getting linked to social media profiles. They want all of this, and they're totally using it. I talk to the insurance industry a lot; they bring me in to give versions of this talk. And I tell them, please don't be evil with this, everybody thinks you're going to be evil. And they say, but it's so useful. So it is a solvable legislative problem. We did it with the genetic data; George W. Bush made that happen, and this is a bipartisan, clear problem for people. There are other legislative ways you could fix this that I don't think we're going to get to, but you could get there, and we have models for it. Will we? I don't know. I'm not a political scientist, and lobbying seems really hard to get around. But I think that's ultimately the kind of approach we would need. That's a great question. Is there somebody? Yes, back at four.

Hi. Thanks for sharing all that absolutely horrifying information. It sounds like all of our information is out there for anyone to see, and I assume this includes our faces and our fingerprints from our Touch IDs. So my question, for someone who knows this better than anyone else: do you use online banking?

Yes, I do use online banking. The regulations around online banking are pretty good. I think this was another space where everybody said, oh, this could go bad so fast.
And it's why, in online banking, if you pay attention, which I totally do, the security features they roll out usually come about a year before you see them anyplace else. They've got really advanced fraud detection because the laws make them responsible when that kind of data gets out there. So that's the security side. There is, of course, the other problem of how they're using your data. There's a great story about Visa: they were reducing the credit limits on credit cards for people who were getting divorced, because divorce is one of the most difficult financial times. When most people declare bankruptcy is in the middle of a divorce. So they could analyze people's purchase histories and tell when they were getting divorced, and then they would look at your purchases and say, oh, it looks like things are getting bad at home, better lower that credit limit. They denied that they were doing this, but the evidence was pretty strong. So the data mining side of purchases is super concerning, and all of this stuff is totally going on there, too. So I use online banking because I don't know the last time I went to an actual bank, and I feel like it's secure. But this is a you're-damned-if-you-do, damned-if-you-don't situation.

You could decide to opt out. There is a great article, I'm blanking on the name, but if you Google the terms, you'll find it, about a woman who was pregnant, and they wanted to keep all of the pregnancy out of the data. No digital data. No photos. No social media posts. No buying any baby stuff with a credit card. Their experiment was that all the data about the pregnancy would be hidden from the algorithms. And they wanted to buy a stroller, and apparently there are big, fancy, expensive strollers, and it was way cheaper on Amazon. But they couldn't use a credit card on Amazon, so the husband went to 7-Eleven and bought $500 worth of Amazon gift cards with cash so they could buy it anonymously. And the guy at 7-Eleven said, I'm going to need an ID, because we're required to report these large gift card purchases, because that's how terrorists finance their operations. So essentially, if you try to step out of the digital collection space, you're a big red flag, because bad people do that. You get noticed either way. They're going to track you either way. Seems like there should be some rules around that. We have time for one more question, number three.

Yes, thank you. I just have two thoughts. Number one, is it possible to flood the data corpus that is being mined and scraped with synthetic data and noise?

It's a great question. Nope. Go ahead, finish the second half.

I think conversational AI has interesting potential in clinical decision support and knowledge delivery systems for multiple stakeholders. Other than retrieval-augmented generation, do you think there are any other strategies for minimizing AI hallucinations? And do you think it's possible to entirely eliminate hallucinations as it pertains to LLMs?

Wow, that's a fancy question. I love that. I don't know that it's totally possible to eliminate hallucinations. But I do think the next big step we're going to see in this space is combining text-generative and conversational AI with the whole space of knowledge management, knowledge graphs, and knowledge retrieval. If you guys remember, IBM had Watson, their AI on Jeopardy. That was 10 or 15 years ago.
So I have a bunch of colleagues, people I was in the same labs with in grad school who went on to work at IBM, who worked on the IBM Watson Jeopardy project. And there, the entire point was getting the correct information; they had somebody else helping turn it into the form of a question or whatever. There are decades of research in computer science on knowledge modeling and searching and finding the right answer. And right now, conversational generative AI, like ChatGPT, is separate. It puts words together in the right ways, but it's not connected to that big space. I think that's absolutely the next place this is going, which is going to be transformative and interesting and will probably have its own set of problems.

Are you talking about vector databases, embeddings, ML research?

I mean, I guess that could work. I'm thinking more of, like, knowledge graphs, but we should probably take this conversation offline, since I'm sure not everybody is as interested as we are. With that, all right, I have 16 seconds left, so I'll just say thank you all so much. It was great to talk about this. I really appreciate your time. Thank you.
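[Editor's note: a minimal sketch of the direction discussed in that last exchange, grounding a generative model in retrieval over a curated knowledge source so it answers from vetted passages instead of free-associating. The corpus, the TF-IDF retriever, and the local_llm() stand-in are all hypothetical simplifications; a real system would use a proper vector store or knowledge graph and an actual model.]

```python
# Retrieval-grounded generation; corpus, retriever, and local_llm() are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "Passage one from a vetted clinical knowledge base.",
    "Passage two from a vetted clinical knowledge base.",
    "Passage three from a vetted clinical knowledge base.",
]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(corpus)

def local_llm(prompt: str) -> str:
    # Stand-in for an internally hosted model; a real system would call one here.
    return "[model response grounded in]\n" + prompt

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k corpus passages most similar to the question."""
    q_vec = vectorizer.transform([question])
    scores = cosine_similarity(q_vec, doc_vectors)[0]
    top = scores.argsort()[::-1][:k]
    return [corpus[i] for i in top]

def answer(question: str) -> str:
    context = "\n".join(retrieve(question))
    prompt = (
        "Answer using ONLY the context below. If the context does not "
        "contain the answer, say you cannot tell.\n\n"
        "Context:\n" + context + "\n\nQuestion: " + question
    )
    return local_llm(prompt)

print(answer("What does passage two say?"))
```

The constraint in the prompt, answering only from the retrieved context, is the basic lever retrieval-augmented approaches use against hallucination; connecting that retrieval step to structured knowledge graphs is the next step described in the exchange above.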
Video Summary
Dr. Jennifer Golbeck discusses the power and potential pitfalls of AI in health care. She dispels the notion that AI will take over civilization but cautions about the real issues that need to be addressed now. She describes the power of AI in predicting and analyzing data, and highlights the ethical problems with using those predictions to make decisions about individuals, along with privacy concerns around personal data collection by companies like Facebook. She mentions a case of racial bias in an AI algorithm used by a hospital system and stresses the importance of addressing bias in AI algorithms. She also weighs the benefits and risks of AI in health care, including unreliable or biased information produced by AI chatbots, and the ethical and privacy concerns associated with generative AI technologies such as deepfakes and AI text generation tools. She calls for greater awareness and regulation to protect individuals' privacy and prevent harmful uses of AI-generated content, and emphasizes considering the ethical and social implications of AI and making informed decisions about its use in health care.
Keywords
AI in healthcare
power of AI
potential pitfalls of AI
ethical concerns
privacy concerns
data analysis
racial bias in AI
unreliable information
biased information
generative AI technologies