AI and Physiatry: Opportunities, Pitfalls, and Dis ...
Session Recording
Video Transcription
Hey, everybody. Wow, that's loud. Thanks so much for coming out here today. I'm really excited to present on artificial intelligence with my esteemed panel. I'll let them all introduce themselves as we go. But anyway, thank you so much for choosing to spend your Thursday afternoon with us. Can we get the clock started just so we make sure we stay on time? Anyway, I'll introduce myself in the first part of this topic. My name is Evan Sheldon. I'm a clinical assistant professor at the University of Cincinnati, where I also serve as the associate program director. I'll be giving the first part of this four-part talk, with hopefully a panel discussion at the end, depending on whether we manage to stay on time, on artificial intelligence 101: what even is this AI thing? A brief disclaimer: these lights are extremely bright, so I literally cannot see any of you, so we'll just have to deal with that as we go. I have no relevant financial disclosures. So I don't think it's been possible to open any sort of news app, turn on the news, or read any sort of article over the last two years without seeing headlines such as these. In the New York Times, "Dr. Chatbot Will See You Now." Here's Forbes asking whether AI will take over your job. Artificial intelligence, or AI, has really taken the world by storm. And like any new technology, there are a lot of questions and anxieties around it, leading to headlines such as these, or these. Despite some of these medical AI headlines, I think one of the conclusions you're going to reach by the end of this talk is that AI probably won't be taking any of our jobs anytime soon. But I think there's a reasonable chance that AI is going to change how we do all of our jobs in the future. All these headlines are pretty eye-popping and attention-grabbing, probably largely because these sites are trying to, you know, get some clicks and some ad revenue. But they do raise some really good points that we'll hopefully be addressing over the next hour or so, such as: how do we use a tool now at our disposal that has the potential to comb through every single medical article and textbook that's ever been written, in milliseconds, to benefit our patients and their care? And for the bedside manner headline specifically: while a chatbot likely isn't going to be speaking with a patient anytime soon, how can we use something that can generate infinite amounts of content, again in milliseconds, to better discuss with and inform patients about their illness, disability, and disease process? But anyway, enough of the talk framing, and back to the subject of the first part of this talk, which is: what is artificial intelligence? Naturally, to answer this, I asked artificial intelligence to help define itself, and I think it did a pretty good job. First up is Microsoft's Copilot. It says AI refers to the capacity of computer systems or algorithms to imitate intelligent human behavior. When you ask OpenAI's ChatGPT the exact same question, it refers to the simulation of human intelligence processes by machines, particularly computer systems. These processes include learning, which is the acquisition of information and rules for using it; reasoning, which is the ability to solve problems and make decisions; and self-correction.
So in summary, AI, or artificial intelligence, is the ability for a computer to take on the full complexity of human intelligence: learning, reasoning, and problem-solving, plus the ability to self-correct. Are any of these tech companies at that point yet? No. But we might be getting closer and closer. So as I've alluded to, solving this problem of AI is a lucrative multi-billion-dollar, and perhaps in the future trillion-dollar, industry. And because of that, people want a cut of it, so everyone in the tech world is getting involved: some of the, quote-unquote, older players of the tech industry like Microsoft, Adobe, and Google, and some newer companies that no one had really heard of until a few years ago, such as OpenAI. Obviously, creating all these models is a really tricky business. And rather than creating one one-size-fits-all AI model, many of these companies are trying to create smaller, more focused AI models that try to do one thing really well rather than everything. That could be text generation, coding, or image or video generation; the list goes on and on about what some of these can do. So my aim with this really complicated slide is to demonstrate that there are a lot of different players in this space. Some will succeed, some will fail, and it's rapidly evolving; we don't really know who's going to win this game. But I also want to hammer home that as AI becomes more and more a part of your life in the future, which I believe it inevitably will, it's much more likely that you're going to be using multiple different tools to accomplish different tasks in both your personal and professional lives. It's probably not going to be a one-size-fits-all thing. You'll probably be using different agents and different companies' software to accomplish specific tasks. So here's another little look at the timeline of AI and where we might be on it now. The point here is just to show that these products are rapidly evolving and may be ready for primetime sooner rather than later. Just because an AI model that you used today, last week, or six months ago wasn't up to par then doesn't mean that tomorrow or next week it won't be ready for primetime and something you can be using. Again, this is a rapidly, rapidly evolving space. So most of the time when we talk about artificial intelligence right now, we're talking about something called a large language model, or LLM for short. That is what in theory makes AI special and different from what we've been using to gather information so far, like a Google search or an UpToDate-type search. As for explaining what a large language model is to all of you, I'm probably not the expert on it. I'm a physiatrist, not a computer scientist, but it's something I've done my best to understand in preparation for this talk and in my personal life, and I'm going to do my best. I think the best way to explain this, and hopefully you can tag along as we go through it, is that since this is artificial intelligence, the best way to think about how this technology works is to think about how your own human brain works, because that is literally what it is being modeled on. So first, we feed a model tons and tons and tons of data.
And when I say a ton of data, think about literally the entire internet, or literally every book that has ever been written, or every medical journal that has ever been written, or data, which is the exciting part of AI, that maybe hasn't been categorized and sorted yet by a person but just sort of exists out there. You can take all this information and feed it into these large language models, and it serves as the raw knowledge base. Next, the model needs to be taught what to do with that data. Answers are spit out by the model, just like a student in class (and if you don't mind scrolling down on my notes just a little bit, thank you), and coders can hopefully provide the model with feedback, via changes in coding, to produce better answers next time. So again, just like a student in class: the student gives the wrong answer, we give feedback on it, and hopefully that means the model is better next time, and so on, and so on. This is repeated over and over and over again to hopefully help these models learn and grow, and the theory is that with enough feedback, we'll end up with a useful model that can give us good answers.
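To make that feed-data, answer, feedback cycle concrete, here is a toy sketch in Python. It is emphatically not how a real LLM is trained (real models adjust billions of numeric weights via gradient descent), and all of the data and numbers here are invented for illustration, but it shows the same loop the speaker describes: the model answers, a human grades the answer, and the grade nudges the model toward better answers next time.

```python
# A toy illustration (not a real LLM) of the feed-data / answer / feedback loop:
# the "model" is just a set of weighted candidate answers, and each round of
# corrective feedback nudges the weights so better answers win out over time.
import random
from collections import defaultdict

# 1. Feed the model a pile of raw text (a stand-in for "the entire internet").
corpus = [
    "epidural injections may reduce radicular pain",
    "epidural injections cure all back pain instantly",  # low-quality data sneaks in
]

weights = defaultdict(float)
for doc in corpus:
    weights[doc] = 1.0  # every document starts equally believable

def answer(question: str) -> str:
    # 2. The model "answers" by sampling proportionally to its current weights.
    docs = list(weights)
    return random.choices(docs, [weights[d] for d in docs])[0]

def feedback(response: str, good: bool) -> None:
    # 3. Human feedback nudges the weights, like a teacher correcting a student.
    weights[response] *= 1.5 if good else 0.5

# 4. Repeat over and over: ask, grade, adjust.
for _ in range(20):
    resp = answer("do epidural injections help radiculopathy?")
    feedback(resp, good="cure all" not in resp)

print(max(weights, key=weights.get))  # the better-supported answer now dominates
```

After enough rounds of feedback the better-supported answer dominates, which is the intuition behind the "student in class" analogy above.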
So another way to look at this is a different type of product. Instead of a large language model, where we're feeding it literally the entire internet, we could create something called a small language model. So for example, say we as physiatrists wanted to create a physiatry GPT, and we want that model to hopefully help diagnose and treat our patients better. We probably don't want to feed it literally the entirety of the internet, because as we have all come to realize, the internet is maybe not always the most reliable source of medical information for patient care. So instead, we might be better served by creating a small language model, or a different type of AI model. What if, instead of giving the model the entirety of the internet, we give it the entirety of every PM&R-related medical journal, and that is the only information the model has to draw on and learn from? In theory, when you ask that model a question, it will give you an answer that can only come from the entirety of the PM&R literature that exists, and that information is probably going to be more useful and accurate for us. And you can see how you could create a bunch of these smaller language models to be useful for specific tasks. I have one that Google lets me create, my own GPT, that has been fed literally everything I've ever written and every email I've ever gotten, so it hopefully knows what I want and can spit out more information along those lines. We can get into how well that's working as we go, but you know, I'm not being replaced by a robot anytime soon. But despite all of this, like the human mind, AI can produce the wrong answer, and unfortunately, when it does, it presents that answer in a very confident manner. This is called a hallucination, and yeah, there are going to be some laughs as we read this slide. Some of these examples are a bit humorous; I intentionally picked these out, such as making sure to eat one rock a day for your digestive health, and, you know, if the cheese is falling off your pizza, maybe you should put a little bit of glue on the pizza to make sure that cheese really sticks. Obviously humorous stuff. But some of these answers might be a little less benign in the medical field. We've got one here encouraging patients with kidney stones to make sure they drink two liters of urine to be able to pass their kidney stone. And the even more concerning thing about this example, as you can see below, is that it cites two different sources that look like pretty good sources of information. I honestly didn't click into them, but I'm pretty confident that Mount Sinai is not recommending two liters of urine to pass a kidney stone. Yet it's cited right there. All of these examples were pretty embarrassing for Google. This is all from Google's LLM from about a year ago, and they're constantly working, like I said, training that model over and over to hopefully be better and better, and the internet definitely provided Google with some feedback on how these are maybe not the most correct answers. So when you get this stuff, and this is another big emphasis of the talk: be skeptical, take it with a grain of salt, and use some caution with all of it. So here are my references for part one of this talk, and I'm going to pass the discussion off to Dr. Richard Kaplan to talk about some of the more practical uses of AI. Richard Kaplan. Okay. Greetings. I'm Richard Kaplan. I'm a physiatrist. I've practiced for about the last 30 years in a pretty comprehensive inpatient and outpatient, functionally oriented practice, although currently I see outpatients. Currently I do a good bit of life care planning, meaning I assess damages for patients involved in personal injury litigation. All of that becomes pertinent because, as it turns out, it's a really good use of artificial intelligence. Feel free to email me or call me later on if you have questions about some of the specifics that I'm going to show; this is obviously just a short overview of some of the applications. I have no financial disclosures. Okay. Just to give you a little bit of background on myself: before I went into medicine, I was programming computers, and doing other things before that, and throughout medicine I've always been seeking ways to combine the two. I must say that artificial intelligence is probably the first time in all those years that I've really seen a good use of computers in medicine. I think I'm more excited about this than just about anything I've worked on previously. That said, I will say I'm self-employed. I sometimes say I'll be the last self-employed doctor in the USA. That, I think, is pertinent because some of the things I'm showing you are new software technology. It's probably a little bit harder for those of you in a big group to just go ahead and implement it. The good news is you get an IT department to help you out; on the other hand, other than for your personal use, it's probably hard to just experiment right away. So I'm going to show you some actual uses that I've put into my practice, which have been particularly helpful. Let me start out, though, with what not to do with AI. If we have time, we'll have a discussion on this among us. In any event, I would encourage you to discuss this on the Academy's online PM&R forum. I think there'll be a lot of discussion on this, and I realize there are different views on it. There are two things that I think AI is really not good for.
One is clinical charting, and the other is preparing appeals for utilization review and preauthorization. Why do I say that? As far as clinical charting, I think there's far more clinical and especially legal harm that AI can cause. There was a talk a number of years ago that I was a part of, about the "chart teratoma," when EHRs were coming about: someone would make an error, then someone else would copy and paste that error, and then throughout the chart you have this patient who's an amputee with symmetrical pulses, or the woman who's got a normal prostate exam, and it appears five times in the chart. You just can't explain that stuff away. You're going to find stuff like that if you use AI for clinical charting. I'll mention utilization review and preauthorization because I'm actually pretty involved on the side of doing the preauthorization; I do that for a number of insurance companies and others. I will say this: it is much, much more helpful to me for a physician who's requesting authorization for something to send me one paragraph written by a human than five pages written by a computer. Because if clinical care is getting to the point of preauthorization, and especially utilization review, it means there's an exception. That's why you want it to be covered, and you want to explain that exception. AI is never going to do that. AI is going to recite all the normal reasons why something should be approved or not approved. As a physician, you can say in one sentence, and certainly in one paragraph, why this case is different, and that's what's going to be effective in getting it approved in utilization review. So where is AI most useful? I have found several uses. The most important one I've found is searching the medical literature. Not that it's going to give you an answer by itself; you're going to have to read the medical literature. But it is extremely helpful in sorting the medical literature to find references that are right on point or that rebut something for you. Second of all, it is extremely useful in summarizing or outlining past medical records. That helps me enormously when I do medical-legal review, and it would certainly be relevant to anyone who does clinical treatment of complex patients. Not that it's something to rely on, but it gives you an initial overview of the case. You have to use software that gives you hyperlinks into the medical record, and you always need to check that the information is accurate. You are going to find occasional hallucinations, but what I find is that the AI gives me more helpful information that I would otherwise have missed than the other way around. I'm going to read the records anyway, but it turns out to be extremely helpful. What I'm going to do right now is go through a quick survey of some practical applications that you could use right now. This ranges from free, easy stuff that requires no computing or engineering background all the way up to stuff that's custom coded and takes a little more knowledge. This should be relevant to anyone, whatever your level of tech familiarity. Again, if you have questions about some of this, feel free to contact me some other time; I'll be glad to help you out. My number one suggestion for anyone to use, and this is true not only in medicine but really outside of medicine too, is Perplexity. Perplexity for me has virtually replaced Google.
It's similar to ChatGPT, except it gives you references; it gives you links to actual sources, which is extremely helpful. By the way, for those taking pictures, I think you have access to the full slides, but if not, I'll send them to you later, because the details are really helpful and I'll be glad to share them with you. In any event, suffice it to say that this is a query. The question asked is: what are some PubMed sources on life expectancy and renal failure? It gives an overview, and all the references it gives are PubMed articles specific to that point. If you try to do the same search directly on PubMed, you're going to find it takes a whole lot longer to figure out which of the articles is pertinent. And it's free. It works not only for medicine; if you want to find what restaurants are near the convention center in San Diego, or whatever else you want to ask it, it works similarly. Bumping up one more level, Consensus has a free version, and it has a paid version as well. Consensus is extremely helpful because it uses artificial intelligence focused on determining whether a claim is correct or not, and on giving you two sides to a question. It works best when you ask it something that has a binary answer. The example I gave here was: does lumbar fusion help for low back pain? Whatever your position is on it, it's helpful to know both sides, so you can explain both sides to patients, or to other physicians, or in the medical literature, or whatever it is. It will do a good job: it will tell you what percentage of the literature it found is pro and what percentage is con, it will give you the reasoning, and it will give you the links back to the sources. So whatever article it may be, whatever issue it may be, if you want to understand both sides of whether something works or doesn't work, or is safe or not safe, it's a really helpful way of finding that. If you tried to do a regular PubMed search, it would take you eons to do that and to sort out the background from those results. Taking it one step further, Scite (scite.ai) is another one that's free to some extent, and then there's a paid tier beyond that. Scite is somewhat similar to Consensus, but on a much more sophisticated level. You can ask it questions that don't have to be just yes or no; it could be any question in the literature. What it does is highlight portions of the article you're reading, find the articles that cited that article, and find the paragraph in which that citation was given. You can then click on that and go semantically to find other articles that continue that debate, and it can give you the full text of the actual articles when you find the ones you're interested in. This is probably a level of analysis that isn't really pertinent in clinical medicine unless you're doing research. But if you're writing an article about something and you want to really understand the background or the bibliography of a topic, this is really terrific for understanding who the key players are and who can speak to the question most accurately. The example I gave here was about epidural injections in radiculopathy. It correctly identified Dr. Manchikanti, who has in fact spoken here a number of times and is well known in the field. It gives his articles, and it gives the rebuttal articles that respond to them. It's really terrific for getting an understanding of all sides of a topic.
Going on beyond that, for those who have a bit more technical knowledge, there's a site called TypingMind. TypingMind lets you take all these other sources and integrate them in a way that formats the output to be custom for your needs. So I've written a script in TypingMind that lets me ask medical questions; it finds the relevant AI and formats the answer in a way that's helpful to me. A question I asked was: find sources that critique or rebut this source, and I put in a PubMed article about platelet-rich plasma for osteoarthritis. What it gives me is basically an essay about the good and the bad, along with the references for it, and it even pulls up the charts from some of the articles and gives them to you. Now again, I'm not saying you should not read the articles, but this is incredibly helpful, at least to me, in putting in perspective which articles I want to read, and then ultimately in saving that in my files for the future or taking notes with it. Going further, with a little bit more sophistication, you could take any of the AI models, and this one in particular is Claude from Anthropic. I think Claude is probably the most sophisticated of all the models currently. If you have a little bit of technical ability, you can write a custom script that specifically asks it to review the literature in the way and the format that you like. What I've done here is a table of articles in the literature about complex regional pain syndrome: there's a column for the title, a column for a brief bullet-point summary, a column for the pros of each article, and one for the cons, and this goes on for dozens of articles, however many exist. This is a really useful way, I find, of looking through the results of a PubMed search and then finding which of those articles are going to be relevant to whatever question I'm looking to evaluate. You can then have it write the output in a different format. This was a case where, for some reason, the issue came up: what are Modic changes on an MRI? I've been in legal cases where some people say they're useful and some people say they're not useful anymore, so I was curious about the background on that. It goes through and writes the essay, gives me the reference to the original article by Dr. Modic, as well as the articles that have subsequently supported it and the arguments that have come out against it. It's really helpful for quickly understanding a topic like this. If I wanted to go through PubMed to get that same background, just to find those five or six articles about it, it would be really hard to do. You'd probably get 500 articles out of the PubMed search, and you wouldn't know which ones to go over.
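For readers who want to try the custom-script approach Dr. Kaplan describes, here is a minimal sketch using Anthropic's official Python SDK. The prompt wording, the article titles, and the model alias are placeholders rather than his actual script; the only assumption about your environment is an ANTHROPIC_API_KEY variable.

```python
# A minimal sketch of a custom literature-table script against the Anthropic
# API. The article titles below are invented stand-ins for the results of
# your own PubMed search; swap in whatever model you have access to.
import anthropic

client = anthropic.Anthropic()  # expects ANTHROPIC_API_KEY in the environment

ARTICLES = [
    "Efficacy of spinal cord stimulation in CRPS: a randomized trial",
    "Long-term outcomes of ketamine infusion for CRPS type I",
]  # hypothetical titles, stand-ins for a real search result

prompt = (
    "For each article below, produce a table row with four columns: "
    "Title | Bullet-point summary | Pros | Cons.\n\n" + "\n".join(ARTICLES)
)

message = client.messages.create(
    model="claude-3-5-sonnet-latest",  # model alias is an assumption; use your own
    max_tokens=1024,
    messages=[{"role": "user", "content": prompt}],
)
print(message.content[0].text)  # the formatted literature table
```

Because this goes through the API rather than the free chat interface, it falls under the API data-handling terms discussed later in this session.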
Going beyond that, for those who wish to do so, there's a site called Coda, which is a way of combining a database with documents. Basically, this is a way of customizing the output of these searches in an even more sophisticated way. What this has done is take a search that I ran on one of the other sites, Consensus, and add a column on the right side that I call Claude PM&R: it pulls the articles from one site, then goes to the AI and puts them in perspective. What I've asked it to do is look at each article from the perspective of a practicing, functionally oriented physiatrist. And that's exactly what it does: it pulls up suggestions such as the six-minute walk and Barthel ADL scores, things that are classic indicators for PM&R and really relevant, but that might otherwise be buried in the weeds of the literature. So that helps me take a search aimed at the general medical population, or general medical physicians, and really put it in my perspective as a physiatrist. And if you wanted to change it to the perspective of an interventional pain physiatrist, you could do the same thing, however it may come about. Another example in Coda is similar: in this case, I've asked it to take a specific PubMed article that you put in, and it finds other articles in PubMed that are similar to it, which helps give you a bigger perspective on things. And then finally, I mentioned before that review of medical records can be quite helpful. This is another script customized for Claude, where I've asked it to go through a set of medical records in a way that's applicable for a physiatrist or a life care planner reviewing the records. This is a medley of random medical files that I had, with names removed. Basically, it goes through and gives you a chronological summary of the case, in various formats with dates, and those hyperlinks go right to the original pages of the medical record, so you can verify them and go over that. You can change it to whatever format you want. Again, if I have a case with thousands of pages of medical records, that's really helpful in terms of putting it in perspective before I review the details. And that's everything that I have. In the slides that you have are the URLs for the particular sites that I went over. If you have other questions, let us know. Thank you. Good afternoon. Thank you for the overflow crowd. I share Evan's analysis that we sort of have a blinding light on the right-hand side of the audience. There are a lot of seats in the front, so don't be shy; we don't bite. My name's Mike Saulino. I am Chair of Physical Medicine and Rehabilitation at Cooper University Health Care in Camden, New Jersey, across the river from Philadelphia. I want to thank Evan for herding the cats. We actually met electronically in a discussion about AI, which led to our submission here, and this is actually the first time we've all physically been in the same room. So it just shows you the power of electronic communication. I've been asked to talk a little bit about the use of artificial intelligence in the administrative aspects of medicine, and this is a bit of a personal journey. I am coming up on my third anniversary of being a chair, and it's certainly been an honor and a privilege to serve in that role. I've been an actively practicing clinician and have been involved in some other administrative aspects of medicine, but this represented a step up. When I walked into Cooper for the first day, I realized that I had an awful lot of responsibility that maybe I wasn't as thoroughly prepared for as I had been in terms of clinical medicine. AI has been a good partner for me in developing my administrative skills, and I'll hopefully demonstrate that for you. These are my disclosures. Most of them are clinically related; there's not a whole lot that is super relevant to today's discussion. I have no idea if the use of artificial intelligence is an off-label discussion or not, but I'll put it out there nonetheless.
And because I have a leadership position, I do have to put forth that these comments don't represent any official position of Cooper University Health Care, although I will tell you that one of my co-CEOs, Anthony Mazzarelli, is a huge, huge, huge AI supporter and actually helped me utilize artificial intelligence in my own administrative journey. So, in an effort to demonstrate how AI works a little bit, I actually need a volunteer. Is there anyone in the audience who has become a parent within the last year? Raise your hand. Any first-time parents? I can't see in the blinding light. Come on up. I have to stay here because we are recording and there's no cameraman following us. Hello, nice to meet you. And your name is? Alethea. Very good. Congratulations on the newest addition to your family. How old is the newest addition? Ten months. Ten months? You look surprisingly well-rested for the mother of a 10-month-old. It's an act. Awesome. You hide it well. So I have a couple of questions. They're not unduly personal, but just to give an example of things. By any chance, do you shop in any of what we would call big box stores? Walmart, Target, Kohl's, Amazon, anything like that? Yes. Gotcha. Okay. Do you have a recollection of things you bought around the time you knew you were going to become a mom? I bought a ton of stuff on Amazon. A ton of stuff on Amazon? In terms of, well, Kohl's has Carter's, so a lot of clothes were bought. Okay. And Amazon was a lot of pacifiers, bottles. Gotcha. Cleaning supplies, all of those things. Okay. How quickly after you found out that you were pregnant, taking your immediate partner out of play, how long into it did you start to tell other people? I told my parents right away. Okay. But everyone else, I waited until like the 8-to-12-week mark. Okay. So two to three months or so. Yeah. How quickly after you became pregnant did you start getting ads? Oh, before I even told people. Okay. Do you have any idea how that happened? It's like I took a picture of a positive pregnancy test and I got ads. Okay. My husband got ads. Okay. Gotcha. Did you get any physical documents in the mail, like flyers or anything? I can't remember if I got them early on, but I do know I got them somewhere throughout my pregnancy. Okay. Gotcha. So some of these questions are going to sound really bizarre. In those first two to three months, did you buy a big floppy hat? No. Did you buy an area rug? No. I wanted one though. You wanted an area rug. If you had the opportunity, you would have bought one? Probably. Okay. Gotcha. I have hardwood floors. Okay. Gotcha. Did you buy any candles? No, I did not. Okay. Did you buy any unscented lotion? Yes, I did. Okay. Different from what you would normally buy? Yes. Why is that? I was afraid of what the scents and everything would do during pregnancy. Okay. Gotcha. And all the chemicals and everything. Do any of those things, you know, an area rug, a floppy hat, candles, unscented lotion, make you think that you were pregnant? No. Okay. What if I told you that if you bought a floppy hat, an unscented lotion, and an area rug, there's about an 87% chance that you were pregnant? That's crazy. A little scary, huh? Yeah, very. Gotcha. All right. I'll explain this. Thank you so much for volunteering. Big hand there. How many people know the story I'm going to tell? There are a few hands. Target predicts pregnancy.
A Target statistician (anyone who shops at a big box store has a customer ID) put together a collection of 25 products that would help predict whether someone was pregnant or not. Obviously, if you're a big box store, you want to get those ads out to new moms and dads so they start purchasing earlier. A dad went into Target saying: you're sending my teenage daughter advertisements for diapers and baby stuff and pacifiers? What are you thinking? That's offensive. The store manager was like, jeez, I'm sorry; that didn't come from us, that came from the mothership. The store manager called the dad back a couple of weeks later and said, I'm really sorry that we offended you. The dad came back and said, hey, it's actually me who has to apologize. My daughter was pregnant, and she hadn't told us yet. Basically, what the artificial intelligence did was look at all of our spending habits and see how they correlate with a later event. Now, those things that I talked about were actually real: if you buy a big floppy hat, an area rug, and unscented lotion, you have about an 87% chance of being pregnant. Now, I know what a lot of you are thinking. That makes absolutely no sense; what does an area rug have to do with being pregnant? Maybe you could explain the unscented lotions. I've never been pregnant, I'm not an OB-GYN, so I don't know some of this data, but apparently women in early pregnancy are very sensitive to smell, so they tend to buy unscented things. I see a couple of women nodding while I'm saying this, so I'll take it to be true. That makes logical sense to us, but a floppy hat and an area rug? Those are illogical concepts. But that's what artificial intelligence does. We are intuitive beings; we look at things and ask how they make sense to us. AI doesn't do that. It just takes the raw data and asks: what correlates with this? That's where it comes from. Again, proving it with a volunteer whom I did not set up ahead of time, she had about an 87% chance of being pregnant based on those purchases. This is not an exhaustive list; in fact, Target has not released all 25 items. These are the things that are in the lay press. Logical items: unscented lotions, soap, laundry detergents, vitamin supplements (especially those with calcium or magnesium), large quantities of cotton balls, hand sanitizers, washcloths. But what do these do: a large purse, an area rug, a big floppy hat? They have nothing to do with being pregnant, but that's the way artificial intelligence works.
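To show what "just take the raw data and ask what correlates" looks like in practice, here is a toy Python sketch. The purchase histories are invented, and a real retailer would use far more data and a proper statistical model, but the mechanism is the same: rank items purely by how often they co-occur with the outcome, with no notion of whether the association is logical.

```python
# A toy sketch (invented data) of the correlation mining described above:
# no intuition, just "which purchases co-occur with the later event?"
from collections import Counter

# Hypothetical purchase histories labeled with the later-known outcome.
baskets = [
    ({"unscented lotion", "area rug", "floppy hat"}, True),
    ({"unscented lotion", "cotton balls"}, True),
    ({"scented candles", "area rug"}, False),
    ({"floppy hat", "unscented lotion", "vitamins"}, True),
    ({"scented candles", "vitamins"}, False),
]

item_counts, item_hits = Counter(), Counter()
for items, pregnant in baskets:
    for item in items:
        item_counts[item] += 1
        if pregnant:
            item_hits[item] += 1

# Rank items purely by how often they co-occur with the outcome;
# "area rug" scores on correlation alone, and logic never enters into it.
for item in sorted(item_counts, key=lambda i: item_hits[i] / item_counts[i], reverse=True):
    print(f"{item}: {item_hits[item] / item_counts[item]:.0%} of buyers were pregnant")
```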
Evan touched on this a little bit. This is what's called the superhuman fallacy: when an artificial intelligence mechanism predicts something correctly, especially when a human, perhaps even an expert, was wrong, we make the jump to saying that the artificial intelligence is smarter than us. That's not really the case; it just has a larger database. The real advantage, and what we need to take advantage of, is the combination of artificial and human intelligence. A good example is autonomous vehicles. My wife has an electric Mustang. It's a hell of a fun ride to drive. Her car has level two autonomous driving, and there's no way I can keep the car in the dead center of the road more accurately than the autonomous driving can. You just can't. We're human. But if I'm driving down the road in her car and I see a fire engine two blocks away, I know that there's something there and I want to get off that road, and I pick that up much faster than the autonomous driver does. So what's the best thing? The best example is having both: a human driver with autonomous support. That's the same thing we need to be doing now with artificial intelligence, applying our own intuitive intelligence, which is very powerful, and using it together with the artificial intelligence. So how does this apply in administrative medicine? First, consider the problem that you're trying to solve. And the underlying hint here in artificial intelligence is that you are not the first person to have this problem. Someone else in the world has had the same administrative problem, and you're going to tap into artificial intelligence to help find your answer. After you've defined the problem, the next person you should call is your privacy officer or compliance officer, before you start digging into data. You have to ask the question: am I allowed to look at this data? Then do your own research. Find out whether there are predictive models, just like Target did with pregnancy. Can I use predictive modeling through artificial intelligence to help me solve my problem? Odds are there are predictive models for almost every administrative problem we need to solve. And literally, currently, I use artificial intelligence almost on a daily basis to help me think through administrative problems. Then contact your data analytics team and ask: do we collect the data to support this predictive modeling? Sometimes your data analytics team can do this internally, and other times you might need to use a third-party app or mechanism to help analyze that data, in which case you might need to go back to your privacy and compliance officers before you let a third party use it. Once you have that, apply both human and artificial intelligence to the problem, and it will help you with your administrative tasks. Here are just a couple of examples that already exist for predictive modeling that would be highly useful in administrative medicine. How many people in the room who do administrative work would like to have a better handle on some of these topics? Just about everybody, right? Wouldn't it be great to be able to predict with some accuracy who's going to show up or not, or whether a patient's going to be readmitted? All of these things are applicable in administrative medicine. I'm going to take a quick example of something I had to do relatively early in my tenure as chair that scared me initially, but that became relatively straightforward when I used artificial intelligence. I was asked to develop and implement a policy to mitigate burnout in my department. Now, I think when each of us hears that charge, we instantly, as intuitive beings, have an idea of what we would do in our own little ecosystem to mitigate this risk. But would it work? Is it appropriate? Was I going to get scolded when I presented it to my leadership? So I asked artificial intelligence to help me look at some of these things, and these are some of the factors that predictive modeling came up with. Some of these make sense, right? The longer you work, the predictability of your shift, working on weekends, or having high volumes on your list; some of those are easy to measure. Some of them are pretty intriguing.
Like, for example, your data analytics team can look at your faculty and ask: how often do your faculty members log into the EMR off-hours to get their notes done? That's a predictor of burnout. How much time do they spend writing each note? How many keystrokes or clicks do they use? How many times do they have to switch from one task to another to get their note done? Work environment factors might come from surveying instruments that your group might use. There are some personal characteristics: younger and less experienced physicians are more prone to burnout than more experienced physicians. I didn't know that when I started this out. Some of those make intuitive sense. There are also some unusual factors. One thing that I didn't put on this list is something called microbreaks. Microbreaks are the short cognitive rests that allow us to remain refreshed. For example, how many people who work in big institutions are precluded from, say, going to a website to order a pizza or to check a shopping order? Do some places restrict that? Do you know that restriction actually contributes to burnout? If your institution allows you to order a pizza for lunch or to check on an Amazon order in the middle of your day, that actually helps reduce burnout. That's actually one of the items I brought to my leadership; I showed them the data that predictive modeling gave me, and they adopted it. Obviously, they didn't open the whole wide Internet to every person, but they allowed folks to order from local restaurants, to check an Amazon order, to check their flight reservations, et cetera. This is an example that might be nonintuitive. Now, some of these things would probably invade people's privacy. For example, we do know that people who have interrupted sleep are more prone to burnout, but I can't ask my staff to wear a sleep monitor and have that fed into a central database. That's where you have that interaction between human and artificial intelligence. For data collection, again, go back and ask your privacy and compliance officers what items can and cannot be tracked, and have your analytics team provide reports to you on a frequent basis. I get monthly reports now on all of these factors, and if I see, for example, that a particular faculty member is logging into the EMR off-hours more often, that's a time when maybe I need to interact. I also had full disclosure with my faculty. I didn't hide any of this. I said: every department chair at Cooper was asked to do this, and I'm going to be tracking this data, not because I'm spying on you, but to help me support you and the entire team. We already had anonymous survey instruments that helped me track this, but all of this became integrated into a burnout surveillance tool. We then were able to apply human intelligence to the plans that we got.
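As a concrete illustration of what such a surveillance tool might compute, here is a hedged Python sketch. Every weight, threshold, and field name is invented for illustration; a real tool would use a validated predictive model and would be vetted by privacy and compliance first, exactly as described above.

```python
# A hedged sketch (all weights and thresholds invented) of turning the monthly
# EMR-usage signals mentioned above into a simple burnout-risk flag for review.
from dataclasses import dataclass

@dataclass
class MonthlyUsage:
    name: str
    off_hours_logins: int       # EMR sessions outside scheduled hours
    minutes_per_note: float     # average documentation time per note
    task_switches_per_note: int

def risk_score(u: MonthlyUsage) -> float:
    # Weighted sum; in practice the weights would come from a validated
    # predictive model, not from a slide.
    return (0.5 * u.off_hours_logins
            + 0.3 * u.minutes_per_note
            + 0.2 * u.task_switches_per_note)

faculty = [  # hypothetical monthly report rows
    MonthlyUsage("Dr. A", off_hours_logins=2, minutes_per_note=6.0, task_switches_per_note=3),
    MonthlyUsage("Dr. B", off_hours_logins=19, minutes_per_note=14.5, task_switches_per_note=11),
]

for u in faculty:
    score = risk_score(u)
    flag = "check in" if score > 10 else "ok"  # arbitrary cutoff for the sketch
    print(f"{u.name}: score {score:.1f} -> {flag}")
```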
Hopefully, that gives you a quick example of how you can use both human and artificial intelligence in administrative tasks. Taking on a chairmanship or a program directorship or a medical directorship is a daunting job with a lot to keep track of, and now we have something that can take the bricks out of the briefcase. And remember, don't think too hard with that brain; that's what the artificial intelligence is for. It's only temporary. I am now going to pass the baton on to my colleagues, and then we'll have some questions afterwards. Thank you for the opportunity. All right, I'm going to invite all the lovely people in the back to come in if you want. I'm also going to mention that all of the slides are in the first upload, and some of the uploads need to be fixed later, but you do have access to all of these. I'm Jen Zumsteg. I'm in Seattle, and a lot of the examples I use today will be from my neck of the woods, but I hope this will help give you some ideas about what you might want to look at and what's relevant to your role. We're going to focus mostly on education. I'm a former residency and fellowship program director, and we'll also look at how this interacts with some policies, because I think that can blindside us a bit and is certainly important in this environment. Who's both excited and really freaked out right now? Yeah, yeah. So I'm going to invite you to think broadly about education, whether that's how you educate your patients or whether you're in a more formal educational or academic role. This is just an invitation to think about some of those opportunities and challenges and what might be exciting, and to find out which policies might be relevant to you. I spend most of my time opting out of these things, not because I have an aversion to the technology, but because I'm at one far end of the spectrum in terms of corporate data use and privacy. So I don't use these as much, but I think it's great to get this broad perspective. All right. The first thing I want to bring up is just that we're all learning this together, and you get a sense of how many different ways there are to approach this, how complicated this can get, and how rapidly it's evolving. This is a snapshot from our PM&R milestones, where understanding the IT systems in our health systems is like a level one milestone. So congratulations to faculty, who get to learn this and teach it at the same time. I think this is actually a huge opportunity for faculty and learners to assess skills in evaluating AI-generated material, right? You're hearing that over and over from us: this is a tool, not a replacement. We need the human part of it, but there's a lot of opportunity here, and we need to learn how to do this together. Physiatry, I hope, is maybe better than average, but it can be very hard as an educator to think about how you're going to manage learning together if that's not how you usually approach things, if you're usually the expert. So taking a pause and thinking about how you're going to do that could, I think, be really helpful. Now, thinking about program administration, you've also heard lots of examples of how useful generative AI can be for summaries, right? Think about all the data within your program that you need to summarize, whether that's for a program evaluation or an annual report. I spent a lot of time working on a handbook and looking for templates, and if I had just had a kind of residency handbook that I could generate from our set of existing policies and then edit, it would probably have saved me a lot of time. And other summaries, like minutes. You all may interact with programs from big companies in my neck of the woods that start with an M, or be on a call that starts with a Z, and there are summaries there: you can ask it to come up with to-dos from the meeting, and it can give you a real-time transcript. That's not okay with me, so I talked to my organization about how to opt out of that and not have my data go back to the corporation associated with my name, and they're thankfully very supportive of that, which is great.
But there are some other things to think about with minutes. For example, if you're feeding data about your clinical competency committee into a generative AI, it now has that data, right? So there are ways to set this up, but we have to be a little bit intentional about it, and hopefully, as a community, keep building this together. I'll share some examples that are hopefully relevant, and you can find what's more local for you. This was very interesting and helped my organization support my options and really look at how I was going to opt out of things or how we were going to consent in a meeting. Who's been at a recorded meeting that says you're giving your consent to be recorded? We've all clicked that little button. So my question was: what if I don't consent? Do I have to leave the meeting? Is that okay? This is my workplace. In Washington State, my employer can record me any place that privacy is not assumed, and if I'm in a committee meeting, that's not a private place. It's not the restroom, right? Actually, in Washington State this is evolving. I work at a public health district hospital near the airport in Seattle, and there is some advisory information that if you use generative AI to do meeting minutes, those may become public record. If you're in a government institution, for the most part your formal meeting minutes are public record, because you're a public institution. So again, think about the clinical competency committee. I sit on the clinician well-being committee, where we're having pretty sensitive conversations, not about individuals per se, but about an organization. I sit on the stroke committee, where we're talking about, usually, all of our failures and how to do better. Those minutes are subject to Freedom of Information Act requests, and everything is discoverable, so really think about when this might help you and when you maybe don't need to click the record-and-transcribe button. This is just an example of the generative AI guidelines from the University of Washington. Sorry this is so small, but again, you have the links there. I think it does a really good job of summarizing that, again, you're sharing this data with a third-party organization, right? So if you're, as a program, for example, trying to take annual program data and integrate it into a summary, that's a lot of data, and it's very easy, I think, as a human to miss some of the themes. You could ask the AI to summarize for you after taking out some of the identifiable data, right? You probably don't want to put your institution in, you don't want to put the sites in, whatever you're acknowledging for that. And even as a, you know, former faculty member, I would probably have gotten better information about myself if I had fed my qualitative feedback into the AI and had it summarized for me, which takes my emotional reaction, my own biases, et cetera, out of the mix so I can see what my own learning goals might be. But I wouldn't have included my name or my institution; I would be looking for it to summarize basically the adjectives or behaviors that were there, so that I could interpret it and use it for myself. This talks a little bit more about the data, and if you're doing research or other educational things, there really is some risk here, but you can also design it appropriately.
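As a concrete example of the "take out the identifiable data first" step, here is a minimal Python sketch. The identifier list and the feedback text are placeholders; real de-identification is much harder than a string substitution and should follow your institution's guidelines.

```python
# A minimal sketch of scrubbing obvious identifiers from feedback text before
# sending it to a third-party summarizer, per the caution above. The names and
# institutions below are hypothetical; a real workflow needs a vetted
# de-identification tool, not a hand-maintained list.
import re

IDENTIFIERS = ["Jane Smith", "University of Example", "Example Medical Center"]

def scrub(text: str) -> str:
    for ident in IDENTIFIERS:
        text = re.sub(re.escape(ident), "[REDACTED]", text, flags=re.IGNORECASE)
    return text

feedback = "Dr. Jane Smith at University of Example communicates clearly."
print(scrub(feedback))  # -> "Dr. [REDACTED] at [REDACTED] communicates clearly."
```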
So another very data-heavy place is program application materials and processes. I should say I'm focusing a bit on GME because that's where my expertise is, but hopefully this is relevant regardless of where you're teaching, and when thinking about things for patients as well. Letters of recommendation and personal statements would be very easy to generate. In January of '23, Cherry Junn and I ran ChatGPT-generated personal statements for our brain injury fellowship. They were excellent, and they took just a few seconds to generate. That's in the Blue Journal if you're interested, and that was many ChatGPT generations ago. I think that actually lets you dig into questions like: what are these tasks, and what are we really trying to understand about people, right? Are you trying to use the personal statement to understand the applicant's values and career goals and how they match your institution? Or are you looking for a personal writing sample in English for your program? Those are two very different goals, right? And you might evaluate them in very different ways, I would argue, entirely transparently. But I think, especially if you are from a marginalized community and you don't have mentors to help you, maybe you don't have somebody from a similar background, there's this opportunity to get some of that feedback. Perhaps some of you have submitted an article to a journal that asks you to have the AI edit for sentence structure before you submit, to try to help correct not just the spelling but some of the grammar, to optimize it before submission. We did that with our ChatGPT article, which I just found amusing, but we actually rejected most of the suggestions because it was our voice, right? I totally understand checking that the grammar is correct and that it makes sense and is readable, but I think you can use things both ways. So yes, lots of summarizing, I think, can be helpful. This is the current University of Washington PM&R residency and fellowship program disclosure, which I think does a nice job of really pulling out the large language model piece of this: encouraging people to express their authentic voice and being transparent that the program is not using AI on the application review side. No matter what your stance is or where this evolves, I think these kinds of statements can be really helpful for everyone involved. And then I'll end with this great article from the Journal of Graduate Medical Education, giving examples that they couched, I think appropriately, as potential research areas for generative AI in GME. Here we all get to be excited and freaked out again, probably, in my opinion. So for example: you take a lot of time to write an educational case, you're trying to use it for learners, and you're trying to progress their learning over time. I would have really valued having the opportunity to write one case and then ask the AI to generate five similar cases, perhaps with different demographic parameters, that I then proofread or take back to a committee, rather than writing the cases from scratch over and over. They're also thinking about faculty clinical teaching performance, and formative feedback on communication based on AI analysis of videos of learner interactions with patients or colleagues. You can take this anywhere. Perhaps a different thought: EHR data to look at clinical reasoning. And this is really like statistical word prediction.
And you have to give your model parameters, like the other speakers have talked about. If you know what your target is and you can put in those parameters, then you can get a better model back. Again, I appreciate that they talk about research; there's a lot of risk here with bias and with what your parameters are, and you need to really understand that. I mentioned the clinical competency committee; that perhaps makes me a little bit more nervous, but it could be exciting or more informative. I think the faculty avatars for coaching, advising, or assessment are interesting. We've been talking about language, but if you're also looking at generative image models, it could be very helpful to diversify where you're getting information. What if you created an avatar for a learner at any level that had a similar background or looked like them, or was their future self five years from now, to be able to have some of that connection potentially? Maybe that's a horrible idea, and the research will tell us that, but I think it's a great place to start. There are other things in terms of simulations, and I think you can continue to see those examples. Thank you for your attention. All right. Thank you all for coming. At this point, we'd like to, there's one, thank you, overflow crowd. Again, there are seats if you want; to keep standing is entirely on you. We have a little bit of a panel discussion where I have some questions that I was going to ask people, but I can see there are already a lot of questions coming in online, and I have a feeling some of you might want to ask questions. If people want to ask questions, if you don't mind lining up at the microphone, since this is a recorded session, that way I can gauge whether I need to keep asking people questions or whether we actually have some questions in the room. I'm going to get started with some of the pre-prepared questions, though. The first one is for the panel, and I think we'll all have interesting thoughts on it: do you think AI should be used in the clinical setting? And that microphone should be working now. Okay. I think, as several of us mentioned earlier, AI should be used in the clinical setting, but never as the primary means. I think it's just that mic now. Okay. I think AI is great in the clinical setting to produce a differential diagnosis as well as to verify a hypothesis, but it shouldn't be used as the primary means of saying: what's the diagnosis? It's a tool, but it doesn't replace the physician. You know, it's a lovely, very general question. I think the short answer is yes; it's here. Again, I spend a lot of time perhaps opting out, so I'm not trying to shortcut that. My institution is using generative AI within our electronic health record to draft inbox responses to patient messages, as well as to do some other work. And we didn't talk about ambient technology today, but a lot of this stuff is already live. The other reason I think it's here, and why we need to think about how we want to use it clinically and appropriately rather than whether it's here: JAMA now has an AI journal. If you're interested in this, they recently launched JAMA AI, so stay tuned. I'll turn that back on. All right. We've got a lot of questions in line and in the chat. I'll go first with: what constraints has HIPAA placed on your current use of AI? I do not feed any actual patient data into an AI model. I think HIPAA is pretty clear on that, or at least that's how some institutions interpret it, and I know we are at the University of Cincinnati.
I was hoping this would come up, and it's a question we're getting in the chat as well: ambient AI. We are going to have an AI scribe rolling out where you use your phone, you've got an app on the phone, I guess I hit the record button, and then it takes the entire patient interaction and puts it into a progress note. So in theory, no more typing progress notes; it'll do the whole thing from head to toe. We'll see how well that works. Obviously, I imagine HIPAA has been taken into account when the University of Cincinnati partnered with this company. But yeah, I'm not feeding it any patient data at the moment. I can extend a bit beyond that. Obviously, for those who work for a large institution, probably most of you, there are policies that exist, just as Evan says, and you need to follow them. But beyond that, I think there are two questions: are you using it to gather information, or are you using it for specific patient information? If you're just doing a literature search or trying to get clinic information, as long as you don't include identifying information, then you can use whatever you want; there isn't any issue there. But when it gets to clinical information, such as the examples I showed you summarizing a chart, that does become an issue. I will say this. First of all, you can't put clinical health information into any of the free sites, because the reason they're free is that they're collecting your data. Second of all, OpenAI, which makes ChatGPT, has made it their policy that if you go through their API, in other words, if you're doing this with your own software that you've written, they're not going to use your data for training. But Anthropic, which makes Claude, which is what the examples I showed you use, has gone one step beyond that. They have actually agreed to sign a HIPAA BAA, a HIPAA Business Associate Agreement, which is the formal legal obligation. So if you're trying to use Claude with your own application and you get the BAA from Anthropic, then it's totally legal to use it, because all the privacy protections are there, just as they would be for anything else. And for those of you who have big IT departments, they've probably done that or something similar. Yeah, I think that's a great point: understanding the IT governance at your institution, or how you're running that in your practice, is very important, as is how you're vetting products and what those contracts are like. And there's some consolidation in the market through Microsoft and Dragon that is creating a pretty large dataset there. I will just mention, related to the ambient technology, which is, again, the microphone that listens to your encounter and writes your notes for you: one thing is, I have very introverted preferences and think by writing, or typing, or internally. And it can only write what you say and what it hears on the microphone, right? So I've made the case in my organization that you're not going to get the best clinical decision making from me by having that be the primary thing, because I synthesize by writing or typing and not so much by talking, and that seemed to land just fine. I think the other part for us to think about as clinicians, and certainly in education, is that as physicians we're going, I think, to be increasingly asked to consent our patients for interactions that are actually data use policies. And I don't think most of us are prepared for that.
If you have any work applications on your personal device, communication, EHR, take a look at the privacy policy if you haven't yet. It's quite interesting; most of the time, you can learn a lot. But I think informed consent for using ambient technology, in particular, is much more than just saying, hey, I have this microphone program on my phone, is it okay if I use that for our interaction? Again, some people are not going to care. If you use ambient technology on me, I'm going to find a different institution, for now. So, all perspectives welcome. The only thing I would add is that many institutions have very specific rules about your use of AI in any context. In other words, if you ask ChatGPT to write a treatment plan for a transtibial amputation and it spits one out, and you try to copy and paste it from ChatGPT into some EMRs, the system will catch it. They've already thought of that. Yeah, I agree with everything that's being said. With terms of service, it's super easy, and I think we've all done it: you get the paragraph of information, you scroll down to the bottom, and you click OK. Their terms of service today may not be their terms of service tomorrow. So just be careful with this privacy stuff, especially related to HIPAA. Oh, yeah. Sorry. I was just going to add, based on that, it sounds like you all haven't used the ambient tools, but has anyone in the room used the new AI in their EHR and wanted to comment on it? Because I feel like there are probably a lot of people in the room who would be interested to hear how that's worked out. Has anybody been part of a DAX pilot? Anything you'd like to share? So DAX is Dragon Ambient Experience, and Dragon was acquired a few years ago by Microsoft. "For me, it just takes a lot of the cognitive burden away of, like, how..." Yeah, so I can go around with the microphone if other people have comments, but the person in the audience, and I'm sorry for having to repeat this, but it's for the people online, uses one of the AI models for ambient note summary. In her experience it's really useful when she sees a lot of patients with back pain who have very similar complaints: it helps her organize at the end of the day, when you've got 25 patients who had very similar chief complaints and similar presentations, which one was this? Which one was that? And it reduces cognitive burden throughout the day. Hopefully I summarized that well; reducing cognitive burden has been a theme of the talk. Yeah. Can I just say one other thing? With the ambient technology, I think you need to give some thought to what the advantage is, or what the goal is. If the goal is to document things to meet billing requirements, I think the technology is excellent for that. If the goal is to communicate your thoughts to other clinicians, or to document for legal reasons years in the future, I'm not certain that's the case. Because, similar to what I said earlier, I think sometimes the one paragraph from the experienced physician that really zeroes in on what matters is more helpful than the three pages the AI comes up with. So I would suggest, if you do use the ambient tools, maybe you still want to supplement them and have one summary paragraph that says what you need. Sort of like the resident who writes a detailed H&P, but the attending comes by and writes the summary paragraph, which really zeroes in on what you need. I think you still need that human-written paragraph, even with the AI technology.
I think we had one other person who has used the ambient model. Yeah, so I used DAX probably, I don't know, maybe about a year and a half ago now. I tried it out for three months, they encouraged me to try it for three more months, so I tried it out for six months. And then I actually stopped using it. I didn't think it was... So, a couple of things. One, I think for me, when I'm done with the day, I like to be done with the day. So when it's five o'clock, I don't want to go home and work on notes. They said it would take less than one hour, but it ended up taking one to four hours after I closed the encounter for them to process it. At least at that time, it was the AI model, and then someone comes in afterwards to clean things up. So one, I thought it took a really long time. Two, I found that a lot of the information that I thought was relevant and important was not being documented. Fortunately, I remembered a lot of those things, but that's not always going to be the case, because I was so reliant on DAX at that time. I didn't write anything down, with the assumption that it would all be captured, and it wasn't. And at the same time, there was also other information that was just unnecessary and superfluous and didn't really help the notes. They came back and said some modifications had been made and asked me to try it out again. I haven't done that, but I didn't have a great experience the first time, despite what I was told it would do, and so I stopped using it and just relied on what I've always done. Yeah. That's great. Thanks for sharing. So I also used DAX originally, and DAX is a mix of AI and a scribe coming in and writing everything out for you. So things can get lost in translation, because I think the medical scribes are often people trying to get into med school, is what some people have said. The language that we know as physicians versus the language of, say, a college student interested in medicine are very different, so things can come out sounding very strange, and they did. Honestly, I hated DAX for a really long time. But I recently started Suki, which is similar to DAX Copilot, and is fully AI. And along with the cognitive burden, the time burden of documentation is gone. I can be fully with a patient for my entire encounter, and what I'm using the computer for is discussing their lab results, discussing their imaging, showing them their imaging. And because I'm saying all of this to them, it is fully documented in the note without any misinterpretation of what I'm telling them. If I say, well, I want to start you on this medication, let me look at your lab results and make sure everything's okay. Oh, great, your kidney function is good. You have no history of any kidney problems. I say that out loud with the patient. Time burden gone, chart review burden gone, cognitive burden gone. That's what I've personally found in using it. That's also great. Thank you. I think we'll move on to the next question. Yeah? Thank you. Yeah, no. First of all, thank you for holding this symposium. It's really timely and important. Maybe as an adjunct to the comments we just heard: I think that the AI scribes are really more unidirectional. They're basically taking the input that you're giving and formatting it into a note. I'm interested more in bidirectionality.
So we could go in, for example, and say, write a physician supervision face-to-face note for today, and it would go grab the functional data and format it to show that the patient is reflecting progress toward goals, et cetera. But with that said, Dr. Kaplan, you seem to be reluctant to endorse using AI for clinicians' notes in general. Is that . . . Did I hear you kind of intimate that it's because clinicians may not take the time to edit those notes to make sure that the proper content, and not crazy stuff, is in there? Yeah. I'll answer that wearing two hats, right? One is the hat of a utilization reviewer, and two, as an expert in legal cases. I can tell you, in both cases, almost without fail, when I see charts that were generated by AI, the likelihood of success goes way down, right? If I'm doing utilization review, rarely do I get the focused point of why they're appealing. Like I said before, what's the exception? Why are you appealing this? I usually get the first-line argument rather than the appeal, the exception argument. If it's a legal case that I'm looking at, I can almost never find a chart that doesn't have something in it that impeaches the case. The problem is this. Even if everything you want to achieve in the note is there, if your note says that the woman has a normal prostate exam, the credibility is gone. It doesn't matter what else is there, because then the argument goes, well, how do you know this part was right? How do you know that part was right? I think it just completely destroys the chart. So the question comes down to, where's the benefit? If it's for yourself, you probably remember the patient already. If it's for billing, okay, maybe it works for that. But if it's actually to help your colleagues and to protect yourself legally, I don't see where the benefit is. Yeah, I'll just make one more comment in that regard. By the way, I'm Joe Still, CMIO for Encompass Health. We have 167 hospitals, so you can imagine the number of physicians. But even still, we see a lot of our docs who, because they're being advised by their academic partners, put in a disclaimer saying, this note was generated using Dragon voice-to-text technology, I'm not responsible for the content, grammar, blah, blah, syntax. I've seen that one before. Right, and now it's probably going to be the same thing for AI: this note was generated using generative AI, I'm not responsible. But it doesn't help you. But that doesn't help you. It gives the opposing attorney an orgasm. It's the greatest thing. I mean it. It's the greatest thing there is, because then they cross-examine the guy: so you mean you're not responsible for your note, Doctor? You mean, Dr. Smith, you didn't bother to proofread your note? Dr. Smith, so it means we don't need to . . . Exactly. It's completely useless, right? And I've seen that all the time. Yeah, I would agree. Speaking for another organization, that's always the advice from risk management: do not put that in there. Yeah, I agree with all that. There are a lot more questions online, and we can probably stay after for a little bit, but being cognizant of everyone's time, it is the end of our session. Feel free to reach out to at least me, and I think any of us, with any questions. We're probably going to put a post on PhyzForum in the next little bit if people want to continue this conversation. But yeah, our information is on the app and all of that. Please reach out. Thank you all for coming. Have a great rest of your day.
Video Summary
The panel discussion focused on the application of artificial intelligence (AI) in clinical, administrative, and educational settings. Evan Sheldon, a clinical assistant professor, introduced AI as a tool that presents both opportunities and challenges across fields. AI has rapidly become an integral part of society, raising questions about how it might change job functions, particularly within healthcare.

Richard Kaplan emphasized AI's utility in searching medical literature and summarizing patient records, while cautioning against using AI for clinical charting because of its potential for errors. He shared tools and approaches for integrating AI into medical practice and research, emphasizing that AI should supplement rather than replace human analysis.

Mike Salino discussed using AI for administrative tasks, sharing a personal experience of implementing a policy to mitigate clinician burnout. He highlighted AI's role in assisting with predictive modeling and decision-making, stressing the importance of human oversight alongside AI.

Jen Zumsteg discussed AI's potential in educational contexts, noting both opportunities and risks. She pointed out that AI could assist in program administration and application processes by generating summaries and templates, and highlighted the importance of confidentiality and data privacy.

The discussion included real-life experiences with AI tools such as ambient scribe technologies like DAX, reflecting mixed results on their effectiveness and accuracy. Participants emphasized the need to understand institutional policies on AI use and data privacy, and the importance of maintaining rigorous human oversight to complement AI's capabilities.
Keywords
artificial intelligence
clinical settings
healthcare
medical literature
patient records
predictive modeling
decision-making
educational contexts
data privacy
ambient scribe technologies
institutional policies