The Secrets to Making Data Your Friend: How AAPM&R Registry Participants are Using Their Data to Improve Care for Stroke and Low Back Pain Patients
Video Transcription
Well, I want to welcome everybody for sticking with it and surviving towards one of the last sessions. Yesterday I was in a session on stroke and they started talking about what we have to do to show our outcomes and the importance of data. And I like almost jumped out of my seat in excitement because I wanted to say, yeah, we've got the idea for it. So we're going to talk a little bit today about the registry. This is the secrets to making data your friend. And I really do believe that data can be our friend. What we want to accomplish today, we want to talk a little bit, get you to better understand the workflow challenges from early adopter sites in capturing common physiatric data elements for low back pain and the ischemic stroke. Discuss a little bit the needs and implementation of patient reported outcomes and physiatry because really our patient reported outcomes is what's kind of unique to this registry. And we want to explore how aggregate data or big data from the registry can be used to advance our position in local and national quality improvement initiatives. We've got a very good faculty here today who will all be speaking. First I'm going to give you a little bit of overview of the registry and then Dr. Fine will give us the NYU experience, Dr. Harvey will give you the Shirley Ryan Ability Lab experience and as an alumnus of RIC, it kind of hurts me to say Shirley Ryan, but everything changes. And then Dr. Kennedy will talk about the Vanderbilt experience. Now I think our current president, all of about two hours, Dr. Flanagan said it very well, as healthcare is changing it becomes important to show our value through evidence. Evidence that's real, collected across the country and encompasses a host of conditions, that's what the AAPM and our registry can help us do. We don't have a lot of ability to collect good data. So we need to be able to do that. What's the challenge? Well, creating an effective care continuum is what we all want to do. And what we see is rehabilitation, what we do, physiatry, is often consistently devalued. There's little national aggregate data or research available. The care continuum sometimes is inefficient, ineffective. We don't have, you know, we all know what we do. We do inpatient therapies, strokes get better, but can we really say that always? So there's unclear outcomes which can stress us, poor clinicians, and we can't always prove our results. So what's the solution? The solution, we think, is the registry, and we're going to talk a little bit about it. So the registry is a single repository of data that will track real-world care nationally with the goal to define rehabilitation practice, move rehabilitation as a specialty forward, manage patient populations, and give us a benchmark, what's considered standard, what's good quality, and help us with the ultimate goal that we all want to achieve of improving patient outcomes. Your registry steering governance, we have a steering committee which is chaired by Dr. Hazakis. Dr. John Lesher chairs the spine subcommittee. I chair the neurorehabilitation subcommittee. And then we have Dr. Fine, Dr. Gordon, Dr. Wong, and Dr. Sliwa also on the committee. We're very fortunate. We have about 10 organizations who are now participating in the registry. Not all of these organizations are currently transmitting data yet. Some of them have gone through the contractual process and are getting ready and putting the infrastructure together to submit data. 
Right now we have three organizations, Shirley Ryan, Vanderbilt, and NYU, who are submitting their data. And that's why we're going to give you some examples of what the early data is showing. But as you can see, we have other providers, Illini, Brooks, Lifespan, Rehabilitation Institute St. Louis, Shepherd, and so on. So how does the registry work? There are two sources of data. And that's really what's making our registry a little bit more unique. We have data that's automatically fed into the registry through the EMRs. And then we have patient-reported outcomes. And since our goal is really how is the patient doing, having them contribute their own patient-reported outcome is so essential to really judge whether we were successful or not. So we're using the PROMIS-29. It's a global tool used across clinical diagnoses. It's a complete assessment, comprising 29 questions over eight domains. There was a very recent study that sort of showed PROMIS was a good tool in our rehabilitation world. As you can see, on the whole, given that obtaining patient-reported outcome measures is no longer strictly optional for most clinicians, barriers to PROMIS implementation largely seem to be outweighed by the extensive benefits. The ability to choose domains that fit the patient population, the ease of score interpretation, and the ability to track changes over time all make the PROMIS a very good tool for us as PM&R physicians. So if we look at what the process is, we have EMR data going directly into the registry. We have the patient-reported data going into the registry, which together develop a very rich patient-centered outcome data set. And then at different intervals, we have the data feed in again, at different stages in the patient's care. So PROMIS, as I said, has eight domains: physical function, anxiety, depression, fatigue, sleep disturbance, ability to participate in social roles and activities, pain interference, and pain intensity. In addition to the PROMIS, some of the other factors we're looking at, you can see: does the patient return to work? Are they taking blood thinners? Different complications, patient satisfaction, readmission, medication adherence, recreational drugs. In the ischemic stroke population, we're also looking at alcohol use. And in low back pain during this early stage, we're looking at exclusion criteria, including prior surgeries, cancer diagnosis, and workers' compensation, factors that could potentially skew the data. So how is the data collected? For low back pain, at the initial visit, the patient will complete a survey. If for some reason they don't complete that survey at that initial visit, they will be sent a link during the first week to complete the survey. And then there will be follow-up surveys for the patient-reported outcome data sent to the patient via email at 6 weeks, 3 months, 6 months, and 12 months. For ischemic stroke, it's slightly different, because we're going to be collecting the preliminary data during the rehabilitation hospitalization. So the patient will complete the survey at the time of the discharge process from inpatient rehabilitation. If for some reason that doesn't happen, they will be sent a link in the first week. And then they'll be sent links via email at 60 days, 90 days, and 6 months. So let's look a little bit at the dashboards and how reports are set up. We can look at the data in aggregate, as the whole population across all the different participating facilities.
We can look at the data per facility, or we can hone down to the individual patient level. Just some of the things we can look at, unique patients by race, by sex, by age, to kind of give us an idea of what the population, are you treating the same population that the whole group is as a whole, or do you have outliers for some reason? We can look at it by encounters per patient. As you can see, the number of encounters here, the most is one encounter. And part of that is because we're early in the process, so a lot of the facilities, the patients haven't had their subsequent follow-up appointments yet. But as you can see on the far right side, we have a lot of different filters we can set up. And so we could look at, you know, if we wanted to look at all of the 40- to 50-year-old patients who had three encounters, how did they do? You know, we can filter out down to a lot of different data. So our low back diagnosis, what we're seeing, the number one diagnosis so far has been lumbar radiculopathy. And you can see reading down the list, followed by spondylosis without myelopathy or radiculopathy, low back pain, and so forth. And again, we can filter each one of the diagnosis to a different particular type of subpopulation. The number one treatment has been injections, but we can break it down by what injection for what population. Same for ischemic stroke. All of the ischemic stroke we're looking at, all the codes are different types of ischemic strokes. As you can see, the unspecified cerebral infarction has been the number one diagnosis so far. So what are some examples of things we can look at? And you have to understand that right now, we are in the very early stages of data collection. So our numbers are not what we would call big data yet. That's the goal. That's where we want to get to. So some of the interpretation, because the N is so small, doesn't really make sense yet. But on these graphs, if you can see, the top, the green line is this is low back pain data. Patients that have returned to full-time work is the green data. The blue is patients that are unable to work. And what kind of intuitively obvious, I think, the patients that had a better outcome score are the ones that we're seeing went back to work. I think we kind of expect that and hope that. And same better social roles and activity, they're having higher scores, they go back to work. But we would ultimately hope to be able to sort of hone in that data if we're seeing trends where they're not. It's not the way we would expect, and we can analyze why and what treatment. Was it epidurals worked better than conservative treatment and really kind of fine-tune the data. Similarly, if you notice, the lines are a little reversed on this graph. With pain intensity, pain interference, the blue graph, or the unable to return to work, is showing higher pain. Again, a little bit intuitive what we would think. But ideally, we'd love to see that our intervention made the pain intensity go down more. That's what we'd hope to see. And again, as we get more data, we're going to hopefully be able to show exactly what treatments are of benefit. And with that, for the initial overview, I'm going to turn it over to Dr. Fine to give us a little bit of their particular individual data. Okay. Thank you. Okay. So, hi. I'm Jeff Fine from NYU. I'm the medical director of our inpatient rehab unit in Brooklyn, a 30-bed neuro-based rehab unit. Our primary admission diagnosis is stroke. We have hemorrhagic stroke and ischemic stroke. 
We have traumatic brain injury. We're a level one trauma center. And we're one of the few Joint Commission certified stroke rehabilitation units in the Northeast. So, we do a lot of data collection around our stroke patients already, and we're quite interested then in participating in the registry to complement the work that we're already doing. NYU already is a very data-driven organization. Our dean, Grossman, really transformed the university with providing a wealth of data around patient safety and quality that help our service chiefs manage risk and our patient safety score is number one in the country. Along with that, we use data regularly to manage our departments, and we feel the registry is important in that way that it helps us to compare our performance of patients with stroke against another national data set. The only other data set that we have is we use UDSMR as our data vendor for our inpatient rehab unit, and we match our metrics against the current tools that we use in UDSMR. With our implementation of the registry, we elected to use all 92 elements of the data library. There are many elements that are identified in EPIC, which is our EMR, that we pull into our registry. And we've sort of, as we've grown and accomplished that first part of the task of identifying the addresses for all the locations for each of the items, we've had to adapt our workflow in terms of what we do and how is it that we identify those patients that are strictly ischemic stroke. We have an email process where each day we update our patient census where we send out to the entire inpatient rehab team who are the patients that have the diagnosis of stroke, what date they were admitted, whether it's ischemic or hemorrhagic, because our joint commission certification requires that we have to implement clinical practice guidelines that are based on either ischemic stroke or hemorrhagic stroke. So we already had a process by which we were notifying the team of patients that were on the service with ischemic stroke. We identified then after our first team meeting the discharge date, and we created an email thread that is routed to a subset of providers on the clinical team as a tickler to remind them that we need to complete the survey in preparation for discharge. And we built that process after we discovered that when we, the design of the registry is such that it can be fully turnkey, meaning that the clinician conceptually has no additional workload to complete the survey. There's the opportunity for the patient to fully complete all the promised surveys at the specified intervals through an email reminder and a separate secure login to the Arbor Metrics website. But our experience when we turned it on was that patients were not motivated or trusting or thinking that it was important to log on to the website, so we had a very low response rate of patients doing it independently. So we had to then sort of reconfigure our workflow and create a process by which we had then providers assigned to complete the survey and have effective notifications that we would do it in a timely way. So we elected to have the surveys completed within three days of the discharge. So if a patient's going home on a Monday, we have a process of discharge by 10, so everything is done and prepped for the discharge in anticipation of the discharge. So really the discharge day is consolidating the after-visit summary, reviewing medication education, reviewing equipment that's ordered for the home and home care services. 
The day of discharge is not a time to insert a PROMIS survey, so we're trying to do it before: the day before for a weekday discharge, and the Friday before if it's a Monday discharge. I'll show you some workflow edits that we made in Epic that help make it more consistent, so that we know the dates and times that patients are going to be completing the follow-up surveys. So when we started, and we do a lot of data collection, I'll show you a slide that shows the other tools that we're using for a patient population with stroke. And oftentimes the way we're administering, say it's the MoCA or the stroke-specific quality of life survey, we're oftentimes doing it with a paper version at the bedside. It's still convenient, and you don't have to have a technology interface to do that, and then the provider takes that piece of paper and codes it back into the record. That's how we do it for most of our surveys. Since patients were not completing the survey, we created a dot phrase in Epic that was the PROMIS and a second dot phrase that has the other accessory questions as well, and it exists in Epic. So this now becomes a way where it's part of the medical record. We don't have to scan it in, so after the survey is completed on paper, the PA will then complete this care coordination note in Epic, so we now have a digital copy of the tool. And then from there it can be retrieved to enter into ArborMetrics. We also created, just with a calendar, nothing super complicated, a projection of when the follow-up surveys would be completed, and we make it part of their discharge follow-up navigator. So in Epic, as you're making appointments for follow-up clinicians, you can make other appointments for things like a telephone survey. So here, right in the discharge navigator, and printed on their after-visit summary, and printed in the discharge summary that would go to the next provider of care, are the dates and times that the surveys will be completed. Just so that we wouldn't run into the circumstance that we're in right now, where we don't have a lot of surveys completed, we've elected to schedule them as telephone encounters where the PA would call the patient at the specified time. So even though we have a high activation of MyChart, we have over 80% of patients that have turned on the patient portal to access their medical record through a quality improvement initiative that we had done in the previous year. We leverage that to communicate with patients, but we don't yet have a method by which they can complete the PROMIS through an Epic questionnaire, although we're building it. So this is our workflow. The workflow is as described: the PA sits with the patient and completes the survey within three days of the discharge, and we complete the subsequent surveys on follow-up calls. Part of the design of us doing it with a specific phone call that's not necessarily linked to an outpatient encounter is that it gives the provider the time to have this data entered into the registry and then available to use with the patient at the time of the encounter, without consuming additional time on the day of the encounter. Not that that's not a workflow that could work, but several of our providers were concerned about the additional time that was necessary to complete the encounter.
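To make that calendar projection concrete, here is a minimal sketch, assuming the follow-up intervals quoted earlier in the session (60 days, 90 days, and 6 months for ischemic stroke; 6 weeks, 3, 6, and 12 months for low back pain). The schedule table and function name are illustrative only, not NYU's actual Epic build.

```python
from datetime import date, timedelta

# Follow-up intervals quoted in the session. Months are approximated as
# 30-day blocks purely for illustration; a real build would use calendar
# months and the site's own scheduling rules.
FOLLOW_UP_SCHEDULES = {
    "ischemic_stroke": [timedelta(days=60), timedelta(days=90), timedelta(days=180)],
    "low_back_pain": [
        timedelta(weeks=6),
        timedelta(days=90),
        timedelta(days=180),
        timedelta(days=365),
    ],
}

def project_follow_up_dates(anchor_date: date, track: str) -> list[date]:
    """Project PRO follow-up contact dates from the anchor date
    (discharge for stroke, initial office visit for low back pain)."""
    return [anchor_date + offset for offset in FOLLOW_UP_SCHEDULES[track]]

# Example: a stroke discharge yields three telephone-encounter dates that
# could be printed on the after-visit summary, as described in the talk.
for due in project_follow_up_dates(date(2021, 10, 4), "ischemic_stroke"):
    print(due.isoformat())
```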
As most of us know, the administrators give us 15 minutes for a follow-up encounter, and 15 minutes oftentimes is not enough to do everything you want to do and, you know, chat with the patient, let them know you care about them, and do all the clinical work and the documentation and the orders. Fifteen minutes is quite short, so we tried to uncouple the completion of the survey from the time that was necessary in the appointment by doing it with the telephone encounter. So as we kind of implemented and sort of talked, we have three different rehab hospitals with different clinical focuses, different providers, different staffing ratios, different cultures in a way, even though it's all NYU. We have to play in the same sandbox. And so just as we have all those dashboards that I told you about before that help us to manage departments in a universal way across all campuses, as we kind of were implementing, these were some of the conversations we were having among providers about the things that could facilitate initiating that the registry and the participation and some potential barriers to implementing. Some of it was necessary to educate our providers about what PROMIS was, what is it as a tool and why would we want to use it. And I have another slide that will show you the value add of PROMIS. Concerns about survey fatigue, because we have Press Ganey, we were using IT health tracks to call patients about their patient experience as a complement to Press Ganey because Press Ganey never released their rehab version, the rehab module of their survey never came out, so we used another survey. We had a third survey that had to do with calling the patient about their NYU experience and then a fourth encounter with an NP to do a telemedicine visit for a TCM encounter. So there was a concern that another survey on top of that would just be too much and patients would get dissatisfied like a telemarketer calling them too many times. As I stated, patients were not independent, it wasn't as turnkey as we thought. Patients were not completing the survey in spite of us telling them about the survey, informing them they would get an email, it still either went into their junk folder or they just didn't respond to the survey. Patients having some reluctance about entering their personal data in a non-NYU website. The patients that have the portal turned on, we have a process of sort of maturing them to be comfortable to use the portal as a way to communicate with us. So at one of our other campuses they had a quality improvement project where the patient in preparation for the discharge, the attending sends them a message through Epic Chat to inform them that there's a way for them to communicate directly back to them and the patient in order to sort of graduate from rehab has to send the attending back a message. So now we know that they know how to use that link. So we've implemented that across all campuses so patients are comfortable communicating that way because they recognize that it's in their medical record but they had a little reluctance that it was outside of the medical record in the sense that they were entering data. We talked about the clinician's concern about the time that it takes to complete the survey and the visit and the perception at least among providers at my facility where we have the joint commission certification for stroke that we already were doing a quite robust evaluation of patients with stroke longitudinally and did we really need another tool to assess. 
This is our value-added sort of pitch for why PROMIS was useful. It's specifically designed by NIH as a tool that can crosswalk across many different populations of patients, and it's validated by many studies in the stroke population. And to us, regardless of whether it overlapped with some of the tools we were already using, we wanted to participate in the registry. So even if it was some duplication of assessing patients' anxiety and depression, fatigue, and social roles, we still thought it was an important contribution to participate. So this is part of our Joint Commission certification dashboard. It's longer, it's probably about twice as long in terms of the items, but the point of it is that for our stroke population, these are all the tools that we're using to follow the metrics. So we were as disappointed as everyone else when the FIM got retired, because there was so much linkage to the FIM, and the Barthel before that, that allowed us to do lots of quality management and analyze our population, whereas the new QI is still too new to really say anything longitudinally about how patients are doing. But we keep it as part of our dashboard. We use the STREAM for upper and lower extremity motor function; it's quite a useful tool. We use the Cognitive Linguistic Quick Test as our cognitive assessment, as well as the MoCA. We use the Mayo-Portland Adaptability Inventory to assess for coping and some elements of social performance, and then we do the stroke-specific quality of life measure, so you see them all listed there. So we're tracking this for every patient every month. We complete some of the tools during the acute phase, when the patients are admitted to the neurology service, we do it on the consultation side, so we have an acute measure, we have a rehab discharge measure, and then we have follow-up measures that are done in the outpatient setting. So we study these patients and we're monitoring them, but we still think the PROMIS adds value. In addition to the demographic matching, the PROMIS adds an additional layer that can help us sort of dissect that data in a different way. I think I said most of what's on this slide already. The other value add is that we felt that having the opportunity to show patients their performance in relation to other patients would further enhance their motivation to continue to participate effectively in therapies and continue to do their reps at home. And really, for the stroke patients, we do a stroke notebook as well, where we document their performance daily for them in their language, not in our speak. And patients really appreciate that, because they can look back through their journey on rehab for two weeks, two and a half weeks, three weeks, to see how they've progressed, because sometimes patients are just so focused on wanting to be back to normal. They want it to never have happened, and they can't fully acknowledge how impaired they were when they got to rehab. We take them pretty early; our onset time is about two days shorter than the average in UDSMR, and we feel that's a useful way to help keep them motivated each day. And we figure this is another tool that we could use in the outpatient setting that would continue to show them that arc of recovery, their personal story matched against other patients that are similar to them in terms of their stroke distribution or how they performed, not just on PROMIS but on the other tools, when we match them on the inpatient unit.
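As a rough illustration of that "arc of recovery" idea, the sketch below tabulates one patient's PROMIS T-scores across assessment points against the instrument's population reference of 50 mentioned later in the session. The scores, domain names, and timepoints are invented purely for illustration; they are not registry data.

```python
# Hypothetical PROMIS T-scores for one patient at successive assessment points.
# PROMIS T-scores are normed so that 50 represents the reference-population mean.
REFERENCE_MEAN = 50

patient_scores = {
    "physical_function": {"rehab_discharge": 32, "60_day": 38, "90_day": 43},
    "pain_interference": {"rehab_discharge": 63, "60_day": 58, "90_day": 54},
}

def arc_of_recovery(scores: dict[str, dict[str, int]]) -> None:
    """Print each domain's trajectory and its gap from the population reference."""
    for domain, by_timepoint in scores.items():
        trajectory = " -> ".join(
            f"{timepoint}: {score} ({score - REFERENCE_MEAN:+d} vs. norm)"
            for timepoint, score in by_timepoint.items()
        )
        print(f"{domain}: {trajectory}")

arc_of_recovery(patient_scores)
```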
A request from one of our providers that has come up in our registry meetings was that if we could build the PROMIS tool and the administrative questions into flow sheets, then conceptually that would be another discoverable address that our Epic analysts could sort of program and just drive right into the registry, taking out one more step. And that's not built yet, and we don't have a contractual obligation from ArborMetrics to do that yet. But the providers are looking to make it more turnkey than what it currently is, because there is a significant front load of provider assignment, a clinician, a therapist, a nurse, having to code the instrument with the patient in our current model. And as we get more mature in our workflow, our logistics will get better and we'll get a better idea of how to do it with more efficiency and with more patients completing the survey. But from a data standpoint, if we can pull in meds, if we can pull in diagnosis codes, we should be able to pull in PROMIS if it exists in the flow sheet in Epic. So here's our data. Since we turned on the registry, we have 83 patients that are entered in the registry, all ischemic stroke. And you see we have a fair population of patients that are young with stroke, 31% in the 50 to 64 range, which I consider to be young. I'm in that age range, so I'm going to call it young. But there are ways to analyze this data in multiple ways. Our graphs of the PROMIS surveys, the patient-reported surveys, as I said, we don't have a high completion rate. So we had to design a different workflow, with a provider now assigned to do the PROMIS with the patient. So I expect that when I get back next week, we'll have a workflow where we will then capture every patient as we continue to go forward with the data entry. Since the numbers are so low, the N is really very low, the curves aren't meaningful yet, as Alan said. Not just for our site, but for the registry as a whole. We need to have many, many patients in there to start to look at trends, because right now the data set is so small that when you're comparing yourself to one other facility and you have all the patients, it's really not a national registry yet. But the more that we have facilities recognizing the value of where this will go in the future, as an academy-owned and managed registry that we own, that we will continue to refine and resolve and likely add to as time goes on, then it becomes a real tool for every member of the academy to use to guide their patient care. So I really think it's the early days of the registry, but it's a tool that I think will be very valuable as we go forward in time. And then maybe a foreshadow: something that came up at one of our past meetings was that there was a thought that maybe we should make a third arm of the registry that had to do with COVID. Now, I don't know if I'm sort of announcing that too early, but I thought it was a great idea, and you know, we have to kind of get this one off the ground. But I do think that's really another next step in terms of what we're managing, to leverage all that work that we're doing through the collaborative in a way that provides a data set. Because as a specialty, we are leading the way with the management of those patients. We should start to think about ways to do data collection on them as well. Okay. Thank you. Hi, everyone. Richard Harvey from the Shirley Ryan Ability Lab. We are a freestanding hospital.
We're academically affiliated with Northwestern University and the Medical Center. We're right next door. But Northwestern University uses Epic as does NYU. We have a separate electronic hospital record, which is Cerner. So we are truly a freestanding center with a freestanding electronic record. So our experience may be a little different than a facility that's linked into an entire multidisciplinary medical center. We are also participating in both the ischemic stroke and the low back pain data collection. So the way we implemented was, of course, a lot of back behind the scenes work initially to collect the data from our electronic hospital record. We established a connection of our EHR to the registry. And we then, because we're collecting patient report outcomes, we match the PROs to the collected patient data from the EHR. And after that match is done, every night, the data is then pushed up to the registry. A lot of the upfront work that we had to do to collect patient report data, and again, this is for ischemic stroke at this point, was concerning to me as a stroke specialist. I've taken care of patients. I've done inpatient care my entire career. And the thing that occurred to me right up front is these patients have stroke. Almost half of them have some sort of language deficit. The other half have cognitive deficits. How am I going to get valid patient report data from these people who have difficulty communicating and may not have the cognitive function to even understand the questions well enough or be able to contemplate valid answers to those questions? So being a physiatrist, I put together an interdisciplinary team. And so we had physicians. We had nurses. And then I brought in speech language pathology because I knew they would be critically helpful in helping us figure out how to collect the data the best we could. So the goal was to collect the PROs completely as possible and include patients who have cognitive and communicative deficits. Because bottom line is, it doesn't matter if you have language or cognitive function. You need to be heard. And we need to know how you're doing. And we need to have that data as part of a larger registry. As a team, we created a flow diagram to assure that we could collect this data. We decided as a group that the nurses, the charge nurses, which at Shirley Ryan Ability Lab we call nurse coordinators, would identify ischemic stroke patients who are discharging that week, at the beginning of the week, so that they could tag them for PRO collection. Because of the challenge, we wanted to collect the data directly into the electronic, into the registry. But because of a lot of logistic issues around that, we decided to be more efficient up front. We should put it on paper. So the nurses collect the data, the PRO, on paper from the patient. And then they enter it in batch forms. They have their stack of papers. And they enter it in batch form into the registry database. If the patient is identified as having language or cognitive deficits, the speech pathologists were really supportive of the idea that the nurse could, with them, collect the data. And the problem is, of course, that the speech therapists have to see their patients provide therapy. But we had a very good discussion. And we did conclude that, if the speech therapist is working with the patient to collect data, using supportive conversation, that that is, in and of itself, therapy. And what could be billed as therapy time. 
Because they're practicing that communication skill. So the nurse and the speech therapist would schedule a time during a therapy session with speech therapy to collect the PRO as part of that therapy session. Also, in order to communicate with the physician, the attending physician, the paper copy of the PRO is also, after being entered into registry, is scanned and uploaded to the electronic record. So this is our flow sheet. The patients with stroke are admitted to the Shirley Ryan Ability Lab. Of course, we admit hemorrhagic and ischemic stroke. This is just for ischemic stroke. So we are using the ICD-10 code that is in the system assigned by the attending physician as the trigger to identify ischemic stroke patients. There was a bit of conversation that I had to have with my physician staff to assure that they actually put in a ICD-10 code for ischemic stroke. So I really emphasized I63. Put in an I63. Now, what I didn't emphasize so much, at least initially, is to narrow that down to a very specific I63 code, which is why, at least right now, you're seeing a lot of unspecified stroke, ischemic stroke. So ultimately, if the patient has an ischemic stroke, then they're entered into the registry. The nurses then identify which stroke patients are being discharged in the next seven days. At that point, the nurse contacts the physician, if necessary, if they know that there is an ischemic stroke patient who did not show up in the database, probably because they don't have proper coding. So hopefully, we can catch those. And then if, indeed, the patient is appropriate, the physician puts in the ICD-10 code, and then we enter them into the registry as well. Then the RN will schedule a time to collect the PRO. If the patient has cog-com deficits, they will schedule that with a speech language pathologist at an appropriate time during therapy. Once that data is collected, the RN enters the data into the registry and sends the PDF to the medical records, which is then uploaded into the medical records for access by the attending physician. The attending physician gets an alerted message that PRO has been scanned into the chart, and therefore, they can access it and review it. And so at that point, the patient's enrolled in the registry. They have their initial PRO data collected. And then, as you heard in the previous talk, they will get follow-up emails after discharge for a collection of subsequent data. So our motivation to participate in this whole system was really around the opportunity to participate in a national registry, because we, as an institution, feel it's important. And I think that's what we're here to talk about, is that this is important for us as a field. And we wanted to contribute to that. We also wanted to access the outcomes of our patients, including patient-reported outcomes, because, again, as has been mentioned already, how that patient subjectively feels they're doing is really important for our care and for the ultimate outcome of the patient. We wanted to benchmark our outcomes with other centers. And we wanted to increase our power to justify the clinical impact of IRFs for the care of stroke patients. We also wanted to utilize this data for creating quality outcomes for long-term rehabilitation. And then I also consider this, although it's not designed for research, I consider that once we have a robust registry, it can potentially be used for research purposes. A lot of challenges along the way. 
One challenge right off the top is that no institution is going to add time to somebody's, you know, hire anyone to help collect PRO. Nobody's going to add FTEs. So how do you collect this data consistently in the context of our normal workdays that we have to do? And we're asking people to do it. One key thing to that, though, is institutional support. So if you want to be in the registry, from the top down, from the CEO and the CMO down, it needs to be clearly articulated that this is important for our institution. We also wanted to make sure that we put our best effort in to collect the data from patients with cognitive and communicative deficits, because that is a real challenge given this population. Also determining how much family can assist with entry PRO. This became a big question. So if the patient has a hard time being able to express their feelings, but the family seems to understand, well, I know how my spouse feels. Well, is that true? And in some cases, they may have a very good understanding. In some cases, it may be more about them than it is about their spouse. So this is a challenge, and I think we still need to work this out. Again, utilizing speech language pathologist time. How does that work into therapy? Is it billable time for them? The other is getting buy-in from the patient and family. I think a lot of that was brought up. This is important for you. You need to participate. We really value your input. Can you follow through on this? And then also getting our physicians, who are really very helpful in promoting this, to actually promote it. Hey, docs, can you actually tell your patient and family that this is important? We want you to participate. So here's some of our data. This data right here includes, because we're participating in both, this data shows both the low back pain and the ischemic stroke. So being the stroke guy at our center, when I look at this, I say, this is not the demographics of my patients. But again, look, there are more low back pain patients than there are ischemic stroke patients in this. Because of our effort to work with speech-language pathology, have nursing get in there, really put an effort in, we've actually had really good baseline collection. We've gotten up to a third of our patients collected. We could do better, I think. As you can see, the subsequent data collection drops off, again, for some of the reasons that have already been mentioned. And because we are not necessarily contacting them personally to make sure the data gets collected, that may be something we add later on. And again, if I have aphasia and I get an email, am I even gonna be able to read it well enough? Is my family helping me with this? So these are things that we have to think about. So as has been mentioned, this database is small. We've collected a substantial amount to contribute to it, but because it's small and not all the centers are contributing, our data pretty much reflects what the national data is because we're such a big part of it. But I just wanna point out that when you look at things like fatigue, sleep disturbance, pain interference, pain intensity, if your patients on this scale are doing better than the national, then as a center, you can say, what is it that we're doing that's making us successful? What are we doing? What program is it that we're doing that are contributing to these outcomes? And then perhaps even share that with others in the future. If we're not doing as well, opportunity for quality improvement. 
Somehow we're missing the boat here. Our patients are more anxious. Our patients have more pain. What can we do to improve that and get back up to a level that is consistent with the nation? So it's a very good way to gauge your own performance. So I'm gonna go ahead and move right into low back pain. I am not personally part of the low back pain collection because I don't do that, but I have the data. It's my center, so I'm going to represent low back pain here. So in terms of our implementation, the baseline surveys, we ideally want to collect them at the time of the first office visit. And what we have available to our low back pain patients coming into outpatient clinic is an iPad that they can enter the data into. That data is then reviewed, printed, and added to the chart that the physician who will see them will have access to. The surveys then match back on the back end with the patient matching in the registry. And then it also is nightly pushed to the registry with the patient's ICD-10 diagnosis that fits with low back pain. We have a flow diagram for low back pain as well. A new low back pain patient comes in, and they have an opportunity to agree or not agree to participate. If they do agree, an iPad is provided to them. Once they complete the survey on the iPad, it's printed by the office staff and placed in a folder for the clinician's review. The physician reviews the survey that is provided to them. And the printed survey is placed in a bin for medical records to scan into our EMR. And the attending is then alerted via a message that the completed patient-reported outcome is now scanned into the electronic record so that they can access it. The physician also enters an appropriate ICD-10 code. This is going to be done after the PRO is collected, because that's the only way that they can access it. So, challenges on the low back pain side: we've had a low enrollment rate. I'm not so sure exactly why that is on the outpatient side. I think it has a lot to do with just the organization of our front staff; some of the outpatient clinics are very busy, and sometimes these kinds of things get overlooked. The enrollment does take time at the time of registration and can affect the flow of the clinic. The survey does take about 10 minutes to complete. Also, printing the survey results takes extra time, it takes staff time, and if the staff is busy up front, that may not happen. So we're also trying to educate our clinicians and encourage them to ask the patients to participate in the registry. So as you can see, the data collection has not been as robust as we'd like to see, so there's lots of room to grow here. But again, we've just really gotten started with this, and hopefully we'll work out a lot of the kinks and get some better rates of data collection. And again, you know, not too many sites are contributing, so the data looks pretty much like it does at a national level. But there's some encouraging data here showing that we are indeed. And with that, I'll turn it over to Dr. Kennedy to talk about Vanderbilt University. Good to see a room full on a Saturday afternoon, a beautiful day, so it does warm my heart, especially talking registries and big data. So I'll be talking about the Vanderbilt University Medical Center. I don't have any Ohio State people in here, maybe. So, as a side note, I am filling in for Dr. Schneider, who is a single parent. So if you saw on your materials that Dr. Schneider is doing it, I'm filling in.
Therefore, if there are any tough questions, I'm gonna plead ignorance, because he is the director of it and knows many more things. So, how is the registry implemented at your site? The big lift here, I can tell you the background. The big lift was, at a major academic medical center, the political lift, right? That was the big lift. At a place like Vanderbilt, it's easy to play on egos. All I had to do was go and tell them that they asked us and we're doing it with some of the top centers in the nation, and that got the dean and CEO excited about it. If I would have said anything else, they probably would have said no. That's different than other centers and what's driving them in terms of these things. But I will tell you, it was fairly seamless. The academy staff made this a seamless integration where I was working with IT. I cannot speak from the IT side how seamless it was or how easy it was. From the physician side, it was fairly seamless. We made a lot of decisions that may affect our outcome and data and ability to actually measure things, but I think there's a really harsh chair there that was concerned about clinical efficiencies and time and everything else and really said, we want to start this with the least burden possible on the physicians. So, we talked about 10 minutes a chart. I can't have a physician, when they're doing 15-minute follow-ups, spending this time doing these things. This needed to be seamless from there and back, completely seamless. And we'll get to that, because that causes challenges. So, has it changed our workflow? No. It was really a seamless transition to go live. We did not even notice it at all. I think that's pretty universal. I think the onboarding was, what, a 30-minute session or something along that line. It was very, very seamless to know what was happening. So, we did a patient record. I will note there was one other really key reason I did it, and I was just reminded of this. I did this to be better than Stanford, and Dr. Levin's in the back corner. So, I did that just to show we had some data there, but they're too scared to get in the registry to compete with us. So, just to say. So, we have some basic criteria: age 18 and up, and a back and leg pain diagnosis. This is a spine center, right? This is not a chronic pain clinic. This is truly a spine center where people are coming in with disc herniations, stenosis; there's an overlap there. But for a new patient, a patient email has to be present, and there has to be a way to trigger it, because there is no FTE equivalent here. This is all automatically done. So, then it will trigger and send an email to a patient at baseline enrollment. So, we see somebody in clinic, we put in an ICD-10 diagnostic code, they have an email, they're over 18, boom, they get an email going to them. Which again, easy on the doctor, but creates some back-end issues. So, we've got the follow-up, we get reminders, and then we review responses, similar to everybody else. So, again, workflow has not changed. Problems, and there are challenges to this. So, really, if we wanted to do this in clinic, where we wanted to have them complete it in clinic, we are tight on space. We have room turnover questions, I mean, they monitor how many rooms are turned over, room obligations, so even if I see a patient and say, you're appropriate for the registry, and it takes them 10 more minutes, that's not acceptable in our clinical scenario, because I might have two rooms for 14 patients in a morning.
So, I can't have them sitting in there; that's why it generates an email. We could have them do it in the waiting room before they came in, but again, these are new patients. So, in my back clinic, I see people with hip pathology. So, how do we tease that out? Like, now I'm enrolling them, and they might have hip osteoarthritis giving them buttock pain, and they're coming to see a spine specialist for back pain. So, you know, I think those are logistic challenges that we struggle with in a busy clinic with limited space and ability to do that. So, hence, the email goes out afterwards. That's why we get small response rates. So, on a physician level, right, there was really no convincing. This was easy. The chair came in and said it was gonna happen, and everyone agreed, because they said this is no problem, and everyone was excited about it because they actually get data on it. As institutional advocates in the field, we really felt that this was important. We felt this was important for a lot of reasons. It's truly an opportunity to measure ourselves. So, one thing that I will say, in a spine clinic, you know, in the back pain world, you will hear all kinds of opinions about how to do things and what I do, and it's based on your unique referral patterns. So, I'll hear people saying, well, the first thing you should do is cognitive behavioral therapy on everybody. I've heard that. I've heard that at this meeting. And I look at my patients and say, you know, my patients aren't really that anxious or depressed. And to go and treat depression that doesn't exist? So this lets me measure them. Maybe I'm wildly wrong. Maybe my patients are depressed and I have no idea. Maybe they aren't depressed and I just need to treat pain. So, I mean, it lets us measure ourselves, measure our effectiveness, measure our baselines, contribute to research. We are a research institution. We do this. I think it's important. And truly, the goal here is to truly incorporate this data into clinical assessment. It is not happening now because of the nature of it, you know, because we're not putting that ICD-10 diagnostic code in till the end of clinic, and then they're getting enrolled. How nice would it be if we actually had all these measures right upon showing up and every time through? Because that's what we want. Just like we measure pain score, we now have depression. We have all of these things measured. So our numbers, I'm proud to say, 1,500. It does drive a lot of the national stuff. You see our demographics. That is not a demographic equivalent to our clinic. It is not. You know, I look at those demographics fairly regularly. So these are the people that are getting emails and enrolling. So that's a diversity issue, clear and simple. We look at our patient-reported outcome completion rate. You see 12-month, we're not there yet. That's why we're at 0%. But we're hovering around 6, 7, 8%. Bad news: it's less than 10%, right? That's bad news. So a critic's gonna say, wow, you're gonna have 95% of people who aren't there. Good news: this is equivalent to Press Ganey. Now, do not get me started on Press Ganey and the mathematical invalidity of it and all the problems with it, but it is readily accepted, right? And in the absence of data, some data matters. And when you start to get big numbers on these things, and you start to get sites, and you start to get stuff, we can start to make a difference and you can buy this. So that's where we're going with this.
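A minimal sketch of the kind of automatic enrollment trigger Dr. Kennedy describes: a new-patient encounter with a qualifying back or leg pain ICD-10 code, an age of 18 or over, and an email address on file generates the baseline survey invitation with no added physician work. The diagnosis prefix list, field names, and example values are placeholders, not Vanderbilt's actual build.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative prefixes only; a real build would enumerate the exact ICD-10
# codes the spine center agreed to include (radiculopathy, spondylosis, etc.).
QUALIFYING_ICD10_PREFIXES = ("M47", "M51", "M54.1", "M54.4", "M54.5")

@dataclass
class Encounter:
    age: int
    icd10_codes: list[str]
    is_new_patient: bool
    email: Optional[str]

def should_trigger_baseline_survey(enc: Encounter) -> bool:
    """Apply the criteria described in the talk: adult, new patient, qualifying
    diagnosis, and an email address to send the baseline PROMIS link to."""
    has_qualifying_dx = any(
        code.startswith(QUALIFYING_ICD10_PREFIXES) for code in enc.icd10_codes
    )
    return enc.is_new_patient and enc.age >= 18 and bool(enc.email) and has_qualifying_dx

# Example: this encounter would generate the automatic baseline email.
example = Encounter(age=52, icd10_codes=["M54.16"], is_new_patient=True, email="pt@example.org")
print(should_trigger_baseline_survey(example))  # True
```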
And our numbers are consistent. People, once they enroll, they stay enrolled, with, I would say, no cajoling currently from the physicians. I don't even know which of my patients are enrolled or not enrolled. Maybe bad doctor on me. Just to give you some ideas, we look at things like fatigue, pain interference, pain intensity, sleep disturbance. So for the PROMIS measures, 50 is the national average. So you take a group of people, you take this room, our PROMIS average, if we had a normal distribution, would be 50. So when I say fatigue or sleep disturbance is not really my patients' main issue, well, the data is actually holding true on that, right? That would be wildly different in another clinic. And as we showed in some of that data earlier, people that are returning to work and not returning to work in a low back pain registry are different. This is how we start to tease these things out. We start to pull out, let's take a group of people that are high anxiety, high depression, not working. Their outcomes are going to be different than somebody that does not have any of those issues. And we can start to look at it once we get sufficient numbers. Other ones, anxiety, depression, you see we're actually not bad. Again, we clearly see these patients and we treat them differently, but usually we have other groups that are treating these patients and we send them to those groups. A few examples of how we'll be able to start to look at it, and these are cases, but start to think bigger than cases. So case one, we have a 70-year-old male, spondylosis without myelopathy or radiculopathy. As you can see, again, all the depression, anxiety, when you look for it, they're at our baseline: depression around 50, not depressed. Pain score, again, pain intensity scores are tough. When do you ask somebody what their pain is? I can't tell you how many times I've had somebody say, it's not hurting today, why are you here? Because walking in, it was a 10 out of 10. These are things that are tough to get when you're asking a single question, you know, on an email survey, right? It's just challenging on some of these things. But you can see an increase in pain, you know, we generally start with conservative measures. You know, maybe that wasn't working for this patient, then we go to an invasive measure, and it seemed to go down at three months. But you see no real changes in depression, because they weren't depressed. Another case, 30-year-old female, disc displacement with radiculopathy. You see they're a little higher on the depression scale to start, maybe a little higher on the anxiety. But if you look in the bottom corner, again, you see pain at baseline and at the first eight-week check-in, a little bit different. Maybe it's natural history. Again, we're doing something at that point. I'll tell you, insurance always makes us start with conservative care. In this case, conservative care wasn't working. We did an injection, boom, down. These are things we can start to look at. I actually do not know if those are the cases. I'm making up a narrative here in these cases that we can pull and look at. But I mean, knowing our practice patterns, these are the kinds of things we can start to look at. Another case, 80-year-old female, pain in the right leg. Again, you're seeing, you know, different ways we're looking: sleep disturbance, depression, anxiety.
So you can actually even start to use this data to tell us, you know, chicken or the egg on, I mean, when I was in med school, we were all taught, well, if you're in chronic pain, you're depressed, right? And now we're learning the nuances are much better. The question is, should you treat the depression or should you treat the pain, right? And which one changes first and how does that work? I mean, these are things that this data set, when sufficiently large, could probably tease out, but we're gonna have to get some of that. Another case, 60-year-old, radiculopathy. You see not depressed, not anxious, and you see maybe more of a natural history of the pain scores. And again, we're predominantly treating pain and function. That's what we're looking for more. We are screening and making sure we recognize when someone's really depressed or having trouble sleeping or something else. And that's where we have a chronic pain program for them. So with that, I believe I'll turn it back over. Thank you. So I hope everybody gets a little, sorry, applause for him. It was good. I hope everybody got a little feel for what the potential is. And I think one of the things you see is that three organizations, very different, three different workflows, but the registry is flexible enough that we can integrate it. It's a matter of working with it for your facility. I think for the sake of time, rather than going through the last couple of slides, if you're interested in participating, I think I'll open it up to questions. If you are interested in participating, please come talk to us afterwards. Thank you all for a great presentation. I'm Talia Fleming from JFK Johnson and Edison. I have three quick questions. Number one, beyond the altruism of different organizations and physicians participating, has the Academy thought of maybe developing either some kind of participation symbol, logo, that would help the physicians not only get support in terms of their stakeholders. So patients would love the fact that their organization is participating, as well as the healthcare organization itself would love that. In addition, I hate to say it, but it could be promoted on Doximity, which later on translates to US News and World Report. So I don't know if there's additional incentives that we could get to help ease the burden of initiating this program, which is really good. Another question is, is there a role for PCORI to provide some resources to get at least the initial startup, six months, 12 months, 24 months for different organizations? Because it sounds like there's certain initial investment that would need to be made, and once it's running, certain things would be taken care of. And then last, is there a role for us to be letting our board know, C-suite know, the fact that we're doing this? This way, if we can show, even on an institutional level, that we're having these outcomes, then maybe that would funnel back into providing resources, which would help to continue to build these other programs. So, just a couple things. You wanna take it, DJ, or you want me? I was gonna start with the third one. It's absolutely let your C-suite know. Absolutely. Can I, I'll start with the first. Your question of, have we thought of anything? I would turn that back to the audience. What do you need? What would you look for to say, this is important at my institution? And wow, if everyone in here said, we got a pin, a branding logo, and we would all be on board with it, that's probably what we're doing. 
You know, I mean, and I don't know what drives individual organizations. I was pretty, I mean, Vanderbilt is pretty honest. I mean, a competitive spirit is what drove me to be able to get it through. I mean, there were tons of other reasons to do this, but that's what I was able to sell it to the administration on, from that standpoint. But I think that's a question, you know, what is it that is gonna drive people to say, I'm gonna do this, and there are things we can probably address and things we probably can't, at least for the first time. Yeah, I think it's important to realize this is our registry, and we're molding it. So, what the audience and our academy members tell us is where it's gonna go. As far as the C-suite, absolutely tell them, you know, hey, we wanna be the best we can be. This is a good way to measure us. This is a good way to ultimately get data to sell to the insurance companies, how important we are. You know, the more we kind of carry the banner, the better we are. I forgot what the other, oh, PCORI. Do you wanna take that? Do either of you wanna? Stakeholder calls for us as a representative. We have not advanced any of those discussions too much. Early on, with Dr. Kennedy, we did explore a PCORI opportunity with their low back pain offering, but I can tell you that the specialty did not feel that the way PCORI set up that particular grant was appropriate for physiatry. Good afternoon, Michael Opleson from Shepherd Center. Thank you for a good overview of the registry, as we are about to be joining you on the registry. Thank you for joining. Absolutely. A question, and any of you can answer, it is specific to back pain just because that's the part of the registry that we will be participating in, but are your physicians explaining anything to the patient when they are in the visit? Because I'm sure that there's more buy-in if the doc is saying something to them versus a nurse or versus just getting the email. And then part two of my question is, in your after-visit summary or whatever your discharge paperwork is, are you putting anything written in that, sort of explaining what it is? Yeah, that's a great question. I will tell you, if you do that, like patients connect with their doctors, and if the doctor initiates it, I bet you would have a much higher rate than we do. We don't, that I'm aware of. I don't know if Yang or Schneider do. John Lesher does at Carolina Neurosurgery. Yeah. He does, and he has a higher participation rate. So what we've worked on is, you know, do we have a handout for our patients that the doctor can give? And we're very keen on doing that. The drawback for us, we've had nursing staff turnover, the pandemic has just, you know, we're now, it's not even the pandemic, it's the ramifications of nursing turnover. So like, I had a nurse that lasted literally a week, had a nurse of, you know, five years, then one came in, came off maternity and said, I want to go back to my kids. And so it's really hard to get that consistency going when we're at multiple sites with multiple nurses and change, because the doctors can do it, but they have so many things on their mind that, because we are enrolling in multiple prospective studies, they're pretty good about remembering those things. But to be like, oh, yeah, it's a new patient with back pain. Oh, yeah, let me give them this, let me do this. It's been challenging. I mean, is that what you guys do? Who are you?
And probably the best practice is to have the doctor individually give it out, but that's not likely to happen consistently. You could do things where it's just routinely given out to every new patient at check-in. We've even struggled with that a little bit because of our front desk staff turnover at check-in. Are they new patients? Are they follow-ups? And we're in a multidisciplinary clinic, so they have to remember to do it for the PM&R doctors but not the surgeons and not the others. These are logistical challenges, and every site's gonna be a little different. If this is your dedicated front desk staff for PM&R in that group, you're gonna get much better buy-in than if you're two of 12 exam rooms and the other ones don't want it. And we have three different registries going, right? So it's just...

And one of the things that we've envisioned and set up is a governance council. There are gonna be representatives from each participating organization who will meet on a regular basis to discuss what's working and what isn't. And that may come up in one of those governance meetings, to then steer everybody toward what seems to be working best.

Hi, I'm not sure who this question's for, but I heard a lot about calling and writing and scanning, and I'm curious, how does the data get to AAPM&R? Where is it housed? Is the AAPM&R housing this data, or is each institution? It's magic. And then, you know, the charts, the graphs that Dr. Kennedy showed, is that from the Vanderbilt EMR? Is that from the AAPM&R website? Where's all this happening?

So we work with an independent company that we're contracted with called Arbor Metrics. And correct me if I'm wrong, but Arbor Metrics is housing the data, and they serve as the repository. I can speak up, can you guys hear me? I can speak loud. So the scanning and things are happening, yeah. A lot of the scanning is happening simply so they can put a PDF in their own EMR and they don't have to click over to another database. And that's actually the way to go, right? We rely on a validated scale, public and national, to direct the treatment, because measuring something is how you're going to correct the treatment, right? Yeah, so technically you wouldn't have to use paper or scan anything, but we do that for immediate feedback to the physician. Again, just like Dr. Kennedy says, if your patient's expressing depression, you want the physician to know about that earlier rather than later. Any other questions? Okay.

Hi, my name's Chris Lewis. I'm a resident at the Shirley Ryan Ability Lab. Thank you so much for sharing. This is really interesting, and the collaboration's really inspiring. One thing I thought was really interesting is the creative solutions everyone was coming up with. These are individual institutions; everyone has a different situation, and it's a very complicated system. Given that, are there any methods for translating those solutions to other institutions, to potentially decrease the activation energy it takes for a new institution coming in and trying to create this? Maybe borrowing, you know, the way that the PROMIS survey was implemented in the EMR at one institution, and saying, okay, can we take that code and give it to someone else, and potentially not rebuild it at each place? I understand everyone's different. We would need somebody with good informatics skills. Oh, wait a second.
But, no, I mean, that's the whole idea of the governance committee, that we share best practice, and this is a learning curve. Everybody is learning as we go along how to best do the workflow. The workflow won't be identical at every facility, but certainly we can learn from each other and try to take best practice with the most ease and the least disruption. And hopefully, you know, with the experiences of a freestanding, independent hospital versus a rehab center in a large medical center, we can tease that out and say, oh, you're a freestanding, independent hospital, look at what the Shirley Ryan Ability Lab did; oh, you're in a large medical center, look at what Vanderbilt did.

And the great thing is every organization has the same goal: better patient care and, you know, better financials. So as an organization, AAPM&R, we all have the same interests. So sharing is only natural. We want it all to be better.

I really think if you look at these groups, I mean, these are big bureaucratic institutions that have done all the things we want, right? And the goal is we're showing we can do this across Cerner or across Epic. And, yes, I'm proud of showing the 5% data collection rate, right? I'm not ashamed of that. So, no, I think we're doing our part. And I don't think that the implementation process is really the barrier. I think the barrier is getting more places on board. But we have everything from small, private practices. I don't know how Carolina Neurosurgery is a small, private practice, but it is a private practice. It's private. It's very much a private practice.

Well, and the important thing, too, is that we do have all these different types of organizations, so we're going to get more robust data. Correct. You know, it's not just always the academic center seeing only that type of patient. We should see a spectrum of everything. Because you'll see different things at different institutions, and everyone thinks we have the hardest patients imaginable, right? Well, I do. So what does this tell us? It will give us all these measurables. So we know there are problems, we know this is not perfection, and perfect is the enemy of good here, right? But if we get enough places involved, enrolled, and going... We're actually out of time. If anybody else has questions, come on up.
Video Summary
The video transcript discusses the importance of data in healthcare and introduces a registry that aims to track real-world care and patient outcomes. It emphasizes the need for data collection to improve patient outcomes in the field of physiatry. The registry serves as a benchmark for quality improvement initiatives and is described as a single repository of data that will define rehabilitation practice. The participating organizations and sources of data, including electronic medical records and patient-reported outcomes, are mentioned. The PROMIS-29 tool is used for patient-reported outcomes in various domains. The workflow for data collection is explained, with surveys being completed at different intervals throughout the patient's care. Examples of data analysis and the experiences of three participating organizations are shared. The transcript also covers the challenges of data collection, including limited resources and the need for buy-in from patients and physicians.

In terms of implementation, the nurses identify stroke patients for data entry, and the physicians enter the ICD-10 code into the registry. Patient-reported outcomes are collected and sent to medical records for access by the attending physician. Follow-up data collection is done via email after discharge. The motivation to participate in the registry is to contribute to a national registry, access patient outcomes, benchmark with other centers, and justify the clinical impact of IRFs for stroke patients. Challenges include limited resources, determining family involvement in data entry, utilizing speech-language pathologists' time, and getting buy-in from patients, family, and physicians. Data collection rates vary but can be improved by increasing physician involvement and educating clinicians and patients. The registry provides opportunities for measuring and benchmarking outcomes, performance assessment, and quality improvement. Implementation at Vanderbilt University was seamless, but challenges include limited space and patient diversity. The data is housed in a repository managed by Arbor Metrics, and creative solutions are shared through a governance council. The ultimate goal is to improve patient care and outcomes.
Keywords
data
healthcare
registry
patient outcomes
physiatry
electronic medical records
patient-reported outcomes
workflow
challenges
limited resources
buy-in