AAPM&R National Grand Rounds: Exploring Function Like a PRO - video
Video Transcription
Welcome, everyone, to this month's National Grand Rounds. Tonight, I have the pleasure and great privilege to introduce our featured speaker, Dr. Sean Smith. Dr. Smith is an associate professor and the medical director of Michigan Medicine's Cancer Rehabilitation Program with a clinical emphasis on restoring function and reducing symptom burden in patients with a history of cancer. He previously chaired the AAPM&R's Cancer Rehabilitation Medicine member community. Currently, he serves on the editorial board of the Journal of Clinical Oncology, and he is also a deputy associate editor for the journal Supportive Care in Cancer. Dr. Smith has really been pivotal in advancing the visibility and communicating the value of our specialty to medical, radiation, and surgical oncologists. And he's been deeply instrumental in building the tremendous enthusiasm and energy that we're experiencing for cancer rehabilitation amongst our physiatry colleagues and within the AAPM&R. It's been exciting to watch Sean's endeavors and achievements as his career has progressed. While the clinical focus of his work is within cancer rehabilitation medicine, I think the presentation tonight is extremely relevant across any of the subspecialties or clinical interests in our field. Anyone grappling with a research question or struggling with a quality initiative can feel inertia, can feel stuck or overwhelmed at times, and I think this talk will be encouraging, if not inspiring. We'll have some time for questions at the end of Sean's presentation, and we'll see how things are going. With that, I'm going to turn things over to Dr. Sean Smith. Thanks, Sean. Thank you. That was a very kind introduction. Let me get my screen shared up. So thank you, Teresa, very much, and Megan, for introducing me, but also for having me be a sort of megaphone for some of the cancer rehabilitation and other PM&R initiatives that we have going on. 
As Teresa alluded to, the title at least implies this is a cancer rehabilitation talk, or a talk about outcome measures, or a talk about a very specific study within cancer rehabilitation about outcome measures. But I hope that everybody watching, no matter what your sort of specialization is within the field or rehab in general, can take some of this and maybe use it to apply to what they do, or what they and their colleagues do, to sort of maybe move the needle a little bit more. This really is a story as well as a little bit of a research presentation about how we can do things. No conflicts of interest or anything to declare, but I will say before starting all this, I was incredibly naive about research in general and how to conduct it, certainly about what an outcome measure is, and definitely about multicenter coordination and collaboration with people before this. I am slightly less naive, and yet I was asked to be here. And so I hope that this naivete can sort of inspire you: if we can do it, anyone can do it. Sean, I'm going to interrupt you and tell you you need to be in presentation mode. So can you hit your slideshow? Oh, am I? I'm sorry. Now am I out of it? Perfect. Okay. So, well, then you may have seen the "we" kind of creeping up, which is this picture. I was taught long ago some very good advice to post your collaborators early on in a presentation while you still have the audience's attention. So I'm going to do this, and I'm going to do it a few more times throughout this. This really is a group project. I'm just the megaphone tonight for it. All 12 of those people up here were instrumental in this project that we're going to describe. And so thank you all. And I enjoy working with everyone. A special shout out to Sam Shapar and that super cool Shirley Ryan black and white photo that they have. Gail Gamble on the bottom right, looking like a badass, but everybody is really cool. 
So I'm really happy to be working with these people. The impetus behind this is a story that we all kind of know, and that's that there's a growing need for rehabilitation for almost everybody, and cancer rehabilitation is pretty illustrative of that. So there are, right now, 16.9 million survivors, projected to be over 22 million within 10 years. Survivors are getting older, so they're carrying multimorbidity. And one of the reasons they're surviving is because we're actually treating this disease better, and that's wonderful. So it's curative, but the cures are toxic a lot of times: chemotherapy, surgery, radiation, immunotherapy. So a lot of those soon to be 22 million folks are gonna have impairments and reduced function and need rehabilitation. Contrast that with some of the more traditional, if you will, PM&R sort of subspecialties. Spinal cord injury has roughly 294,000 people in America. Essentially all of them are gonna need a physiatrist, but that's still a relatively low number compared to the wave of cancer survivors coming. Stroke and traumatic brain injury have a lot, but still, I would put cancer at least on par with those subspecialties in terms of the number of people who are going to need good rehabilitation. Unfortunately, we're a bit misunderstood in terms of what we do, and patients don't always get to us. So a patient who's having active cancer treatment or surveillance is going to follow closely with their oncologist, obviously, or if they're in the survivorship mode, meaning they're sort of out of treatment or emerging out of treatment, they're gonna be seeing their primary care physician for these problems that come from cancer treatment. But as has been documented in many studies, and I'm only listing three of them up here, the primary care physicians and oncologists don't really feel comfortable and don't refer for rehabilitation or manage these themselves. 
It comes down to essentially time and inadequate training, as they have self-described. This isn't me bad-mouthing these specialties; this is them, in these survey-based studies that I'm citing, saying this. So if primary care physicians and oncologists can't manage or address the rehabilitation needs, are we getting those patients? The answer is not as much as we should. So this is one example, this study here that came out of Mayo, where they looked at 244 patients and systematically reviewed oncology notes and found that 82% had at least one physical impairment. The oncology notes discussed function relatively infrequently, but balance and impaired gait came up a little bit more: a fifth and a fourth of those patients had that. And you know the number that actually had balance or impaired gait problems was probably higher than that, and I don't know how you have balance problems but not impaired gait, but regardless, it's probably a lot higher than that. Despite that, the symptoms that generated supportive care referrals, not to rehab but for supportive care, were pain, weight loss and nausea. Function was one of the least referred problems, and no functional impairment generated a PM&R referral. And that was at a hospital with a very well-integrated PM&R program, a very well-respected PM&R program, and one with cancer rehabilitation services at the ready. That's a problem. So how do we meet the needs of this growing population of patients with impairments when oncologists and primary care physicians can't do it and we're not getting the patients? The answer is we need research to move the needle. We have to demonstrate our value. We have to say there is clear evidence that we can help with problem X, so you should refer to us for that, or our treatment Y can help your patient, so have them see us and we might be able to administer it. We are behind in research, though, as anyone watching this probably knows. 
I'm gonna go through a couple of slides to illustrate that point. There's more that I could do. This was interesting: when you looked at a snapshot of oncology randomized controlled clinical trials (and that really is the gold standard, the double-blind randomized controlled trial). If you wanna get into an oncology guideline paper or recommendation, you need to have that level of evidence to really be recommended without reservation. And quite frankly, you probably need more than one randomized controlled trial. In oncology in this period of time, they had 694 phase three randomized controlled trials to test whether their interventions were helpful. Contrast that with rehab, which is broad, not just PM&R: 194 total randomized controlled trials in the year 2019. Now, you're probably familiar with this from COVID and the vaccines, but if you're not, trials often go in phases. Phase one being: is this feasible? Does it maybe work? And is it safe? We do this at an institution with a relatively small number of patients. And if you get a signal that it's helpful and not harmful, you can move on to phase two, which might expand it into a larger population at your institution. And you check for more adverse events, but also maybe add a few secondary outcome measures. And then if that works, phase three is, okay, let's test it maybe across the country or the world, different populations, urban, rural, a wide variety of patients. Does this work? And really drill down to: what are the adverse events? There aren't many phase three rehab randomized controlled trials. I can think of one or two off the top of my head. And I've been giving this presentation in my head a couple of nights this week, and I still haven't thought of more than one or two. And they had 694 phase three trials alone, more than three times the total number of rehab trials of any phase in a year. That's a big problem, and it's reflected in this, which is our journals. 
If you look at this, this is impact factor, which is not the be-all-end-all of how good a journal is, and I don't wanna make that claim, but when the numbers are this disparate, it does tell a story. So impact factor, if you're not aware, is the average number of citations per article over a two-year period. This top oncology one, CA: A Cancer Journal for Clinicians, had 292, compared with the Archives of Physical Medicine and Rehabilitation, one of our best journals, a really good journal, great editorial board. They publish randomized controlled trials frequently: 2.69. Go down the list: Lancet Oncology, Journal of Clinical Oncology. I didn't include JAMA Oncology, which is right up there in the 30s. They're dwarfing us, okay? And so when they're making their guidelines, if we don't clear the bar that these specialties have set, we're not gonna get in there, and our patients aren't going to get the care that they need. So we're really behind. Why is that? There's a lot of reasons, and folks in the chat or with Q&A later can certainly bring up more. I'm gonna list four. One is that we have an inherently highly variable patient population. We're treating function, which is anything. But if you look at any department or practice group within a hospital, you might have the spine and back pain doctors, you might have pediatrics, spinal cord injury, et cetera, and who they treat is vastly different. And so if you study an intervention to say rehab works, it won't necessarily apply to everyone. In addition, within our practice, even if we're in the same subspecialty, we might practice very differently. Some do bedside procedures, some don't; some are inpatient, some are outpatient; and how we approach a problem might be different. How I treat shoulder pain after breast cancer might be different than somebody else. And our interventions might be different, but we might still help the patient. We can't study an intervention if we do two different interventions. 
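To make that impact-factor definition concrete, here's a minimal sketch. The citation and article counts below are invented for illustration; they are not real journal data, just numbers chosen to land on the figures quoted in the talk.

```python
# Impact factor for a given year: citations received that year to articles
# published in the prior two years, divided by the number of citable
# articles published in those two years. All numbers below are invented.

def impact_factor(citations_to_prior_two_years, articles_in_prior_two_years):
    return citations_to_prior_two_years / articles_in_prior_two_years

# Hypothetical high-citation oncology journal vs. a hypothetical rehab journal
print(impact_factor(14600, 50))   # 292.0
print(impact_factor(1345, 500))   # 2.69
```

The arithmetic is trivial; the point is how lopsided the ratio of citations to citable articles is between the two fields.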
So we need to sort of, to an extent, be on the same page in either what we do or what we measure. One of the reasons why we're not on the same page is that we don't have the evidence to make clear rehab guidelines to say, this is how you treat shoulder pain in a breast cancer survivor. And so because we don't have evidence, we don't treat the same way, and then we can't generate that evidence. It's a vicious circle. And finally, we don't have the infrastructure that a lot of other medical specialties do, quite frankly. You know, there are very few rehab subspecialty organizations that have national research coordination. There's the model systems in TBI, SCI, and burn. Other groups like the Cerebral Palsy Research Network have a large rehab component, but they're few and far between when you compare it to, say, oncology, because I can't count the number of cooperative groups they have: organizations that have infrastructure built in for data sharing and for, basically, plug and play. If you want to run a trial, do this, go through this organization, and they'll get it out to their patients, have coordinators, et cetera. It's a lot easier to start a trial in oncology than it is in rehab. Our startup cost is higher. I'm going to go back to the variation, though. So I made this kind of off the top of my head to illustrate the point that, you know, I think that we fall on the spectrum of having patients that look differently, that act differently, and that have completely different problems when you compare us to, say, cardiology and ophthalmology, these single organ system specialties. That's not to say there's no variation in there, or that research is easy for them, but it's probably a little easier to have similar outcome measures and similar intervention styles if you're in those specialties. When you break it down even further within PM&R, there are some specialties that see a more varied population, and don't take this as gospel. 
Again, this is off the top of my head, and this is not to say that peds and cancer, which I put as higher variability populations, are any more challenging or noble or anything like that than a less variable population, but I do think it has implications for research. You know, with cancer, we have a huge amount of variation. We treat patients who are just diagnosed, who might need to get stronger to tolerate treatment. We have patients in treatment who develop impairments like peripheral neuropathy, fatigue, cognitive impairment. We then have patients who go on to have no evidence of disease, but deal with chronic cancer-related issues even in survivorship, or patients whose disease is incurable, but still have impairments as their disease progresses that we treat. Compound that with the fact that we have to treat every single cancer that walks through the door, every single treatment regimen possible, every location of disease, and there are near-infinite possibilities of what problems patients can walk through the door with in a cancer rehabilitation clinic. And so with that variability, how do we begin to approach this idea that we need to have more clinical trials or better research to show that our interventions work? And so this is gonna get to sort of the heart of the presentation: what it is that we're doing, or did, and what we're going to be doing. So, you know, we wanted to address this problem, and this problem has a similar origin story to ones I'm sure many, many of you have. We were at, in this case, an AAPM&R meeting. We were, you know, leaving. We had like an hour in a room, and we had a great discussion, but we thought of more problems than solutions and questions than answers. And then we were walking out, and I think it was Andrea Cheville who said, you know, yeah, we should be measuring the same things. And it just kind of clicked and made sense. 
And there were five of us who then went across the street to a bar, and, you know, a pint of Two-Hearted later, we were scribbling ideas on a napkin. And for some reason, among all these ideas that we've all had when we meet with our colleagues and talk about how we should do things, whether it's research or clinical or what, for some reason, this one stuck, okay? And we decided we can sort of start to show our value, even if our interventions are different, if our practice patterns are different, if we see different tumor diagnoses, if we do different interventions for patients: if we're measuring the same outcome to say rehab is good, then that's going to help us. So unfortunately, there wasn't a clear candidate for a great cancer rehabilitation outcome measure. I'm gonna talk about two of them that sort of purport to maybe get to things that we would think rehabilitation would help with. These are existing cancer rehabilitation measures, okay? This is called the FACT-G, the Functional Assessment of Cancer Therapy - General. And sorry if it's a little bit small on your screen. You'd think something called the Functional Assessment of Cancer Therapy would be right up our alley, but it's not. This is just one of a few pages of it. But you look here at this physical wellbeing section: they have something about nausea. And these down here about social and family wellbeing: my family has accepted my illness; I feel close to my partner. There's other questions about spirituality. Those are all important things if you're going through cancer treatment, but they're not modifiable by rehab. So if I get a patient who comes in with a femoral neuropathy because of where a tumor was, I can give that patient an orthosis, an exercise plan, potentially pain relieving interventions, and that patient can be walking around a lot better. 
But if they're still nauseous or their family hasn't accepted their illness, the score of the FACT-G might not move and it might not show that I've improved their function. So this isn't really good for rehab. We wanna show that our interventions work. There's this, the PROMIS Cancer Physical Function. PROMIS is the Patient Reported Outcome Measurement Information System. Cancer Physical Function, that sounds right up our alley. But this is about 45 questions long. It has considerable repetition, and you can reduce it to make it a computer adaptive test if you have a computer and software, which is a non-starter for a lot of practices already. And even then, it's only physical function, okay? It's not getting to fatigue. It's not getting to other elements of it. It's just one narrow area where we can help with patients. So our team sort of assembled and rallied around this idea that we don't have something that really will show our value, but we need it. Okay, and this group comes from a lot of different geographic areas, a little bit heavy in the Midwest and the East Coast, although I think probably a lot of PM&R departments are at least. But we sort of all have this problem, and we said, we want to come up with a measure that's easy to give to patients, it works. If patients are getting better, it indicates that. If it's getting worse, it indicates that. It can help us with clinical decision-making, hopefully research or at least QI, and it's applicable to the outpatient setting. It's a little bit more clear when to refer a patient for inpatient rehabilitation, and our outcomes are more clear with FIM scores and other QI measures now. But outpatient is where we're sort of lost. So within these measures, as a really quick sort of refresher for people, there's clinician-reported outcomes we could choose, there's objective measures, and they have their advantages and disadvantages. 
Clinician-reported outcomes are quick and easy, but they require training, they require the patient to be there, and they're full of bias, right? If I want to see if my intervention works, I will probably rate that the patient's doing better, as much as I'd like to not do that. Objective measures like hand-grip strength are really reproducible and they're accurate, but they're narrow in their focus and they require equipment, training, and that sort of thing. So we as a group decided we wanted to generate a patient-reported outcome measure, one that would work to show the value of rehabilitation in cancer patients, no matter what the disease state, no matter what the treatment, no matter what the problem is. We wanted to find something, and that was a big task. But patient-reported outcome measures are easy (patients can fill them out in the waiting room or at their home), they're cheap, and there's some evidence that in cancer patients, at least, there's a survival benefit, because if cancer patients are able to tell you about their problems early and often, intervention can occur earlier and be more effective. So how do we get from having identified the problem to the point of generating a patient-reported outcome measure that can actually show what we do and whether or not it works, okay? So back before the pandemic, there were these things called phone calls that I had to scratch my head and think about. And we had some of these; phones are like where you talk to people but you can't see their faces. And we used a modified Delphi method to come up with what we want and how we want it. And a Delphi method is basically: you answer some questions and we see sort of what the group thinks, and then you might circle back and generate new questions, and eventually you synthesize sort of what it is that the experts, or the people involved, want to see. 
And so doing that, we reviewed the existing measures and figured out what we liked and didn't like about them. And we kind of agreed upon what we wanted to do. So our steps were this, and I'll walk you through a couple of them. So we decided that we wanted to make a tool using existing items out of PROMIS, okay? So our options were we could write our own questionnaire from scratch, whatever questions we want, or use PROMIS, which I showed you earlier. They offer this sort of option to kind of mix and match, if you're not familiar with it. So I copy-pasted a few different items from different PROMIS measures, and they have thousands of items about fatigue, sleep, pain, mood, physical function, cognitive function, whatever symptom you can think of, social participation. And they have questionnaires that might be long, and they have short forms that could be, you know, two, four, six, eight questions long. I put a few of these up there. Say you wanted to make a questionnaire: you run an MS clinic, and you wanted to develop a new questionnaire to look at fatigue, sleep, and pain, because that's important to your patients. So you could say, I like this item from fatigue, I like both of these from sleep, and I think this is really important for pain. You can then put it together and boom, you have a new fatigue, sleep, pain short form for MS patients. Now, you don't know if this is any good or if it's gonna actually tell you anything, but you've just created it, and that's kind of how it works. As long as you acknowledge that these are PROMIS items, there's no copyright regulation, you know; you can email them and kind of make sure everything's okay, but it's sort of designed to do this. That's why it's really cool. 
The way these PROMIS items can be sort of interchanged is because they designed this couched in something called item response theory, which I'll get to in a second; essentially, it measures how good each of these items is at telling you the information it's supposed to. That's a very general way of saying it. So you can mix and match whatever items you want, and you can create your own form. So that's what we were going to do. So then we had to decide what it is we wanted to ask patients about. You know, we didn't wanna ask about something we weren't going to be able to modify, like nausea, okay? So, you know, through that modified Delphi approach, we decided we wanted to look at gross motor function, upper extremity function, fatigue, and social participation as the primary domains. But then within that, we wanted to capture a lot of cancer-specific issues, okay? So we wanted to look at, you know, peripheral neuropathy, so balance and walking had to do with that under gross motor. For upper extremity function, a lot of fine motor things with the fingers. Upper extremity function also got to lymphedema in a lot of our patients. Fatigue and cognitive interference overlapped. And you get the idea that we wanted these four domains to sort of branch out and cover a lot of important areas. So then from there, we went through the PROMIS items and we got to what was important and what we wanted to pick and choose from the thousands of items available. So this is what a PROMIS sheet looks like. It's a Likert scale, essentially, and you're asked all sorts of these things. And I'm gonna drill down into this one here to give you an example of how we can tell if one of these items is gonna be useful or not, okay? You can see on the far left, next to "Are you able to climb up five steps?", there's really tiny font that says PFB10. That's because each of these has its own kind of unique identifier, okay? 
So let's say we wanted to include "Are you able to climb up five steps?" in our questionnaire. That seems like a reasonable question to ask. It's directly modifiable by rehab. Let's see. So we got this printout, which I kept small and didn't abbreviate or anything on purpose, to show you how Greek it looked to me when I first saw it, and to a certain extent still does. This is kind of what you're up against, but it's not that difficult to figure out once you, you know, have somebody who can clue you in on what you're looking for. So I'm gonna try to do that. So I found PFB10 right there. You know, are you able to climb five steps? And you see these numbers here, okay? You see this here encircled is a range, negative 2.6 to negative 1.09. Think of it as the difficulty of the item. So if zero is the population mean, like an average healthy person would be a zero, this would be a little bit easier for people to do than an average task, if that makes sense. Whereas if you were to ask patients about running 10 miles, that would be very much skewed into the positives, not the negatives, from zero, because that's a difficult task. This number here is what you'll see on the next page. This basically tells you how useful the item is, how much information the item's actually telling you. It's plotted against theta on a curve. And what that is, is how likely the question is going to give you the information you want. So if a patient says, I can climb five steps very easily, how likely is it that that's accurate, that the patient can do that? Or if they say it's hard, how likely is it that it's actually hard for them? So this is how those numbers are generated. So PROMIS, when they were field testing these items, asked thousands of people these questions, and they generated these curves of difficulty. And you can see here, there's that zero. 
And if you shift it to the left or the right, you get easier or more difficult tasks. And so climbing five steps would be shifted a little to the left; running after a bus would be shifted to the right. And here, this is that other number I was talking about, plotted over theta, which tells you the amount of information provided. For this particular item, I couldn't find the curve for PFB10, but this one is really good. It gets to over six, which is a really good number. It tells you a lot of information. It's nice and evenly distributed, like a bell curve. The negatives aren't really high and the positives aren't really low. This is a really nice, even curve. So we wanted items that we thought were applicable, but also had scores like this. So after all of us sort of came up with the items that we thought were good, we went over them, we voted on them, and we added in a few factors we wanted to measure in clinic, like what type of cancer you had, your age, treatment history, BMI, that sort of thing. And then we came up with these 21 items to say, this is what we picked; these are what we want to test in our patients to see if they actually will help tell us what we need them to. Okay, so after all that process and all those phone calls, we came up with 21 candidate items, each of which might make the questionnaire; we don't know yet, though. I'm gonna jump back here and show you that last bottom right dark blue circle that says we had to estimate the power and the number of patients recruited that we needed. That sounds like research to me. And that also sounds like something somebody else, with the skills to do that, would need to do, which also sounds like you need money to do that. And you would be right. So I wasn't able to do a power analysis for something like this out of nowhere, and none of us really were. And we also weren't able to coordinate a study ourselves. 
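For the curious, those difficulty and information numbers come from item response theory. Here's a hedged sketch using a standard two-parameter logistic (2PL) model; the parameter values are made up for illustration and are not the actual PROMIS calibrations.

```python
import math

# Standard two-parameter logistic (2PL) IRT model.
# theta: a person's level on the trait (0 = population mean),
# b: item difficulty, a: item discrimination. Values below are invented.

def prob_endorse(theta, a, b):
    """Probability of succeeding on / endorsing the item at level theta."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information the item provides at level theta."""
    p = prob_endorse(theta, a, b)
    return a * a * p * (1.0 - p)

# An "easy" item like climbing five steps sits to the left of zero
# (difficulty around -1.8, inside the -2.6 to -1.09 range quoted above);
# a hard item like running 10 miles would sit well to the right.
a, b = 2.0, -1.8
for theta in (-4.0, b, 2.0):
    print(theta, round(item_information(theta, a, b), 3))
# Information peaks at theta == b, where p == 0.5 and I == a**2 / 4,
# which is why the curves in the talk look like bells centered on the
# item's difficulty.
```

This is also why a range of difficulties matters: each item's information curve is tall only near its own difficulty, so all-easy or all-hard item sets leave parts of the patient spectrum unmeasured.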
Okay, so the snag, if you will, the problem, is that money was the missing puzzle piece. And this is true of most all research that's not, like, a retrospective chart review or something like that, and even those. So what could we do? We could try to get some big NIH grant and strike out and spend months preparing for it, only to lose out and not get a million dollars because we didn't have preliminary data. In our case, though, we went to the Foundation for PM&R and submitted an application and applied for a grant. And I have no affiliation with the Foundation for PM&R, but I will say that we applied and we were funded, because we had a good project and we had this sort of base of work to show them and say, hey, if you give us money, we're going to do this. And I want to say that, compared to grants I've since applied for, it was actually relatively painless, a lot easier than the EMR and a lot of other things we use. So if you have a research idea, this is a PM&R-directed funder, and I really encourage you to apply, but also to support them. This is a bit of a tangent I'm going on, but the money we got was fantastic, but it was really small. It funded our research coordinator, Gina Jay, who was one of those 12 people in the pictures. And without her, this project never would have gotten off the ground. She did a lot for coordinating things. But if it had been a larger grant, we could have bought out clinician time or more research support to make this even easier. As it was, we had to use a lot of elbow grease and unfunded time to do this. And so please donate so they can give larger grants to us, so we can get bigger and better projects going, or participate in the 5K that they offer. And Foundation, if you're out there listening, there are ways to get passive income. You can jump on Amazon Smile or something like that. You weren't on there last week when I checked, so please do that so it's easier for us to give you money. All right, back to the problem. 
So we have our 21 items, and we need to now test these in cancer rehabilitation clinics. So of all the sites involved, six of them became performance sites, and that meant that we were giving the survey out to outpatients who came through clinic. We gave it out at over 1,000 encounters, on paper or tablets. And we had stats help for the analysis, and Andrea Cheville really walked us through a lot of that, too, because she was familiar with the process and has been the MVP of the team, I would say. So we kind of just started giving this survey out, and we were storing the data that Gina was helping to coordinate, and we got this. So this is a pretty busy slide, but what you need to see is that we had six different sites. We had 616 unique patients, again, over 1,000 visits because of return visits. This really validated our thoughts about being disparate practices. If you look at the different sites and the breakdown of the cancers that we had, some were heavier in certain cancers than others. There were different numbers and certainly different styles of how we practiced. And so this really validated, to me at least, that, okay, it's good that we did this, and we started to find an outcome measure that could actually apply, potentially, across six sites. And if it can apply across these sites, which are, again, pretty different, it's reasonable to expect that it would apply to other cancer rehabilitation outpatient practices. So when we got the data, we analyzed it. So these are our own curves. PROMIS has their curves, and we were able to get our curves because we had a bunch of numbers in an Excel sheet, and you can do this. I don't know how to make the curves, but members of the team did. And so we got to see, on each item, how much information it gave us and what was the range of difficulty. This is just six of the 21 curves; I'm gonna walk through a couple of them, okay? 
So this is, remember, that y-axis is how much information the answer is giving you. And the x-axis is the ability, you know, in terms of the challenge for the patient. You can see, like, the upper right one is a little skewed to the left. The bottom right is a little skewed to the right. Those are easier and more difficult, respectively, in terms of tasks. It doesn't make them better or worse. You want a range. You don't want to have all easy questions or all hard questions, because then you'll miss patients. And if you have all middle, or, I'm sorry, all sort of medium tasks, then you're gonna have a hard time differentiating patients. So this one was really good, and this one was a stinker. I'm gonna walk you through why. These are what they were. So this good one had a really high number on that y-axis. It gave us a ton of information. This was the superstar. Out of all 21 items we asked, this one gave us the most information about a patient. So if a cancer rehabilitation patient walked into our clinic and said it was really easy or really hard to carry a laundry basket up a flight of stairs, that gave us more information than any other item of those 21, which is really fascinating to think about. But it's, you know, it's gross motor. It's aerobic capacity. It's, you know, balance. It's upper extremity function. It makes sense. And this was sort of an afterthought. We threw it in towards the end because it captured a few of those subdomains. We weren't sure it would be worth it. Contrast with this one, how are you able to shave your face or apply makeup? It performed terribly. That y-axis goes up to, you know, less than 0.8, and you can see it just drop off, you know, and it's not very evenly distributed. It was not hard for anyone, it seemed like. 
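As an aside for readers who want to reproduce the general shape of these item information curves, here is a minimal sketch in Python, assuming a simple two-parameter logistic (2PL) item model. The real analysis used PROMIS-style graded-response calibration, and the discrimination and difficulty values below are purely illustrative stand-ins for a high-information item (like the laundry-basket one) and a low-information item (like the shaving/makeup one).

```python
import math

def endorse_prob(theta, a, b):
    # 2PL item response function: probability of endorsing the item at
    # ability level theta, with discrimination a and difficulty b.
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    # Fisher information for a 2PL item: a^2 * P * (1 - P).
    # High values mean the item sharply differentiates patients near theta.
    p = endorse_prob(theta, a, b)
    return a * a * p * (1.0 - p)

# Compare a highly discriminating item (large a, centered difficulty)
# with a weak, too-easy item (small a, difficulty shifted left).
thetas = [x / 10.0 for x in range(-40, 41)]
strong = [item_information(t, a=2.5, b=0.0) for t in thetas]
weak = [item_information(t, a=0.7, b=-2.0) for t in thetas]
print(max(strong), max(weak))
```

Plotting `strong` and `weak` against `thetas` gives curves like the ones shown: a tall, centered peak for the informative item and a low, left-shifted curve for the item that was not hard for anyone.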
And we really thought that this one was going to be a home-run, no-brainer question to include, because it gets to, you know, peripheral neuropathy, fine motor, you know, upper extremity function to get to your face. So if we had just, as doctors, sat around and said, these are the questions we should ask, we would have included this one, which really didn't help us with anything. It didn't tell us any information we really needed to know about patients. The other way we analyzed our numbers was regression analysis, which, again, is sort of seeing how much a certain factor impacts the scores, you know, to say it simplistically. So we looked at, you know, do patients with more advanced disease, or with active disease in general, score worse? If they did, then those items were a bit better. If, you know, an item matched with BMI or performance status, which is how clinicians can sort of gauge a patient's physical independence, then we were more apt to keep it. And so using the regression analysis and the IRT graphs, the graphs that I showed you, we were able to cut or cull nine items to get down to 12. And so, that one on the right now, those items were highly applicable to our patients. They spanned a range of different traits in terms of easy and difficult. And they also got to a lot of our domains and subdomains. And so this is, you know, our initial validation. And from this, we were able to get a scoring algorithm that PROMIS made for us, because again, they want you to mix and match and make your own, you know, questionnaires. We published a couple of papers so far, and we have two more we're working on. The one on the left goes into much greater detail on our process, if you're interested in learning more. And the one on the right is where you can read more about the numbers and the tables that we had. And, you know, I'm really proud of the group for doing this. 
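To give a concrete sense of that kind of screen, here is a minimal sketch, assuming made-up item responses. The grouping variable, the data, and the point-biserial shortcut are all illustrative stand-ins for the actual regression models the team ran; the idea is just that an item whose scores track clinical status (here, active disease versus no evidence of disease) is a keeper, while one with no association is a candidate for culling.

```python
from statistics import mean

# Hypothetical item responses (1 = no difficulty ... 5 = unable) for
# patients grouped by disease status; names and data are illustrative.
active_disease = [4, 5, 3, 4, 5, 4]
no_evidence = [2, 1, 2, 3, 1, 2]

def point_biserial(group1, group0):
    # Point-biserial correlation between a binary factor (disease status)
    # and the item score, using the population standard deviation:
    # r = (M1 - M0) / s * sqrt(p * q). A simple proxy for the
    # regression-based check on whether the item tracks clinical status.
    scores = group1 + group0
    m, n = mean(scores), len(scores)
    sd = (sum((s - m) ** 2 for s in scores) / n) ** 0.5
    p = len(group1) / n
    return (mean(group1) - mean(group0)) / sd * (p * (1 - p)) ** 0.5

r = point_biserial(active_disease, no_evidence)
print(round(r, 2))  # strong association: this item would be kept
```

An item that scored near zero on a check like this, the way the shaving/makeup item did in the real analysis, would be flagged for removal.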
This took, you know, it took a few years and fits and starts, and it could have been done quicker, you know, if we had pushed ourselves or if we had, you know, more funding to buy out our time. But, you know, the fact that we're gonna be coming out with more papers and that this really is truly the only, I think, really good measure of function in outpatient cancer rehab patients is important. And so it really helps us, you know, we can say, okay, we might practice differently or we might see different cancer patients, but we know that this is going to measure function in that population, and the questions can be related to the disease status, treatment status, et cetera. So there's now the potential for us to take this a little bit further and start to do some multi-center research, or at least retrospective analysis. And we have a common language we're speaking. So before we get to the questions and answers, and we're gonna save, you know, a lot of time for that and discussion, there are sort of four take-home points that Teresa and I came up with. You know, the first one being, you know, "we need more research," and it's in quotes because it seems like every paper I review and used to write, you know, this would be a sentence in the conclusion. And it's true, and it kind of gets repetitive, but, you know, if we want to move the needle and get sort of the respect we all want, you know, and the respect that we need for our patients to get our services, we really need to move the needle this way. You know, there needs to be research to show that what we do is effective or cost-effective. Either way, we have to do it one or both of those ways to really move the needle and to get these guidelines, be it cancer rehab, be it spine and pain, be it musculoskeletal sports, you name it. We need to do better as a field. 
I will also say that your ideas that you kind of sit around with your friends and just, you know, talk about and scribble down on napkins can actually turn into results. You just need to find, you know, the people who are sort of enthusiastic about this. And I promise you, and I said this in the beginning, if we can do it, anyone can do this. It's just, you know, some passion, some phone calls, and, you know, hopefully this is just the beginning for us. It seems like it is. But even if it's not an outcome measure, if it's an intervention you want to measure, if it's, you know, some other coordination, you can do it. And to do that, to make it easier though, we do need more funding. So I'm going to circle back to that and say we need to maybe advocate, you know, to continue, you know, model systems funding and that sort of thing, or expand it. But also, you know, groups like the Foundation for PM&R need our support. So please consider donating to them and other relevant organizations so that we can get better and bigger grants. And then finally, multi-center research is possible. It really wasn't that hard. I thought it would be incredibly hard to get data from Seattle into the Ann Arbor REDCap server. But, you know, there are lawyers who take care of this. And if they say, yeah, we'll do it, we'll do it. And they just do it. You know, I know that sounds, it's simplistic, but I wasn't writing up, you know, research contracts or anything like that. And you wouldn't have to either. You just, again, need the right people and maybe a little bit of seed money, a tiny bit in the grand scheme of things. But we can be coordinating more than we are. And once you have some projects like this, you can then try and get bigger funding, right? Or you can get philanthropy if you say, hey, we actually can generate results. This is what we did with a small chunk of change. If you give us a lot of money, these are our plans. So that's the genesis, and that's how it starts. 
And it doesn't take a whole lot to start. So I'm gonna, you know, wrap up by circling back to the 12. Again, we have to acknowledge their contributions to this in formulating the questions and sort of the concepts, and then certainly the ones who field-tested this and helped us with the research. Selfishly, I'm also putting six people at the bottom who have taken time out of their busy days, multiple times, and their careers to help me and help me grow to the point where I can participate and hang with the 12 above them. And so I highly encourage you to be a mentor, but also continue to be a mentee. And thank you to all of these folks at the bottom for that. And Mike O'Dell gave a great National Grand Rounds, which you can still stream, about the importance of that, that I encourage you to look at on the AAPM&R's webpage. And then finally, finally, you know, with this unfunded time, it's a lot of phone calls and nights and weekends, but I was fortunate enough to be able to, when I hung up the phone, kind of hang out with the coolest people I know. And so shout out to them, and everyone can relate to this, that, you know, when we get done with these busy days, this is, you know, our family and our friends and everyone is the reason we do this, and it's a great thing. So thanks to them, and thank you all for listening. And I know I kind of ran through that fast, but I was hoping to save about 15 minutes for discussion. And it looks like we have that. So I hope that this at least can get some ideas generated and that, you know, even if you're not looking at outcome measures or cancer rehab, if you have no research experience, trust me, you can do something, and the field really needs it. So I hope you consider it. So thank you. And I will turn it over to Teresa now. Thanks, Sean. I think that was really awesome. And I really appreciated this presentation. 
We have a few questions, but I think first off, I wanted to have you mention the journals, the journal on the left that really talks about your process. Could you tell folks where they can find these publications? Yes, I will scroll back. So this is in Frontiers in Oncology. We kind of shopped this around to a few different journals. This one, you know, has a section specifically for methods, and they kind of liked the paper. So both of these are open access. You don't need to log on to, like, a computer at a major hospital or anything. And there's lots of charts and graphs, and it's kind of like a recipe or a cookbook, if you will, about these are our steps. And, like, the appendix literally has some notes from our phone calls. It's that sort of painstaking in detail, if you want that. Or you can just email me or anyone on that paper. And then on the right is in the Archives of Physical Medicine and Rehabilitation. That's open access too. Great. And so one of the questions I've noticed is a question about, are these items applicable across different PM&R subspecialties? The question is, would you simply test these same 12 items in those fields or come up with entirely new items? I would probably come up with new items. It depends. You know, if you're, you know, in spinal cord injury, you know, "can I walk up five steps easily" is probably not a great question, you know. And then, you know, some of the questions about work wouldn't apply to pediatrics. So you kind of have to, you can look through the PROMIS items and get to the ones that you think are applicable, and then test those new ones just as easily as we tested the 12 that we came up with, with our patients. Philip Chang said amazing presentation and asked, once you have a centralized multi-center database, are you able to change the kind of information you collect after you've started data collection? 
So you need to, so yeah, so we had something through REDCap, a program out of Vanderbilt that many people are probably familiar with, that sort of collected all these data and put them into one file that you could download as an Excel spreadsheet. If we wanted to all of a sudden say, hey, in addition to this, we want to measure hand grip strength or whatever, we would have to run it through IRBs at every site and then also update the contracts that the sites signed with lawyers to say, we're going to share this new data as well. So you can do it, but it's kind of a pain, and it's one of those things that you probably shouldn't modify a lot, but it's kind of worth doing every once in a while to keep things refreshed. Sam Meyer asked, is this PROMIS subset accessible now to the general public? Yes, it's in the Archives paper, and we're going to make a webpage and try to actually get it into healthmeasures.net, which is PROMIS's sort of central database. But I think we're waiting on putting out the next two papers, just to kind of further say this is a good idea before we put it out there to that extent, but you can use it now. Great, another question came in about what's the educational level for this instrument? And I imagine all of the PROMIS items are on a similar reading level, but do you know what that is? Yeah, and I didn't get to this in this talk, but it's in our methods paper that we actually checked the reading level on everything, and most of them were, you know, around eighth to 10th grade. I think there were one or two items that were, like, 12th grade. So we paid attention to that and made sure that it was going to be applicable to as many people as possible. I think there's a good question about the relationship or connection to the FACT team at Northwestern, the FACT and FACIT work that is done by David Cella. Have you interacted at all with Dr. 
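For anyone curious how a reading-level check like the one described works, here is a rough sketch, assuming the Flesch-Kincaid grade formula with a naive vowel-group syllable counter. The team's actual methods paper would have used established readability tooling, and the example item wording below is a paraphrase, not the exact PROMIS item text.

```python
def flesch_kincaid_grade(text):
    # Flesch-Kincaid grade: 0.39*(words/sentences)
    #                     + 11.8*(syllables/words) - 15.59.
    # Syllables are approximated by counting vowel groups, which is
    # rough but adequate for a ballpark grade-level screen.
    words = text.split()
    sentences = max(1, sum(text.count(c) for c in ".!?"))

    def syllables(word):
        word = word.lower().strip(".,!?")
        count, prev_vowel = 0, False
        for ch in word:
            is_vowel = ch in "aeiouy"
            if is_vowel and not prev_vowel:
                count += 1
            prev_vowel = is_vowel
        return max(1, count)

    total_syll = sum(syllables(w) for w in words)
    return 0.39 * len(words) / sentences + 11.8 * total_syll / len(words) - 15.59

item = "Are you able to carry a laundry basket up one flight of stairs?"
print(round(flesch_kincaid_grade(item), 1))
```

Running every candidate item through a function like this is how you would flag the one or two items that land at a 12th-grade level while the rest sit lower.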
Cella to see about the interface with his tools? Yeah, I mean, he was instrumental in PROMIS. So we, you know, in working with the PROMIS group, we kind of worked with him indirectly, so to speak. But yeah, and they helped answer our questions. You know, like we would say, can we do this? Is this a thing, that type of thing. And they were really, really responsive. They want us to do this. You know, we weren't stealing their work or anything. This is what they want you to go do. Dr. Atchison asks, do you see the PROMIS tool equally effective in the three different instances of visits that you mentioned, i.e., start of treatment, during treatment, post-treatment? And is it possible to track those changes in the tool and be able to design a rehab plan more specifically? Yeah, you know, in our regression analysis, we really tried to look at, did the scores of these items change when people were, you know, in treatment or post-treatment? Or, you know, if they're post kind of primary treatment, whether it was because they're now cured or they have incurable disease. And so our items do sort of respond to that. We have some more historical data we're gonna maybe analyze for the next paper, to look at whether they even had a history of chemo or radiation or surgery and what that does for their function, and sort of use that as well. But I do think that, yeah, it's possible to see that the scores kind of oscillate depending on where they are in treatment. And then you can sort of reach a threshold of, you know, we should intervene or not. And I think that goes into, like, how much have you really interwoven this into your clinical practice? You know, you're gathering information, but is it leading to some decision supports or, you know, earlier interventions, for example? So, yeah, that's sort of 2.0 and what we've now been talking about as a group, integrating it. 
And we're trying to do this in a way that's very sustainable and not burdensome. And so different sites have now put this into, you know, the sites that have integrated it into the EMR right now use Epic. That's not to say you can't in another EMR. I don't care which EMR you use. But as an example, we're putting it into Epic, where patients would enter, you know, their answers, and it would generate a score and say, this is your standard deviation, and this is where you were at your last visit, and it can track it over time and all that sort of thing. So you can see if a patient's improving, staying the same, or getting worse. The nice thing about that, too, is it's easily extractable, I'm told, by people who know how to do that, into, you know, REDCap or something like that. And then, you know, generate some data for publication. I think you're touching on the answers to a couple of questions about what's next. What's your next endeavor with this? And I hear what you're saying about next steps. Are there other things you want to touch on in terms of next steps for you, for the group? No, I would say that we're a very open group. We're trying to get more people involved than just those of us who are on this. So, you know, if it's something you think you could participate in, you know, let me and others know. It's informal right now. There's some talk about going after big funding, but, you know, we haven't found the right opportunity yet, but that's next. And then I think what it is, is rather than having one or two of us, you know, get a K award and then go, you know, make your phase one trial, phase two, phase three, then publish something after 10 years, what if we just had thousands and thousands of data points that showed, you know, an outcome of, say, patients who got into rehabilitation at this cut point or this point in their treatment, you know, there's some signal of improvement? 
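A toy version of that improving/staying-the-same/getting-worse display might look like the following, assuming PROMIS-style T-scores (population mean 50, standard deviation 10). The five-point change threshold here is an illustrative assumption, not the group's validated cutoff, and the visit scores are made up.

```python
# Illustrative threshold for a clinically notable T-score change;
# a real deployment would use a validated minimally important difference.
MEANINGFUL_CHANGE = 5.0

def trend(t_scores):
    # Classify the latest visit relative to the prior one, the way the
    # described EMR display flags whether a patient is improving,
    # staying the same, or getting worse over time.
    if len(t_scores) < 2:
        return "baseline"
    delta = t_scores[-1] - t_scores[-2]
    if delta >= MEANINGFUL_CHANGE:
        return "improving"
    if delta <= -MEANINGFUL_CHANGE:
        return "worsening"
    return "stable"

visits = [42.0, 44.0, 51.0]  # function T-scores at three clinic visits
print(trend(visits))  # latest change is +7 points
```

Because the scores live in structured fields, a series like `visits` is exactly the kind of thing that can later be extracted into REDCap or a statistics package for publication.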
It's not that phase three randomized controlled trial, but we might be able to generate a lot of that type of data and put it out there to at least get better evidence than we have now and set the stage for bigger trials. So that's sort of, that's my thought at least. And, you know, hopefully this questionnaire gets used in those trials that people are, you know, going to write for their K awards as they come up. Thank you. Sean, I think one of my questions is geared towards, you know, we get driven to try to answer about our interventions, you know, the dose, the duration, the amount of whatever the intervention is. And then you look at the Jennifer Temel article about palliative care, where palliative care was sort of like, they had a consultation that included these components, and that's the intervention, you know, and I wonder what your thoughts are about these two sort of directions of research for our field. I think that's where we need to go. And so for people who might not be familiar with the Temel study, this was the one that shot palliative care into the stratosphere as far as its place in healthcare in general. And Jennifer Temel basically did a randomized controlled trial where she took patients with advanced lung cancer and gave them early palliative care. They didn't wait until the patients got really sick and were very close to death. And what she found was that patients' symptoms were way better and healthcare costs were down. In her study, which was an early-phase trial, they actually saw a survival benefit, but that wasn't reproduced in the subsequent phase threes. But that was enough that now palliative care is everywhere and no one questions it. So, and it makes sense, right? So if she had done a trial that said giving morphine at this dose for this problem works, that wouldn't have shot palliative care to where it needs to be. So I think that if we sort of, if our intervention is the physiatric evaluation, this could be for back pain, right? 
There's the study out of West Michigan that, you know, showed early physiatric involvement reduced the number of spinal fusions but had the same or better outcomes, right? That's all that it took. It wasn't, this is what the physiatrist did. It's just, patients saw a physiatrist and things got better. I think we need to think about that. Right. Yeah. I would agree with you. We need to pick the right outcome measures, though, because physiatrists do practice differently. So that's, you know, coming back to this. There was another question also about whether there were other clinical areas, cancer rehab patient outcomes, that the 12 standardized items were not able to address. Did you see any weaknesses in terms of, you know, impairments that weren't caught well by your items? Yeah, I think we probably could have done more for the cognitive symptoms that patients develop, at least during treatment. We sort of, with the fatigue questions, we tried to get a little overlap and cheat there, because thinking of somebody who's so tired that they can't think clearly, you know, they would check that box. But, you know, patients who receive chemotherapy and other treatments often go through a sort of brain fog, you know, and will test similarly to maybe some concussion patients. Most of those patients will get better, but a subset won't. And, you know, I don't know how much we would pick that up. And, you know, that's something that's potentially a weakness. So if we were running a cognitive trial, you know, I wouldn't make this questionnaire the primary outcome. Very good. I think we're reaching the end of our hour, but I, again, want to thank you for a really engaging talk that I think, again, is relevant across the breadth of our specialty, not just for those interested in cancer rehabilitation. And really, again, thank you for being willing to join us this evening. Thanks for having me. 
And to anyone listening, if you're interested in this or your own projects, don't hesitate to reach out and, you know, let's talk, because we've got to move the field forward. Great. Well, thanks everyone. And I hope you have a pleasant evening. Thanks. Bye-bye.
Video Summary
Dr. Sean Smith, an associate professor and the medical director of Michigan Medicine's Cancer Rehabilitation Program, presented his work on developing a patient-reported outcome measure for cancer rehabilitation. Dr. Smith and his team wanted to address the lack of an effective outcome measure in this field. They used a modified Delphi method to come up with the domains they wanted to measure, including gross motor function, upper extremity function, fatigue, and social participation. They then used existing items from the Functional Assessment of Cancer Therapy and PROMIS to create a questionnaire. They tested this questionnaire in six different cancer rehabilitation clinics and analyzed the data. They found that some of the items performed better than others and were able to refine the questionnaire down to 12 items. This process allowed them to create a patient-reported outcome measure that can be used to assess function in cancer rehabilitation patients. The next step for Dr. Smith and his team is to work on multi-center research and to seek funding opportunities. Overall, this work highlights the importance of research and the need for standardized outcome measures to demonstrate the value of rehabilitation interventions.
Keywords
Cancer Rehabilitation Program
patient-reported outcome measure
modified Delphi method
domains
questionnaire
data analysis
refined questionnaire
assess function
rehabilitation interventions