Research in Physiatry - Opportunities to Engage in ...
Session Presentation
Video Transcription
Good evening, everyone. Thank you for joining us today for the community session for research. We'd like to share with you, hopefully, a really engaging evening of opportunities to engage in and optimize the quality of physiatry research. And this is the tag-team Heather duo tonight. My name is Heather Vincent. I'm serving tonight as the community session chair for this year and the moderator. I am from the University of Florida, Department of Physical Medicine and Rehabilitation. My area of interest and expertise lies in sports performance, but also in performing PM&R research. I hopefully can share with you some pearls and information that I've learned over the years as associate editor for Medicine & Science in Sports & Exercise, and I've just sunsetted my term this year for PM&R. So, bringing some of that real-life experience, I'd like to introduce Dr. Ma, who's joining us from the University of Rochester.

Hi, I'm Heather Ma. I'm a physician. I completed residency in PM&R and then did a brain injury medicine fellowship at Spaulding Rehabilitation Hospital, Harvard Medical School. I was an RMSTP pre-applicant, and I have since taken a clinical path for most of my career, but I am trying to incorporate research. I have an MD with a master's. So my emphasis is really on clinical care, but there's so much that's needed in brain injury rehab medicine, so much that's not known. So I focus a lot on that.

Excellent, thank you. So what we hope to bring you today is some real-life experience from clinicians in the field, as well as the research side, and bring together some information you can hopefully take with you and put into practical terms. So tonight we want to cover just a few things. Please feel free to use the chat if you have questions or comments; we'd love to hear from you. We'd like to share with you the best practices to ensure high-quality research if you decide that's what you want to engage in, or if you want to build on what you're already doing. Dr. Ma is going to speak about bringing together a research enterprise in your specific physiatry environment: how might you leverage the opportunities that are there and make something happen? And then finally, we'll bring it to a close with how, once you've completed your work, you successfully disseminate it, giving you some tricks of the trade and some insight, from the editor's point of view, on what we look for when you're trying to make that first cut with your research, and, going beyond that step, on understanding what a paper rejection means. So we want to bring you some real-life experience related to that. We look forward to working with you today.

So let's start first with the concept that when you think about research, if you decide to take it on, take your time. Any project that you decide to do, whether it's the classic lit review or a project where you actually collect data and move forward, don't take any shortcuts. Carefully plan it out, think about it, seek advice, and take the time to do it right. And if you do it right, it will be a well-cited piece of work. Just to give context, for those of you who might have already done some research, you know that at minimum it takes about six months even to get a basic retrospective or chart-review study off the ground, and years to do a single study, depending on whether it's interventional or involves a rare population. So the first place we need to start is really getting the right question.
And keep in mind that if you have the idea for a project, it's likely that somebody else has had that same idea too. So instead of doubling down and maybe doing the work that somebody else has already done, if I could give you one strong suggestion, it would be: do a thorough literature search. This is really critical to make sure that your question is novel. It's perfectly okay to have the same line of thought as someone else, but the right question also has to have certain characteristics. You have to be able to test it, or you have to be able to characterize the particular area of interest that you have. The question should be discrete, so we don't want to dump a ton of variables in there and make it complex and convoluted, and then it's not feasible or not finishable. We want to make sure that it's also ethical. So if you have an intervention group and a control group, make sure the controls are getting something; what are they getting out of the research experience? And then also think about: if you do the project, what could the possible next step in the line of work be? If you start thinking this way, about more than just one-off projects, it's going to be really exciting for you as the researcher to explore what your findings actually mean. Dr. Ma is going to speak a little bit to the clinical importance of what it is that you're doing. What are the real implications of the findings? And avoid, at the endpoint of all your work, simply saying at the end that, quote, additional studies are needed. If you do the right question development up front, you will get some type of an answer that you can publish.

One of the most common mistakes encountered in clinical research is not appraising the literature correctly. So as you mentor your trainees, or if you're working with students, or if you're a fellow who's listening, have a really good understanding of what has already been done in the literature. And I mean it's not just doing a simple lit search that can be done in a week. A true, careful appraisal of the literature usually takes weeks or months. This means you're going to read the studies, look at the reference lists at the end, maybe read some of the papers cited in those papers, and really carefully review what's actually been done and what, in your opinion, is the quality of the work. Are they picking the right population? Do they have enough people in the studies to make conclusions? What's missing from those papers? And from your perspective, what could you do that's unique to fill a gap?

As you appraise the literature, it's very helpful to keep, on the side, an Excel spreadsheet or a notebook in which you keep a log of what you're reading, what was found, what the date was, and the sample size. It's a great way to register the work that's already been done, but what you're actually creating is the foundation of a true systematic review. Over time you can keep building on this, and you become more and more of an expert in that area the more you read. And from that, after those weeks or months of really poring over the literature, start to get comfortable with being uncomfortable in thinking about your question. So really sit back and question yourself: what do you want to answer, what does it mean, and what could the possible outcomes be?
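On the article log mentioned a moment ago: one minimal way to structure it is sketched below in Python with pandas. The column names and the single entry are invented placeholders, not a required format.

```python
# A minimal sketch of a structured article log. The columns and the one
# example row are hypothetical; adapt them to your own reading.
import pandas as pd

log = pd.DataFrame(columns=[
    "first_author", "year", "journal", "population",
    "sample_size", "design", "key_finding", "gaps_noted",
])

# Add one row per paper as you read it (this entry is invented).
log.loc[len(log)] = [
    "Smith", 2019, "PM&R", "adults post-stroke, inpatient rehab",
    142, "retrospective cohort", "placeholder summary of the finding",
    "single site; no severity stratification",
]

# Later, filter and sort the log to see what has already been done.
print(log[log["design"].str.contains("retrospective")])
print(log.sort_values("sample_size", ascending=False).head())
```

Kept up over those weeks or months of reading, a table like this becomes the evidence table of a true systematic review almost for free.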
So here's an example, a real thing that I'm working on right now with one of our faculty: to determine whether obesity impacts functional outcomes after an inpatient rehabilitation stay for stroke. This particular faculty member is interested in functionality after stroke and which patient characteristics affect it. So from this question, I encouraged this faculty member to write down the possible outcomes. First, patients with obesity could end up functionally worse than those without obesity. Second, patients with obesity could do better than those without; maybe there's a physiologic reserve associated with that condition. And third, maybe there's no difference at all, whether or not a person has obesity. Once you've got those choices, think about what each of those findings could actually mean clinically, and whether it provides something valuable. So really start to challenge your own questions and work through them; you're developing your research model. What if there are only a few differences in function, but not a lot? How would you interpret the data at that point? It's a really great exercise in wrapping your thoughts around what your actual question is. Once you've gotten through that process, it will really help you shape your true hypotheses. So once you've got your question, you finally work through what you truly think is going to happen based on the strength of the evidence or your clinical observations. And just remember, take some time and don't rush through this. This is probably one of the most important things that you can do in setting up your study.

The next most important thing is developing the wording of the question itself. As an editor and a grant reviewer, when I look at someone's questions, the first thing I ask is: is this person's question testable? Here's an example of an aim that is not testable: "A therapist-run psychological support program is better for patients after traumatic amputation injury than a web-based program." Why might this not be testable? Because we haven't defined what "better" is. We don't know specifically what is improving. We don't know the length of the program. We're not providing enough specifics yet. So we need to adjust our wording to get the question right: "A therapist-run psychological support program will produce greater improvement in Beck Depression Inventory scores after traumatic amputation than a web-based program by six weeks after hospital discharge." You can test that. And again, what it forces you to do is really understand the literature and produce something that's valuable and meaningful. And you do not have to be a primary researcher to develop a really good question; I just want to make sure that's clear for everybody listening. It doesn't matter what your background is. You can still develop a really good question. It just takes a little bit of time and some feedback. And this part I will emphasize: let the question get some review and processing by others. Get feedback to see if it makes sense.

We also want to make sure that we're choosing the right measures. If you look at the papers that ultimately don't get published, or when you read a paper and say, ooh, that wasn't a very good one, what are some of the limitations? Sometimes the measures just aren't right. They don't provide the information needed to answer what the study set out to answer. They're not valid or reliable for that population.
And most importantly, they're not sensitive enough to detect the changes that you're looking for. These measures could be affected by other conditions in the study that you hadn't planned for or thought about, or have huge variation, or were performed by testers who didn't have the qualifications to do so. An example might be someone performing EMG measures without many years of experience; their interpretation might not be at the same level as someone with electrodiagnostic specialty training.

I want to share an example of a limitation of an instrument that you might think about using. In our field, one of the things we look at is fear of movement due to re-injury, measured with the Tampa Scale of Kinesiophobia. This is a subjective rating of how fearful a person is about something going wrong due to pain. It's a 17-item scale, and it's fantastic for people who have chronic low back pain or knee pain. It's very well published and well suited for those conditions, but it's not clear whether this instrument is good for things like osteoarthritis. So keep in mind that just because an instrument is well published doesn't mean it's appropriate for the population you want to look at. Take some time to think about that. In addition, some of the items are difficult to interpret, so maybe you don't want to use them in populations like children, for example. "My body is telling me I have something dangerously wrong": maybe they don't have a context for that. "My accident has put my body at risk for the rest of my life": maybe they didn't have an accident, but they have some other type of pain. "Everything lets me know when to stop exercising so I don't injure myself": well, maybe they didn't have an injury, but if they've got a chronic disease, they can't relate to the question. So I think you see where I'm going: some of these items might not be relevant for your particular population. Pay attention, because some items on the scales you initially choose might not be right, and explore some others before you make a final decision.

Here's another example of one we tried to use, a real-life example. We were interested in running athletes, trying to get a sense of what happened when they had a lower-extremity injury, and whether we could use the Lower Extremity Functional Scale in this group. It's a 20-item scale, and it's fantastic for things like hip or knee osteoarthritis, but when we tried to apply it in runners, it had a ceiling effect. Basically, these people were way too healthy, and the scale did not show us actual limitations in this population. For example: do you have any difficulty getting in or out of the bathtub, or walking between rooms, or putting on your shoes and socks? Runners didn't relate to that. So you can imagine that they all scored really, really high, even though from an injury perspective they were debilitated. Just be aware that there are different ways to look at the type of condition you're exploring, and try to pick the right tool with input from other investigators. If you're unsure of which measures to use, you can actually test them out. So again, a real-life example from our laboratory: among children with juvenile idiopathic arthritis who have chronic pain and pain flares, we want to know how best to measure functionality in these children.
We didn't know what medicine combination reduces pain and improves function the most. At this point, we have no idea what scales or functional tests are most sensitive to joint pain in these kids. So what we've been doing is testing the ones that are successful in adults: things like chair rise, walking speed, some gait mechanics, joint flexibility, and hand grip strength. And what we're doing right now is determining which of these measures can discern whether or not a child is having a pain flare. Are children the same as adults? We don't know. So that's what we're exploring right now, so we can go to the next stage of the research, which is applying the intervention. So test your measures out if you're unsure. That is itself a great contribution to the literature; never underestimate the value of testing out your measures as a way of getting started.

Be aware of the population that you're trying to study, as well as how you'll access them. First of all, really get an understanding of who you're sampling. If you're studying something particular about that population, what age range is typically affected? Is it male, female? Is there a specific race or ethnicity? Is your population generalizable or highly specific? So take some time, as you look at the literature and make your list of the papers you're reading, to note who has been studied and where the results actually found significant differences for your interventions or testing. Refer to previous work in that area, and then think: okay, I've established my population, but do I have control over accessing this population? If I could give one practical suggestion to all of you tonight, it's this: for the success of your research, really think about what you can best access, or whether you're dependent on somebody else to get patients. I have seen many studies fail to reach their recruitment numbers because they were dependent on another clinic or another facility for patients. So think about doing the work in areas that you can also control. It's going to make your life a whole lot easier from a practical perspective.

Never hesitate to reach out to content experts for input. A lot of times the experts like to be asked, and appreciate being asked, because they know you're trying to do the right thing and set up your study design the right way. So reach out to others who've conducted work in the area. Often their contact information is listed in their publications, or if you go to networks like LinkedIn or ResearchGate, take a look and see how you can get in contact with folks, just to get a feel for their experience. More often than not, a simple 10-minute conversation can give you more insight on whether a study is going to work, based on their experience with the process, the types of measures they used, and the barriers they encountered, so you don't make the same mistakes. It can be very helpful.

I'd also encourage you to really use your librarians. They are an underused resource at any academic facility. Please get in touch with them. They really want to be a part of research projects, and including them as an author in what you're doing is extremely helpful. Going forward, the process for literature searches is no longer just a simple lit review. So if you assign fellows, or you're working with other faculty or students on a literature review, it's no longer just a willy-nilly "go do a lit review."
Let's do it right, and include your librarian, because, gosh, they can really help you get this work done. Working with them as a partner, they can help you set up the right search terms. They will actually help you prepare and write it up; in fact, the last one we worked with wrote up the entire section on the literature search, the methods, the keywords, and how things were generated. That way they're contributing their expertise and you do it right. They can really help you organize your thoughts and get you set up crafting that list of papers to start going through. It can be very, very helpful in the organization process.

And if you decide you want to do a literature review, take some time in advance. If you know you're going to have students coming next year or the year after, or you have med students or fellows coming, plan a year ahead and think about registering the protocol for your literature review. Get your librarian involved and use the registration process called PROSPERO. What that is, is you put in a little application describing what you're trying to search, what your purpose is, and who the authors are going to be, and the registration protects your idea. So if you register it and somebody else in a few months says, oh, I have a great idea, and they try to do the same thing, they can't, because you now have the stake on that idea and your idea is already in the works. Nobody else can come in and take what you're doing. So it protects your idea, but it also gives you legitimacy when you go to publish: you can say, I registered my literature review here. It's a great step forward, and this is the way a lot of journals are going. So think about that as a year-to-year process as you develop your work. For anyone online: has anyone used the PROSPERO process before? If you have, feel free to put some comments in the chat. It's pretty straightforward, but also very, very helpful with publication.

And this is what it looks like. What your librarian can actually help you do is shape and organize your search, so that you're not just gathering a pile of papers and trying to get through it yourself. There's a systematic way to look at the papers: what you started with, which ones you weeded out and why, which records were screened, and then which ones you included in your review. It's very, very helpful, and it saves so much work at the end when you're trying to explain to reviewers why you did what you did. It gives order and justification to your research process, and it really helps you become a better researcher.

And please, please, please get help with your stats before you start if you can. More journals and more editors now are looking for whether you thought about, in advance, how many people you needed to answer your question. If you don't end up with a significant finding, they might ask: is it because there's no real finding, or because you didn't have enough people? And if you didn't plan ahead, your paper might not be published. You might not need this for initial pilot studies, but once you have your pilot data, you can then plan the sample you would really need for a bigger study. This is really important when you think about doing an intervention.
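To make that concrete, here is a minimal sketch of an a priori sample-size estimate for a simple two-group comparison, in Python with statsmodels. The inputs are placeholders: the effect size (Cohen's d = 0.5, a conventionally "medium" effect) would in practice come from pilot data or the literature.

```python
# A minimal sketch of an a priori sample-size estimate for comparing two
# groups with a t-test. The inputs are placeholders, not recommendations.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=0.5,  # expected standardized group difference (Cohen's d)
    alpha=0.05,       # two-sided significance level
    power=0.80,       # chance of detecting the effect if it is real
)
print(round(n_per_group))  # roughly 64 participants per group
```

A statistician can refine this, but even a rough estimate like this, inflated for expected dropout, answers the reviewer's question before it is asked.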
I want to share with you another issue that's coming up on the horizon. In the field of physiatry, one of the things we are challenged with is not having a good handle on social determinants of health in research. So as you develop your research question or your lit review, whatever it is you're thinking about, be cognizant and try to include as many social determinants as you can. These are features of a patient population that can impact outcomes independent of anything else. It's not just age, sex, and gender; it's not just ethnicity. It's also geographic location, poverty status, and education. All of those features can have a significant impact on the outcomes that we measure in physiatry. We have a paper coming out that shows how we can get better at some of these processes, particularly when you're planning your work, so feel free to delve into this when it comes out. It's in the queue right now, so it should be published really soon. I hope it sets a really nice foundation for you if you are in more of a post-acute setting, a transition setting, or community care or long-term care with your patient populations. Think about study design and include as many of these as you can, and I bet you'll have a really strong paper when you report this.

Think about keeping your data collection process as clean as you can. For interventions or patient interactions, try to make sure you blind where you can. So whether it's your clinical coordinators, your staff, or your students, try to keep them as blinded as you can to what they're doing. It's not always possible, but do your best. With respect to data collection, always practice your measures first, even for a simple study. Run through it a couple of times; do mock sessions to make sure you get your language right and it feels comfortable. Everybody on the study team should feel comfortable with it. And try to keep your conditions and environment as clean as possible. What I mean by that is to keep the situation as consistent from patient to patient as possible. If it's a busy room one day and a quiet room another day with different conditions, your data are not going to be clean. So if it's a busy outpatient setting, keep all the testing in that busy outpatient setting, and vice versa. If conditions deviate, make some notes about it and keep track of it, so if things are weird, you have an explanation for why something may not have come out correctly.

I also want you to think about the feasibility of your work. This is also why a lot of studies either can't finish or can't get published: not enough people finished. Here's a real-life example I wanted to share, from one of our study teams between physical medicine and orthopedics, where we had to really think about whether the project could feasibly be accomplished. We recognized that people with orthopedic trauma had functional deficits, and we wanted to know whether a psychological support and movement program early in recovery would benefit measures of emotional wellbeing. Sounds great; it sounds important. But when we really tried to put it into practice, we ran into some issues. First of all, when we went into the patient rooms to collect our measurements, most of the time the patients were asleep because they were so medicated. They were on pain meds.
Fatigue was an important issue: we would get halfway through a survey and the patient had to stop because they were too tired, and we'd have to come back. Sometimes cognition was really, really poor, so post-anesthesia, people were not understanding some of the questions we were asking. That was a problem, and we had to come back at a different time. Sometimes we had patients who were frankly just suspicious. With some racial and ethnic groups, we found that some patients just did not want to participate at this time, especially when they were vulnerable, because they felt the researchers might be taking advantage of them. So that's also a problem. And in this case, with orthopedic trauma, whether they had an acute amputation or polytrauma with multiple fractured bones, patients were too overwhelmed, really didn't care about a research project, and were simply disinterested. So we realized that although this was a great idea, maybe this just was not the setting for this intervention, and it wasn't realistic. In this case, we found that this type of design was not feasible, but we learned from it and could go forward and think about better designs that might be more effective.

So as you design your work, think about minimizing burden for yourself as the researcher and for the patient. I know all of you on the line have so many clinical duties. You have so many obligations: your paperwork, your teaching, all the things you need to do from the administrative perspective. Realistically, how can you spread out the work to make it feasible for you? And during your patient time, what might be available for the research? If you only have 10 to 15 minutes per patient visit, is there something you can do in that little snapshot? Or do you have to catch it before they come into the patient room, or afterward at home using their login records? We also want to think about the patients themselves. Again, you might have a great idea in theory, but if you look at patient flow and clinic flow, can you optimize the awake time that you spend as the clinician? And by the way, as the clinician, you are going to get a lot more patient buy-in than a clinical researcher walking in with a clipboard. Patients will listen to you as the physician who's caring for them. So are there opportunities to work with a research team to take advantage of awake time, or with a family member who's there, to help with some of these things so it's not too overwhelming for the patient? Maybe a solution in some settings is using follow-up appointments or the waiting room: when patients are sitting in the clinic room waiting for you to come in, maybe they can fill out some things while they wait. So try to think about the opportunities and windows in patient flow where your research can work. And if you've got an idea, try it and time it, and make sure it's not too overwhelming for you or for the person sitting in the chair that you're trying to collect data from.

When you're collecting and maintaining records, I can't put enough emphasis on documenting everything, being transparent with all of your changes, being in touch with your IRB, and then auditing yourself.
It's very helpful, say every six or eight weeks, to just randomly pick a few charts and see where you, and I mean "you" as in the generic you of your study team, made potential errors, and then make notes on how you're going to fix them. The IRB loves that you are self-monitoring. Also, when you're logging your data in whatever system that might be, whether it's REDCap, Excel, or otherwise, keep your original unfiltered data separate, and save whatever you're processing separately. That will protect you from other issues that could happen. Scan your records or use electronic formats wherever you can, and please save all your calculations and all your data processing steps, so that if you have to reproduce them, you can. It protects you and your study team from problems; I've seen at different universities how things can go very poorly when there isn't a good tracking system for where the data came from. And then finally, know your institution's rules for protecting your data. Just keeping something on a little drive is not appropriate anymore. Find out what the rules are, and put the data on OneDrive or another encrypted device appropriate for your institution. Just be safe: it takes 10 minutes to make a call, and it protects you.

And then check and recheck your data. Some common things that can easily happen are mistakes with coding. If you've got multiple people in your group entering data, I guarantee there are going to be accidental coding errors, no matter how skilled each person is. It happens. In addition, automated calculations in Excel can be problematic: sometimes if you type in a date of service, it pops out on the other end as a number instead, and then your numbers get all confused. And when you run your statistics at the end, look at the number ranges just to make sure there are no absolute weirdos that ended up in there. Go back and check, and again, you protect yourself. These are simple steps, but they can avoid so many problems at the end.

A couple of last things before I hand this off to Dr. Ma. When you decide to put your thoughts together, once your data are collected, take advantage of existing reporting guidelines. Don't try to wing it on your own; let these established practices help you. I'm showing you a couple here. One is the STROBE Statement, which is for observational studies. So if you don't do interventions but you do observation over time, that's fine; there's a great way to report your findings and make sure you're meeting the minimum standards when you write it up. It's really helpful just to make sure you're covering all those little points and the quality is up to snuff before you submit. You don't want to get rejected, before your data can get out there, because of these types of mistakes. This is a very common reason for paper rejection, and in the end, if you report well, your data will be cited more, and it gives you the power to compare your data to others'. Another reporting guideline, CONSORT, is for randomized controlled trials. It gets a little more sophisticated with the types of reporting. Maybe some of you on this call tonight have done this already or have seen it, but it is truly helpful for organizing your thoughts as you write up your final paper.
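Returning to the check-and-recheck step for a moment: the kinds of audits described above are easy to script. Here's a minimal sketch in Python with pandas; the file name, column names, and allowed ranges are hypothetical stand-ins for your own codebook.

```python
# A minimal self-audit sketch: range checks, coding checks, and date checks.
# All names and thresholds here are hypothetical; adapt them to your study.
import pandas as pd

df = pd.read_csv("study_data.csv")  # work on a copy; keep the raw export untouched

# Range check: flag values outside plausible bounds before running statistics.
out_of_range = df[(df["age"] < 18) | (df["age"] > 100)]
print(f"{len(out_of_range)} rows with implausible ages")

# Coding check: categorical fields should contain only the codes you defined.
allowed = {"group": {"control", "intervention"}, "sex": {"M", "F"}}
for col, codes in allowed.items():
    unexpected = set(df[col].dropna().unique()) - codes
    if unexpected:
        print(f"Unexpected codes in '{col}': {unexpected}")

# Date check: catch dates that Excel silently converted to serial numbers.
parsed = pd.to_datetime(df["visit_date"], errors="coerce")
print(f"{parsed.isna().sum()} visit dates failed to parse")
```

Run something like this on a schedule and keep a log of what it finds; that log is exactly the kind of self-monitoring the IRB likes to see.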
Save your copies of everything. If you have to, save backup paper copies; keeping both the original and the processed data can really save you from things going poorly.

So, some practical take-home points from this part of the talk. A good research question, no matter what your starting point in research is, is a clear, testable question with a hypothesis that you've really thought about for some time after reviewing the literature. Careful planning goes a long way toward preventing problems and determining whether something's feasible. Seek input from other experts in the area. Good documentation will help catch errors and prevent a lot of the issues that lead to paper rejections. And use the available tools and checklists to help you report once you're finally done. I'm hoping these are some good pointers as you get your thoughts together for excellent-quality research if you get started. Feel free, as you're thinking about this information, to use the chat function if you'd like, and I'm going to switch screens and let Dr. Ma get her talk set up for you.

So again, I'm Heather Ma. I've been in clinical practice out of fellowship for about three years, but I did do an MD with a master's; I did a master's in clinical research, actually at the NIH, and then I continued on to residency and fellowship. So this talk is about how to create a research enterprise in a really busy physiatry environment, with the RVU targets that we all have to meet, and the patient flow, whether it's outpatient clinic with a limited amount of time per patient, or inpatient practice where you need to see as many patients as you can. Time is limited. So how do you fit in research?

This is just an overview of what I'm going to talk about. Really, it's the steps to create a research study, and Dr. Vincent touched on a lot of this; we're going to be saying some of the same things. You start with a good question. I promise, in brain injury rehab medicine, there are lots of questions to be asked. There's lots in physiatry that we don't have good-quality evidence for yet. Then: developing a hypothesis, knowing the prior literature, creating a population (again, she talked about how important the population is), defining outcome variables, identifying collaborators, writing an IRB proposal, and hopefully completing the study, answering the question, and writing up the results.

So all good research starts with a good question. Honestly, in clinic, several times a day I ask these good questions, and I often don't have evidence to cite for them. This afternoon, I looked at a resident and said, I'm doing this for a patient based on a case report that came out 30 years ago, but this is all I've got right now. So there's very limited evidence. Just start with a good question, and then develop a hypothesis. You're always trying to reject, or find an alternative to, the null hypothesis. The null hypothesis is the thing you're trying to disprove: typically, that there's no difference between the groups you're comparing. Or in this case (I have a five-year-old son, so this one hit home): the null hypothesis is that Timmy brushed his teeth, and the alternative is that he didn't brush his teeth. So what's your data?
Well, if the toothbrush is dry, you reject the null hypothesis. So you say: no, you did not brush your teeth yet. That actually happens occasionally when I ask.

So, know the prior research. Dr. Vincent talked about keeping notes, keeping a log or something like that, of the articles you're reading. When I got to fellowship, I realized there was more research than I was able to digest in a year. So I just started collecting studies, and then I came up with a system. I have folders after folders; this is just a screenshot of the folders I have right now in my Box drive, all separated by subject, by title (I guess there are a few repeats, capitals, and lowercases). Every paper is stored by the last name of the first author, then the last name of the ending author, or an author that I recognize, the year the publication came out, and a word or two about the topic. That way I can search my folders, my whole Box drive, and find the article that's on the tip of my tongue.

And then, create a population. The question is always: should it be clinical research or basic science? Well, if you're a busy clinician, the easiest thing is going to be clinical research. I, at least, don't have time to sit down and look at cell cultures, or devices, or basic science mechanisms. Those are fascinating to me, I promise, but I haven't done that since I was an undergraduate, because I really haven't had the time. My focus has been clinical research. So if I want to incorporate research into my clinical practice, it has to be clinical. And you see all these patients who diagnose themselves on the internet. Where did that come from? Maybe that internet hypothesis is your question, or your null hypothesis, and you're going to try to prove that it's not the case.

There are lots of types of clinical research studies, as most of us probably already know. The basic ones: case reports, which we see a lot at conferences as abstracts, and case series, if you have a few patients you did the same thing to. Then there are cross-sectional studies, where you ask, at one point in time, what does the group look like? There are retrospective studies, looking backwards: case-control studies, where you define cases and controls and compare them, and cohort studies, where you look backwards at a whole cohort. Then there are prospective studies, case-control and cohort. The gold standard is always the randomized, double-blind, placebo-controlled trial. And then the things we all think are easy and should take no time at all, and really can take a year, are the reviews and meta-analyses.

I will tell you that the things that are low pressure, that leverage the patient interactions I have as a clinician on a daily basis, and that can be done on my own time schedule, are case series and retrospective studies, whether retrospective case-control or retrospective cohort. I have a population that I happen to give a medication to. And so, yes, they are biased; there are lots of things we can talk about, and we'll talk about it in a few slides.
But in general, I can look at the population who all had this diagnosis. Or a series: I have lots of patients with a TBI, some of whom I needed to give donepezil, and they actually did better. There's really limited evidence for that, so we can look back at a case series. The ideal has always been the randomized, double-blind, placebo-controlled trial; that's the ideal study. But the practical study is the cohort, whether retrospective or prospective. It's the practical side, and I, at least, can fit it more easily into a busy clinical practice. Cohorts are subject to selection bias, but in my experience, if you articulate that, if you confront it up front, and you say, I recognize that this research is limited because my cohort was all men, or all veterans, or whatever the cohort may be, here's the selection bias, and here's why I decided these patients were part of this cohort, then something is better than nothing. In physiatry, in our field, there is so little evidence for so much of what we do, and purely anecdotal reporting is so incredibly biased. Something that has been peer reviewed, something that has more than one patient in it, something that's more than just a case report, is better than nothing.

My hospital uses Epic as our EMR. One thing I started doing when I first began as an attending was to create patient lists. You can see the top ones are my clinical lists: we have red and blue services on our inpatient unit, so it's red or blue, or our unit name, 51200, and consults. But the second part is patient lists that are of interest for my research hypotheses and research questions. As I'm seeing patients, as I'm seeing consults, as I'm seeing outpatients, I am just throwing charts into these folders. The interesting-patients list actually has an Excel spreadsheet with a log of the patients and the reasons I think they're interesting. The research-patients lists one and two are from a certain study I did. Then there are my hypotheses: things like whether you should give somebody methylphenidate or amantadine, since there really isn't evidence for which one to choose; whether you should use Stenovit (there's a lot of good case series for that); the effects of marijuana, where the evidence is still coming out; and then there's a debate in my hospital between neurosurgery and physiatry about the timing of cranioplasty and patient outcomes, things like that. And actually, we just submitted a second draft of our comprehensive review, with a very small case series, on the use of donepezil in TBI recovery and how it's become my go-to second-line neurostimulant.

So, potential outcome variables. Think about outcomes; define your outcome variables. In our clinic, we use PROMIS, which, as most of you know, is the Patient-Reported Outcomes Measurement Information System. That's something I find I can hand to a patient while they're in the waiting room and say, here, fill this out. It then goes into a database that I can search later on. So that's a case where you're handing the patient the research tool and saying, here, do this. Consistent questions.
Before I was able to include PROMIS, I actually had all my patients fill out the PHQ-9 and the GAD-7, because I'm screening all these patients after a brain injury for the psychiatric disorders that often, unfortunately, coincide; they affect the same organ, the brain. So for consistent history questions, everybody got the PHQ-9 and everybody got the GAD-7. Then I can go back and ask: of these patients I saw, of this defined population, what were their GAD-7 or PHQ-9 scores, and were they dependent on X, Y, or Z? For our concussion clinic, we use the ImPACT concussion symptom score; I don't give everybody the entire ImPACT questionnaire, just the symptom inventory. I screen a lot for sleep apnea, because it co-occurs with TBI and with stroke, as most of you know, so I use the STOP-Bang questionnaire.

Consistent physical exam maneuvers: if I always do vestibular testing on somebody, and it's always done a certain way, that can become my population. The cohort is that these patients got this specific physical exam maneuver. Now, my biases are: why did those patients get that physical exam maneuver? Why did I choose to do that? But then the results are: how did the treatment, or whatever it was, affect the outcomes of these patients? And therapists. I, at least, often forget to think about the therapists. Do the therapists always do the same screening tests? Do they always do the same cognitive testing for inpatients, for outpatients? When they see a patient with a diagnosis like this, do they always start with a certain screening? So consider incorporating some of these things into your note templates, so that you don't have to think about them later on, but also so that they're easier to search for retrospectively.

So we've talked about asking a question, developing a hypothesis, knowing the prior research (Dr. Vincent talked a lot about that; it's really important), creating a population, and defining outcome variables. Next, identify collaborators. Potential collaborators at a major university (the University of Rochester is a big academic university) include undergraduates, medical students, residents, fellows, therapists, nurses, other attending colleagues, and administrators. What I have learned in the few years I've been here is that undergraduates take a lot of time, whereas really good medical students or really good residents don't take as much time, and fellows don't take much time at all. The amount of time that you or I need to invest as the researcher is inversely proportional to the level of their clinical training. At the same time, there are some basic and translational scientists I have access to. They don't take a lot of oversight, but they often struggle to ask the clinical questions, so I'm often having a conversation about the clinical implications of the research we're doing. Therapists: it depends on their experience. Nurses: the same thing; it depends on how much exposure they have to research, and they may require a lot of time if they don't have much. Colleagues; and then "statisticians." I put that in quotes because, really, do you need somebody who has the statistician label? In my department's case, we have a tremendous neuropsychologist who has statistics skill.
And so she functions as a statistician in our department. Now, she can't do something very complex, but for a basic retrospective cohort study, for a basic question, she can do the basic statistics. So, statisticians. And then administrators: department administrators are incredibly helpful, whether for reaching prior patients, collecting some of the data you need, or knowing what kind of data is collected. One of our administrators has to keep track of certain characteristics of our inpatients, so I have a cohort of inpatients for whom she's already tracking at least the demographic variables. I can ask her, can you run this query in your database? And within a few minutes, as opposed to the few days it would take me, she can come up with an Excel spreadsheet listing all the patients with this diagnosis and certain characteristics.

The IRB. The IRB is often scary; colleagues of mine can be easily intimidated or concerned. Know that the IRB is not something you should be afraid of. It's not a multiple-eyed monster. Retrospective trials are often given expedited review, though even an expedited review still often takes a few weeks, at least in any institution I've been in so far. But use it as a tool. You have to write a background section for your IRB proposal, so write it as an introduction to a future publication: do your good literature search and then summarize it. If you're doing a retrospective trial, there are often IRB protocols you can ask a colleague to borrow and modify, because a lot of the wording of certain sections can simply be repeated with some minor tweaks; proofread it, but minor tweaks will do. And think a lot about your outcome measures and your statistics. Think a lot about what your goals are, what your question is, and how you can best answer it.

Your research goal, and I've learned this a little bit the hard way, should always be to answer a question, to prove or disprove a hypothesis, not to publish. When I started a couple of years ago, I walked into our nurse manager's office and he had this box sitting on his floor. He said, these are a whole bunch of surveys we've been giving to patients. I've just thrown them in this box; someday I'm going to get rid of it. I looked at him and said, wait a minute, that's data, hang on, hang on. So I pulled out the data and we started going through it. I got some medical students and some undergrads to help me. It took a lot more time than I anticipated, even with some administrative help. We went through all the data, and then COVID happened, and isolation happened, and all these things happened. We made some administrative structural changes in our department, and all of a sudden we couldn't compare what we wanted to compare. So we didn't publish, because really the goal initially was just to publish. What was the question? Well, there was this novel idea, and I felt like everybody would be interested in it: they were having nurses do organized weekend homework, almost arguably skilled therapy, with some patients on the weekends. I thought that was a fascinating, practical, novel idea, but there was no way we could answer any question, because we didn't really have a good question or hypothesis.

So, real-world examples. Have one to three ongoing prospective observational studies that are collecting data on patients of interest.
Establish these studies up front, with enrollment and data collection going on each day. Say you're a general physiatrist with an interest in amputee long-term care; in your clinic, you're going to see about three patients with an amputation in a day. Again, know how long it would take you to collect all your data. During these visits, collect survey data and relevant functional tests like gait speed, chair rise, and grip strength, plus one or two measures of quality of life. Enter this into a data set and repeat the measures over your follow-up period. Keep it simple, but prognostically and clinically relevant. The things you're going to ask or test, like gait speed, chair rise, and grip strength, are also going to help you treat that patient clinically, so you're overlapping your clinical care and your research.

Create a data bank. In a data bank, you collect data over time in a population of interest. For example, if you're a physiatrist with an interest in sports injuries, especially endurance athletes, then as these patients begin treatment, offer enrollment in the data bank and simply collect information at that time. Be ready to ask your research question, similar to a retrospective study, with an application to the IRB, which can be expedited. Or involve trainees in data abstraction from patients in your system. Depending on your setting and environment, you can train people who rotate with you as members of a study team to collect a certain number of data points for a specific question, or have them enter data into a rolling data bank. Either method gets you help; use the downtime of the trainee to contribute to your research. Accrue interesting cases for case studies or case series. Develop a focused, brief review on a timely topic of interest. Use partners at your own institution or other institutions to prepare clinical pearls: high-impact, clinically useful, quick write-ups for specific journals. If you're going to start with something like a review or a clinical pearl, contact the journal first and just inquire whether their readers would be interested.

So, hopefully, you write up the results. The goal is always PubMed, right? But there are other outlets. These clinical pearls can sometimes show up as very quick publications in journals, sometimes as images. And at the same time, you can have an abstract, presented at a conference or published in certain journals. At the very least, if you don't publish, if your trial falls apart like the first one I tried to take on as a brand-new attending, at least you've increased your clinical expertise on a topic. I would argue that I am one of the donepezil experts right now, at least in Western New York, because we're writing a comprehensive review, and at this point I can say I know the literature. As a resident, I wrote a comprehensive review on amantadine; I know that literature. So I increased my clinical expertise. You're practicing evidence-based medicine. Even if you know the evidence is minimal, even if you know there isn't a lot, at least you know what's out there, and you're up to date on the state of the science. So your literature search is actually not a waste of time. It's going to help you treat patients better clinically. It's going to help expand your clinical practice. And it's going to make you a better clinician the more that you know.
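The rolling data bank described above is, at its simplest, just an append-only table with one row per patient visit and a consistent set of fields. Here's a minimal sketch in Python; the file name, field names, and example values are hypothetical.

```python
# A minimal sketch of a rolling data bank: one row per visit, consistent
# fields, "long" format so repeated measures stay easy to analyze later.
# All names and values here are hypothetical placeholders.
import csv
import os
from datetime import date

PATH = "amputee_databank.csv"
FIELDS = ["patient_id", "visit_date", "gait_speed_m_s",
          "chair_rise_s", "grip_strength_kg", "qol_score"]

def record_visit(row: dict) -> None:
    """Append one visit's measures, writing a header if the file is new."""
    is_new = not os.path.exists(PATH) or os.path.getsize(PATH) == 0
    with open(PATH, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if is_new:
            writer.writeheader()
        writer.writerow(row)

record_visit({"patient_id": "P001", "visit_date": date.today().isoformat(),
              "gait_speed_m_s": 0.9, "chair_rise_s": 14.2,
              "grip_strength_kg": 28.0, "qol_score": 62})
```

In practice this would more likely live in a REDCap project, which adds access control and an audit trail; the point is the shape of the data: the same fields, captured the same way, at every visit.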
So, Dr. Vincent, I can turn it back over to you. If anybody has any questions, they can enter them into the chat; I'm more than happy to take them. And again, contacting me through email is the best way to reach me.

Perfect, thank you so much. As she transitions out of the shared screen and I get into the other one: there are some excellent practical points here that I hope the audience can relate to. One of the points I agree with completely is that as you're developing your research team, understand the intent of the person coming forward saying, I really want to jump into research, I really want to help. Undergraduates do take a significant amount of time; the rewards can be great if they decide they want to follow through, but it does take a lot of effort. And really pay attention, too, to some of the clinical applications she was talking about. We actually do some of those at the University of Florida with respect to leveraging opportunities in clinic and outside. So these are great ideas for continuing to build momentum if you're interested in doing this. So let me go ahead and bring back where I need to be. Okay. And are you able to see "Dissemination of Your Work"? Yes. Perfect, thank you.

All right, so now let's assume you have successfully captured data using one of the methods Dr. Ma talked about. Let's figure out how to give you the greatest chance of getting that paper through the first cut in the review process, and, if it doesn't make it, what you can take from a paper rejection that can actually help you successfully resubmit. These are some pearls we've learned over the years, and I'd like to give you what I've learned as an associate editor, being on the inside of the editorial board meetings: what is really considered acceptable? If something's rejected, why? And is the reason necessarily always a bad one?

The first thing we recommend to all of our potential authors is to do some homework on the journals that are going to be the best fit. Nowadays there are so many opportunities, whether open access or otherwise, and believe it or not, open access journals are becoming extremely popular because of the speed with which you can get your work published. Yes, it costs some money, but if you have the ability to pay and you believe you are onto something really hot, this could be an opportunity to get that work out quickly. Just understand that there are some long-standing, very prestigious journals where, if you're going to submit something, it's got to be really groundbreaking or really high impact, or they're just not going to entertain it; their impact factors are incredibly high and the likelihood of getting in is very, very low. You can also check in advance the scope of the journal: does your material fit? What I mean by that is to get on the journals' websites and check out exactly what they publish. Look at the last year or so of content, see what papers they're publishing, and ask whether your content could fit there. You can also look at the papers you cited: where were they published? What journals did they use, and would yours be a similar match? Could you submit to those same journals as well? And then finally, use your colleagues for advice as well.
I personally do this all the time. If I'm caught between two or three journals, I ask for input just to see if we're on the same page or if there's a better fit. And once you've made your choice and you're preparing your paper, one of the things I really want to encourage people to do is to be transparent and balanced in your reporting. First, when you prepare your introduction, be sure that you fairly acknowledge the work in the area that's already been done. Try hard to avoid negativity: rather than saying there were inconsistent results or flaws in the data, it's much more professional to focus on the positives and say, for example, that this study can add to or improve our understanding. You can also cite references in the discussion that both agree and disagree with your data. It's very easy to find data that agree with yours, but providing the alternative view is very important and very refreshing from a reviewer's standpoint. Reporting the limitations is crucial. And finally, be careful not to overstate your findings or over-apply them: to populations that might be a stretch, to settings that go beyond what you've tested, or to interventions that don't exactly match. Just be careful, and be transparent about how you're reporting your data. I also encourage you to take a look at this PM&R article from 2020 on ten common statistical errors from all phases of research and what you can do to fix them. Kristin Sainani's work is always good, and this one is very well done, so if you want to make some simple fixes in how you look at your data, take a look at it.

Get the numbers right. I'm going to give you some clinical pearls related to that as well. To get the numbers right, first of all, we need to make sure the methods reporting is very clear, can be understood by the reader, and can be reproduced. If you need to, grab a friend or colleague from a different area and have them look at the methods to see if they can understand them, and whether they feel they could replicate the study based on what you wrote. Where possible, cite your methods. As a reviewer, if I look through the methods section and there's no citation of previous methodology, that's a problem. Where you can, cite methods that you've taken from other investigators, and be fair and transparent. With statistical reporting, just be honest and upfront about how you did your analysis. And where possible, report the effect sizes, because that gives context to the reader. What I'm pleased to see is that more papers are starting to report effect sizes. That way, even if you have a small sample size, you can get an idea of whether the effect you're seeing, whether it's a group effect or an intervention effect, is a big one or a small one, because it's easy to be statistically significant but not really clinically relevant. Be careful also not to repeat data in the text and in the tables and figures; just do one or the other and keep it simple. And in the results, be clear with your language and refer to the tables and figures. Let them tell the story, and simply point the reader in that direction.
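For a simple two-group comparison, reporting an effect size next to the p-value takes only a few lines. Here's a minimal sketch in Python using Cohen's d, with invented data (other designs call for other effect size measures):

```python
# A minimal sketch of reporting an effect size (Cohen's d) alongside the
# p-value for two independent groups. The data below are invented.
import numpy as np
from scipy import stats

control = np.array([22.1, 19.8, 25.3, 21.0, 23.4, 20.2, 24.1, 22.8])
treated = np.array([18.0, 16.5, 19.2, 17.8, 20.1, 16.9, 18.4, 19.5])

t, p = stats.ttest_ind(treated, control)

# Cohen's d: the mean difference scaled by the pooled standard deviation.
n1, n2 = len(treated), len(control)
pooled_sd = np.sqrt(((n1 - 1) * treated.var(ddof=1) +
                     (n2 - 1) * control.var(ddof=1)) / (n1 + n2 - 2))
d = (treated.mean() - control.mean()) / pooled_sd
print(f"t = {t:.2f}, p = {p:.4f}, Cohen's d = {d:.2f}")
```

Common rules of thumb treat |d| near 0.2 as small, 0.5 as medium, and 0.8 as large, which is exactly the context that lets a reader weigh statistical significance against clinical meaning.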
Statistically, some of the strongest advice I can give: be clear in the introduction and state your aims and hypotheses upfront. I can't tell you how many times I've had to ask authors to please state your hypothesis, so we know whether you actually have the appropriate statistical procedures to support your conclusions. Error reporting is extremely important. That means, at minimum, means and standard deviations and confidence intervals. If you are reporting measurement methods, you can report errors that other investigators have published; in your methods section, you might say the error or the confidence interval related to this method is such-and-such. But it's really better to report your own if you have it, to show that you are capable of doing those measurements. As part of your statistics, a simple way to satisfy a reviewer the first time around is to report whether or not your data are normally distributed, and if they're not, what you did to fix it; a minimal sketch of one way to check this follows below. There are simple things you can do, such as transformations, that allow you to make those adjustments and let the reviewer know you've thought about this and tried to make corrections in advance. That's easy and eliminates a big potential rejection issue right up front. Also state whether there's a directionality to your test. When you made your hypothesis, did you think one group was going to perform better than another, or did you just say, we think there will be differences between groups? If you give a direction to your testing and then follow it through your methods, that's really solid. That's a good way to be transparent about what you're doing. As I mentioned in the first part of the talk, please put a sample size estimate in there to show you thought about the confidence in your conclusions. That will reassure the statisticians and reviewers right up front: okay, they thought about it, we're good. And they won't question your conclusions. Where possible, report your dropouts or missed observations. You can put that in the tables or in the text of your results, simply to say, although we captured data on 50 people, three of those had unclear data on gait analyses. If that's the case, the reviewer will say, okay, so they reported 47; that's still a great number, and you were honest upfront about what was missing from the analysis. That's excellent if you can do that. And finally, once you get your statistical results, are they statistically significant, and do they also have clinical meaning? I wanna tie in Dr. Ma's point here: when you develop your question, does it have clinical relevance? Part of the homework you need to do before you set up your study and write up the methods is to understand the range of outcomes you expect to be clinically relevant. There's so much information out there now about the different surveys and methodologies to use. What is the difference you expect to see that has clinical meaning? If you've achieved that, that's fantastic. If you have statistical significance but not clinical significance, you can still tell a story and describe what you might need to do in your next study to address it; at least it gives the reviewer the insight that you've taken the time to look it up. Those are all very important issues.
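As a concrete illustration of the normality-check advice above, here is a minimal sketch using SciPy's Shapiro-Wilk test, with a log transformation as one possible fix for skewed, positive-valued data. The simulated data and the 0.05 threshold are placeholder assumptions for the example, not values from the talk.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
samples = rng.lognormal(mean=0.0, sigma=0.8, size=60)  # skewed, positive-valued data

stat, p = stats.shapiro(samples)
print(f"Shapiro-Wilk on raw data: W={stat:.3f}, p={p:.4f}")

if p < 0.05:  # evidence of non-normality
    transformed = np.log(samples)  # log transform is only valid for positive values
    stat_t, p_t = stats.shapiro(transformed)
    print(f"Shapiro-Wilk on log-transformed data: W={stat_t:.3f}, p={p_t:.4f}")
    # If the transformation fixes it, state in the methods that data were
    # log-transformed to satisfy normality assumptions before testing.
```

Running and reporting a check like this up front is exactly the kind of pre-emptive transparency that heads off a reviewer's statistics objection.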
From this point of view, we also want to avoid adding spin to the results. There's a great paper that I'm citing here; if you wanna take a look at it, it's beautifully written. I think it's easy to get caught up in the excitement about your results and want to put the most positive spin on them that you can. Unfortunately, that can also mean distortion in the results you're presenting. It's great to be excited, but temper that enthusiasm by being open and honest in how you write it up. So take a look at this article if you get the chance, and understand that how you put the spin on it changes the interpretation of the findings. In an accurate article, on the left, the methods report the pre-specified methods and any deviations from the protocol, being very honest about what you intended to do upfront and how you planned to analyze it upfront; that's what a priori stands for when you read it in papers. And the interpretation focuses on the primary analyses: what's your primary outcome? The secondaries are the extra icing on the cake, but make sure you establish upfront what your primary outcome is. The distorted article, by contrast, might change the objective and the hypothesis to match what was found, beautify the methods to make them match a little better, switch which outcome or analysis was designated upfront as the primary, and misreport the results. By misreporting we mean picking only the things that changed positively, over time or between groups, ignoring the things that weren't significantly different, or dressing up figures or images to make the data look more enticing than they really are. What that means, unfortunately, is that we end up ignoring the limitations or extrapolating to larger populations, and it gets confusing for the reader. It's better to be upfront about it; that way it's cleaner for other investigators to come behind you and add to what you've discovered. Other types of spin: be careful how we use language. What's pretty interesting is that from 1974 to 2014, there was over an 880% increase in the use of positive words or rhetoric in published abstracts. These can include phrases like, this is the first study showing, this issue is a clinical health priority, or very strong evidence. Tell it like it is; be honest. If you have effect sizes, you'll be able to say whether it's a large effect or a small effect; let the numbers do the talking. Data are not wrong or right; they are what they are. As a researcher, it's important to recognize that it's how we report them that's wrong or right. Let the data stand for themselves and you'll get cited. Be careful also of redundant publication, duplication of results, or reworking of data. It's very tempting, especially if you have a large data set to work from, to duplicate publication of the same data but kind of twist it around; don't. There's a healthy balance between optimizing how data are used and reusing data, and the difference often comes down to whether you have a small or a large data set. So say you have a small data set of 50 people, and these people all have ankle pain.
In this scenario, they develop ankle instability, and you wanna look at some functional tests and pain issues over a period of time to see how these people respond compared to healthy controls. If you have this small data set, and you take those same pieces of data, analyze them one way in one paper, and then resubmit that exact same group of people with just a different analysis on the same variables, that's not sufficient. That is not enough to get to the next tier and say, yes, that's appropriate for publication. Where we can reuse data is with very large data sets. Classic examples are the Health and Retirement Study, NHANES, which is an excellent one, and the Osteoarthritis Initiative. These hold data from thousands and thousands of patients with all different backgrounds, a host of different prognoses, comorbidities, ages, interventions, and treatments. It makes sense that you can parse those out into numerous papers. But be very careful, if you're going to use a small data set, not to overuse it, because it's easy to get caught: as people publish, previous publications come up, and you have to make sure you can justify whether this is really appropriate. So that gives you an idea of what's likely to be rejected versus what's likely to continue on to review. Stylistic issues that put a paper into the reject pile are so easy to fix. Be careful not to go too vague with the topic or drift off direction. If your text is misformatted, if the language and run-on sentences haven't been reviewed by somebody, or if the abstract is unattractive or just poorly written, that immediately puts the reviewer in a bad mindset and they're already going to start looking for errors. So make sure the abstract is crisp, clear, and to the point. If acronyms are not spelled out, if the paper's misformatted, if references are cited but not listed, or if the reference list is incomplete: do the diligence. The references are not an add-on section. Readers go to the reference section as a follow-up; they want to read what you've read, and to be inspired by what you've read as well. If it's incomplete, a reviewer is going to get frustrated. Not following journal requirements is a big no-no, and so is not proofing your paper. These are very preventable issues, and fixing them keeps your paper from getting rejected right at the start. Make sure your paper is appropriately cited; if your references are not up to date, that tells a reviewer you're not keeping up with the current literature, and that's an immediate problem. If your paper indicates there are issues with consent and assent, if there's a failure to report certain items, or if you're not stating upfront in your methods section that you've followed the Declaration of Helsinki or other appropriate ethics standards, that again is going to get you on the naughty list. And finally, know your institutional policies. If you're not aware of how to write up a case series or a case study, you could get in trouble with the journal for not reporting it appropriately. So just take the time and do it right. Take that 15 minutes, make a phone call, and make sure you're reporting correctly in your paper. If you get the dreaded rejection, understand you're not alone. All of us get rejected.
Rejection rates depend on the publisher and the journal, and can be as high as almost 90% depending on the quality of the journal. Based on how quickly the rejection came, we can learn something important pretty quickly. If it's within a couple of days, an initial filter was applied: the staff and the editors immediately checked the paper for integrity, some of the items I just mentioned on the previous screen, or they checked whether it's impactful or of interest to the readership, and maybe it's just not the right fit. You can avoid potential rejection by making sure the fit is right first, and then making sure it's impactful enough. If not, maybe try the next journal down in the impact factor tier; you might get a better chance. A quick rejection might not necessarily be about quality; it might just be about fit. Take the time to check that correctly. Now, if the rejection came after the reviewers critiqued it, there was interest, but as written, the paper had issues. First, it's okay to have your day of being sour about it; I think we've all been through that process. But you can learn some pretty important things from the rejection. If the reviewers stated the impact was too low for the journal, the topic probably didn't resonate with the readership, and maybe your question simply didn't rise to the level of interest for that particular journal; try a different one. Again, it might not be about quality; it might just be about the audience. Look for the common themes among the reviewers. If you get three reviewers back and two of the three made very consistent points, or all three made consistent points about your statistics, certain methodology, or certain limitations, address that immediately, because if you can address it, you might be able to get this thing published. The reviewers actually spend a lot of time on your paper and can provide valuable feedback. Especially if they're fresh to the work, they're a new pair of eyes that could say, you know, I really liked it, I just wish they did this. You might be able to tackle that head on, depending on whether you're given the opportunity to revise. And if you can incorporate the reviewers' suggestions, it can really enhance your chance of success, and frankly, it can make a better paper. With some rejections, though, there are fatal flaws where you're probably not gonna get the chance to resubmit, either to that journal or maybe to any journal. Just be aware you might get the occasional paper where this happens to you. First, if they tell you it doesn't add anything new to the field, or the methods weren't right, that means the homework wasn't done upfront on the first two parts of today's session. So do the work upfront to make sure you can add something new and do it correctly. If you got an inconclusive result and you didn't do a power analysis, you can't prove you had enough people, and you're probably dead in the water; a minimal power-analysis sketch follows below. So plan ahead, get the right sample, and make sure the methods are tight, practiced, and done well.
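On the power-analysis point, here is a minimal a priori sample-size sketch using statsmodels. The target effect size, alpha, and power shown are conventional placeholder choices, not values from the session, and a two-sample t-test is only one of many designs you might be powering.

```python
from statsmodels.stats.power import TTestIndPower

# A priori sample-size estimate for a two-sample t-test
analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5,  # expected (assumed) Cohen's d
                                   alpha=0.05,       # two-sided significance level
                                   power=0.80)       # desired probability of detecting the effect
print(f"Required sample size: {n_per_group:.0f} per group")  # about 64 per group for d = 0.5
```

Running a calculation like this before enrollment, and reporting it in the methods, answers the "did you have enough people?" question before a reviewer has to ask it.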
Two other things can be problematic: if you don't have a very good control group, or your control condition is not clearly spelled out, you don't have a good comparison, which means you're not gonna get a true effect of your intervention or your comparative group. And lastly, God forbid, please be careful of plagiarism or ethical violations, and I wanna share two real examples that can help prevent this from happening to you. Plagiarism refers to the lifting of concepts, ideas, or writing from previous papers. Even if it's a methodological similarity, take the time to shift your words around and make sure it's not copied verbatim from one paper to the next. You don't wanna be caught for that. Even though the methods might be the same, mix up how you describe them. A violation of ethics might be something that was never intentional, but it can be real. I caution you, if you're thinking about an intervention study, here's an excellent example. Certain IRBs might look closely at an intervention study in a vulnerable population. Let's say you have a high-risk group that could benefit from a specific kind of therapy, say a progressive degenerative disease, and you have a medication that might give those patients some benefit. If all patients could benefit, but you withhold that potential benefit from the control group, some IRB boards might view that as unethical. So what could you offer the controls in the meantime that could give them some potential benefit and still serve as a good control? Consider that as part of your study design, so when you get to the publication stage you don't end up with, oh geez, this doesn't look right, or, those poor patients were put in a condition we don't consider ethical. Yes, it's extreme, but I've seen it happen. So be aware that in your study design you might wanna think about these things: what can you do for the patients who might not get the treatment? Some practical take-home points as we wrap up today: preparation upfront on the details of paper production, and thinking about what journal might be the best fit, really does optimize your chances of making that first cut. And follow the journal directions, please; if you follow the details, you make your life so much easier. Even if you get a paper rejection, it's okay to be frustrated initially, but sometimes the reviewers can actually be quite helpful. Sometimes the comments are not, but learn what you can, take those points home, and see if you can make a better paper from it. Sometimes it turns out a whole lot better. Are there questions or comments or anything we can address before we adjourn for the night? All right. Dr. Ma, thank you so much for your time this evening, and thank you to all the attendees for being diehards and hanging with us. If we can help you in any way, let us know, folks. We appreciate your time. Good night, all. Thank you, take care. Okay. Thank you.
Video Summary
The first video covers a community session on clinical research in physiatry led by Dr. Heather Vincent and Dr. Heather Ma. Dr. Vincent discusses the importance of developing a good research question, selecting appropriate measures, and seeking input from experts. Dr. Ma focuses on establishing a research enterprise in a clinical setting, emphasizing the importance of starting with a good question, organizing research papers, and utilizing different types of clinical research studies. The session provides practical advice on writing IRB proposals, maintaining clean data collection processes, and completing studies.

The second video features Dr. Vincent providing practical tips for researchers wanting to write and publish scientific papers. She emphasizes the importance of thorough preparation, selecting the right journal, clear and transparent reporting, accurate statistical analyses, and addressing reviewer comments. Dr. Vincent also cautions against plagiarism and ethical violations and encourages researchers to carefully consider the ethics of their study design throughout the research and writing process.

The credits for the information shared in the videos go to Dr. Heather Vincent and Dr. Heather Ma for their expertise in clinical research in physiatry.
Keywords
clinical research
physiatry
research question
measures
research enterprise
clinical setting
research papers
clinical research studies
IRB proposals
data collection processes
scientific papers
journal selection