Data Scrubbing and Quality Improvement; How to evaluate the quality of your QPP data and prepare for reporting February 15, 2018


Good afternoon, everyone. My name is Leila Volinsky and I'm a senior program administrator and the New England regional lead for the payment work for the New England QIN QIO. I want to apologize because I have been traveling and I am a little congested. Today's webinar will cover data scrubbing and quality improvement, focusing on how you can evaluate the quality of your QPP data and prepare for reporting. Many of you who have attended these webinars before have already seen this. We are CMS contractors, but the content in the slides was created by us. A quick overview of what we will talk about today. I will briefly touch on the 2017 data submission timeline, and then we will dive into data scrubbing and some key elements of reviewing data reports. I will not actually show you data reports. A lot of the information is protected, and oftentimes the practices do not want to share that publicly. We will just talk at a high level about the concepts you need to have in place when looking at your reports, and when you have more specific questions, know that you are more than welcome to contact us. You can send emails directly to my contact information, or reach out to the New England QIN QIO to contact your correct state representative. I will then talk about quality improvement strategies. We will touch on the PDSA cycle and root cause analysis, and talk about some concepts you should consider when doing your provider education. We have resources at the end and plenty of time for questions. Some acronyms: I will not go over all of these. You have probably seen this many times, but CMS likes to use lots of acronyms. You will hear me mention them. I typically explain what they are the first time I say them, and you will hear the acronym after that. We will do a quick poll question.
Of those of you on the line, are you planning to participate in the Quality Payment Program this year for 2017 data? If you are part of an ACO, are you reporting through the APM or as a group, or are your clinicians planning to report individually? If you could take a moment to answer that, I would love it if everyone would answer; it lets us know how everyone is doing as they plan for the next year. The poll is now closed. We will just wait to see what the results are. Not many of you answered, but we will move on to 2017 data submissions. For data submission, if you're using the CMS portal, a qualified registry, a qualified clinical data registry, or claims, you have until March 31 to submit your data to CMS. That is the final date CMS will accept your performance data. Please keep that in mind if you are waiting to aggregate reports or pull data or whatever the case may be. That timeline is shrinking quickly. You have about a month and a half left. We want to make sure you all have data submitted so that you don't get a negative payment adjustment on your Medicare Part B claims in 2019.

Of course, if you have any questions, we welcome those; we want to make sure that you are all successful. If you happen to be part of an APM or a large practice that is using the CMS Web Interface, your submission window is a little different from the other tools. You have between January 22 and March 16. That is a bit smaller. Just make sure you have the dates in mind; you only have a month left to submit data if you are using the Web Interface. What is data scrubbing? I will add a quick data point at the bottom, not in very large text. Some of you may be familiar with the term from database maintenance. We will not talk about data scrubbing in that way. This is more: how are you analyzing and reviewing the validity and accuracy of your quality data reports? Are you looking at them? Have you tied your numerator and denominator populations to make sure they are accurate to the patient population you serve? For your practice, the measure has a specific population, maybe with a disease condition. Do you know roughly how many patients you should have, and does that look accurate? We will also talk about data elements that you should follow. Practices should have in place, within the organization or within the practice itself, a data dictionary and a specified workflow that is followed and agreed upon by all members of that practice. Some different measure types and reporting tools. First, we will briefly talk about the electronic health record. If you are using an electronic health record, oftentimes that EHR can pull data for you based on the measures that you have to submit data on. Oftentimes the data has to be documented in a discrete field, so those are fields that would be checkboxes or drop-downs. Those would not be free-text notes or comment fields, because free-text notes cannot be easily combed for data to support reporting. Next, we will talk about registries or qualified clinical data registries. These work in the same manner.
The registry either takes a manual upload, or sometimes registries have interfaces into an EHR system and can automatically pull the data out of your system. The key component to consider is that this is only the data included within that registry. Only whatever you supply and submit to the registry can be supported for reporting. If you would like to report on a larger sample of patients, for patients that fit a specific disease condition, you may need to look for other reporting mechanisms. The last reporting tool we see clinicians use is a manual data report. That is often an Excel or some other spreadsheet type of format. Again, this is only the data included in that report. If you are tracking your patients manually, perhaps you are still on paper or you don't have a robust reporting system within your EHR, you would only be able to report on the patients for whom you have data in the spreadsheet or Access database or whatever tool you are using. It is a very manual process to pull that out. You would have to do filtering and different lookups. We could talk about that if that is your tool and you need help with it. Next, I want to touch on the different types of measures. Any of you who have participated in previous webinars have heard us talk about what is required in order to have full data reporting for the quality measures. Under the MIPS program you are required to have at least one outcome measure. They look for high-priority measures as well. We will start at the top. A process measure is determined by services provided to patients in a consistent clinical manner. Oftentimes those are the screening exams. Did the patient

receive a colorectal cancer screening? Did the patient receive a flu shot? For example, does the provider ensure that all patients have received their flu shot? Did this happen in order to make sure they get the care they need? The next is an outcome measure. These evaluate the health of the patient as a result of care that has already been provided to the patient. There has to be a process in place in order to achieve the outcome. It is twofold, but you are not measuring the initial output. What is the rate for patients with diabetes? Another common outcome measure that is often used by practices is the hemoglobin A1c poor control measure. That is actually an intermediate outcome measure, but you're looking at the patient's A1c based on the control and the other clinical activities that have taken place: have you been able to lower the A1c for that patient population? The next, patient experience, is what it sounds like. This is looking at feedback from patients. This is often taken as qualitative results, not as much quantitative results, but it does look to see what the patient's experience was with, maybe, the communication of a specific provider around the care plan. Or how did the patient feel about their clinician's bedside manner or how they addressed concerns? Whatever the case may be. Some practices use other surveys to pull the patient experience data. It doesn't have to be something you create. You can always pull in that data from another source. Structural measures: we do not see too many of those in the MIPS program, but they assess the characteristics of the care setting. That is looking at facility staffing and other policies around the delivery of care. An example would be: are there adequate staff in the emergency department when there is an event going on in town? Did the staff cover it adequately? This is an example of what those measures may look like.
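The manual spreadsheet reporting mentioned a moment ago, where you filter and do lookups over an Excel or Access extract, can be sketched with Python's standard csv module. This is only an illustration; the column names, values, and the idea of one row per patient are all hypothetical, not part of any CMS specification.

```python
import csv
import io

# Hypothetical manual tracking sheet: one row per patient.
SAMPLE = """patient_id,diagnosis,screening_done
1001,diabetes,yes
1002,diabetes,no
1003,hypertension,yes
1004,diabetes,yes
"""

def measure_counts(csv_text, diagnosis):
    """Filter a manual tracking sheet down to a measure's denominator
    (patients with the diagnosis) and numerator (screening documented),
    mimicking the filtering and lookups done by hand in a spreadsheet."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    denominator = [r for r in rows if r["diagnosis"] == diagnosis]
    numerator = [r for r in denominator if r["screening_done"] == "yes"]
    return len(numerator), len(denominator)

num, den = measure_counts(SAMPLE, "diabetes")
# num == 2, den == 3
```

The point is not the code itself but that a manual report can only ever see the patients someone remembered to enter into the sheet.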
Next, I want to talk about two really critical elements of your reports. The data reports that you pull really need to reflect two things: timeliness and frequency. Think about timeliness. What is the lag between the data, either the entry of that data or the data collection, and the reporting? If it is just minutes (you have a patient come in, you enter their vital signs, and you immediately report that the patient has a temperature), obviously there is no lag there. However, if you're using systems that are a little more manual, such as claims-based reporting, there is a tremendous lag between when you submit your claims data to CMS and when you are actually able to see reports on it. If you have a billing system that can report on the claims you submit more quickly, then obviously you would be able to have more timely data. For the most part, claims is a very lengthy process, and it takes about three months to get the data back into the system and be able to report on it. I have an example here: reports that depict a process two or more quarters later may not be accurate. Think about a process you put into place. Sometimes, obviously, you are monitoring to make sure it is stable and you have all the things in place you need to keep it going, but if it is after the fact, clinicians sometimes get right into the routine and stop thinking about what got them there. It is important to keep that very timely and try to keep that data fresh. Frequency: how often are you running these reports? Clinicians and staff often need to have the data in a very timely manner. You need to be asking: do I need to report these measures to the clinicians in a monthly report? If it is patient experience, it

may take a little while to get the surveys back, but that is also a great thing to have in a very frequent staff-meeting format, or a monthly all-staff-meeting format, where you can talk about and dig into the details of where certain clinicians are not performing as well, look at how to benchmark against clinicians that are performing better, and put those changes into the workflows early on, rather than waiting until the end of a year-long reporting period to see that a clinician did not have the data to support it, or that they should have looked into the workflow earlier and now it is too late to correct it. One thing we hear a lot is that clinicians obviously are very burdened with reporting and documentation. Be mindful of data overload. If you dump a 20-page report in front of a clinician on a monthly basis, you may lose them. Think strategically about what measures you are selecting and how often you present them to those clinicians or groups of administrators. Here are some quick considerations for data. What are the requirements that are known? Do you know which measures you are reporting on and the patient population? Has a data collection process been designed? Often, we hear people say, "I go into my EHR and click the button to run the report." There really isn't a process, so you do get into the place where there is data overload, because you are getting all this data in your face pretty much instantaneously and you don't really know what to do with it, how to assess it, or how to make it meaningful for your clinicians. Again, has the process been implemented for data collection, and has it been communicated? Do the clinicians know how often they will be getting the reports, or does your IT group know how often you would like to get the report? It is important to have a group discussion about the plans and the utilization of the data. Finally, how are the reports generated and how are they analyzed?
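One simple way to fight data overload is to reduce each measure to a stoplight before it reaches a clinician. Here is a minimal sketch; the 80% and 60% thresholds are purely illustrative, not CMS benchmarks, and the clinician names are made up.

```python
def stoplight(rate, green=0.80, yellow=0.60):
    """Map a performance rate (0.0 to 1.0) to a stoplight color so a
    clinician can scan a one-page summary instead of a 20-page report.
    Thresholds are illustrative; set them to your own benchmarks."""
    if rate >= green:
        return "green"
    if rate >= yellow:
        return "yellow"
    return "red"

# Hypothetical monthly summary for a small practice:
summary = {name: stoplight(rate) for name, rate in
           [("Clinician A", 0.92), ("Clinician B", 0.64), ("Clinician C", 0.41)]}
# {'Clinician A': 'green', 'Clinician B': 'yellow', 'Clinician C': 'red'}
```

A clinician who sees a red light can then ask for the detail behind it, rather than wading through every number every month.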
I talked about this a little already: make sure you have a plan in place for how you are taking your reports, how you are validating them, what the timeliness is, and how you are utilizing them from there. Some key reporting requirements you should be thinking about as you look at your patient populations in the various reporting types: review the measure specification sheets. They are a wealth of knowledge for the patient population buckets. When you look at your patient population (and I will talk in a minute about accuracy and validity of your reports), when you start to look into the population, and if you do a very in-depth validation of those patients, you might find there are errors in the actual reporting system. If you have a patient that perhaps didn't meet a specific measure's criteria because they became a hospice patient within a day of admission (this is for admitted patients), they would fall out of your population. Perhaps that wasn't documented in a way that was able to be pulled by the report, and thus the patient stayed in the population; they were only in the denominator, which was hurting your performance overall. It is important to understand what the measures are telling you about the patients that fall into the numerator, the denominator, and any measure exclusions or exceptions. I will briefly touch on what those are. Measure exclusions are patients that are completely excluded from the measure. If they don't have the diagnosis, or they became hospice within one day of admission, they would be excluded from the measure. Measure exceptions are when a specific condition takes place. Let's say for flu shots there are not enough flu vaccines because there was a statewide

shortage. That would be a measure exception. You want to make sure you document that, because the clinician should not be dinged for not administering the vaccine when they didn't even have the stock. Another would be a patient allergy. Those are oftentimes measure exceptions, especially in the vaccination measures: if the patient has an allergy, or with any of the medication measures, you would not want to give them that medication, but you want to make sure you document that the patient has an allergy, hence they didn't get this medication or immunization or whatever the case may be. Next, it is crucial to look at the current workflow to determine alignment and whether alterations are needed. As I mentioned with clinicians looking at the reports on a frequent basis, you can quickly identify issues. Let's say you have a practice of five or ten clinicians, and eight of them are performing at upwards of 80% or 100% on that measure, and you only have two down in the 40% range. You can quickly identify that the workflow for those two in the lower percentage probably does not align with what the other eight are doing. It is really important to ask: what are you doing? Does it meet the measure specifications, and how is our system set up to capture that data? If alterations are needed, let's put them in place and make sure everyone is educated on what is needed going forward. Additionally, if you are using an EHR, or potentially a qualified registry, you need to make sure you are following the specific workflow supported by that system. Usually you cannot develop workarounds, because with workarounds you might miss that crucial element of data entry that really drives getting the patient onto your report, or getting the correct patient population into the report. Connecting all the dots and working through all the systems really is crucial to make sure your populations are correct. Determine whether the data collection process is automated or manual.
I talked about that already. Where is the data coming from? Is it from your EHR or your registry? Or from a manual Excel system that you have developed for capturing patients when they come in, or whatever your other process might be? If it is automated, how often is it being run? What is the timeframe? What is the look-back? Manual is the same kind of thought. You want to make sure you pull the data frequently enough, but not too frequently, and that you have an accurate system in place. Determine the best method for displaying the data. This is not necessarily what you submit to CMS, but about getting physician buy-in and making sure your programs are in place. Having a dashboard or some sort of graphical representation of the data is often very helpful when communicating with clinicians, because if they have to dig into the specific numbers it can be really challenging for them and take a lot of time. If you can give them a sort of stoplight for how well they are performing, that is often very helpful. Just a quick look at what a measure specification may look like. This is for the hemoglobin A1c control measure. Just to quickly give you an idea, you can see there is a denominator exclusion; it is a hospice patient, and you want to make sure you include that. There are different performance-met or performance-not-met codes, which are for claims-based reporting. You need to make sure you are following this exactly the way it is laid out, entering the appropriate codes into your claims sheet, so that your patients are counted correctly in the numerator or denominator, or have an exclusion and get pulled out of your population.
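To make the numerator, denominator, exclusion, and exception pieces concrete, here is a hedged sketch of a performance-rate calculation. It assumes the common pattern where both exclusions (e.g., hospice) and exceptions (e.g., vaccine shortage, allergy) are removed from the eligible denominator; the exact arithmetic varies by measure, so always defer to the measure specification sheet.

```python
def performance_rate(numerator, denominator, exclusions=0, exceptions=0):
    """Compute a quality-measure performance rate.

    Patients with exclusions (e.g., hospice within a day of admission)
    are removed from the denominator entirely; patients with documented
    exceptions (e.g., statewide vaccine shortage, allergy) are also
    removed from the eligible population rather than counting against
    the clinician. NOTE: this is one common pattern, not a universal
    rule; check each measure's specification sheet.
    """
    eligible = denominator - exclusions - exceptions
    if eligible <= 0:
        return None  # no eligible patients; no rate to report
    return numerator / eligible

# 100 patients in the denominator, 5 hospice exclusions,
# 3 allergy exceptions, 80 with the vaccination documented:
rate = performance_rate(80, 100, exclusions=5, exceptions=3)
# rate == 80 / 92, roughly 0.87
```

This also shows why undocumented exceptions hurt you: leave the three allergies out and the same 80 patients are divided by 95 instead of 92.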

Validation. I have spent many an hour validating reports. It can be very rewarding, because you see how well your providers or clinicians are doing, but it can be really challenging, because you might find that there are errors that need to be corrected. Validity is the extent to which an instrument measures what it was designed to measure. In a report, it is the extent to which the measure being displayed actually represents what it was supposed to be measuring. As you start looking at your report, you'll want to make sure that you gather enough data to provide useful information on that particular measure. In the 2017 transition year there was the ability to report on a single measure or on a single patient. Obviously, one patient one time does not give you a lot of information about how accurate and correct your process is. If you're looking at more of a six-month, three-month, or even a single-month time frame, you can usually get a better idea of what kind of data you have. If you don't find that a single month or even three months gives you enough data, always extend your timeframe. Have more. It can take a little longer to validate that data if you have a significant number of patients in your denominator population, but it does give you an idea of how accurate you really are. You can measure it against your patient population as a whole: what does the panel look like? Clinicians in smaller practices, and even large practices, really know their patients quite well. They know roughly the size of the different patient populations, so they know about the size of their panel and the size of their diabetic population, and they can quickly say whether a number passes the gut test, whether it looks correct. Once you start running reports more frequently, you will know roughly where your clinicians are. Obviously there are monthly and quarterly variations, but you will get an idea of roughly where they are, so you can quickly identify what looks incorrect.
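That gut test can be written down as a simple plausibility check. A minimal sketch, assuming you can supply a rough expected panel size for the measure's population; the 25% tolerance is an arbitrary illustration, not a standard, so tune it to your own month-to-month variation.

```python
def gut_check(reported_denominator, expected_panel_size, tolerance=0.25):
    """Flag a report whose denominator is implausibly far from what the
    practice already knows about its panel (e.g., its diabetic
    population). Returns True if the number passes the gut test.
    Tolerance is illustrative; tune it to your own variation."""
    if expected_panel_size == 0:
        return reported_denominator == 0
    drift = abs(reported_denominator - expected_panel_size) / expected_panel_size
    return drift <= tolerance

# A clinic that knows it has roughly 400 diabetic patients:
gut_check(390, 400)   # passes the gut test
gut_check(5000, 400)  # fails; go back to the source data
```

A check like this will never prove a report is right, but it is a cheap way to catch a report that is obviously wrong before it goes to clinicians.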
And where are your reports being generated? Going back to the sources (you will hear me say "sources" many times), it is important to know where your data is coming from. If you really like to dig into the details, you can look into stored procedures, which will show you if data is being stored in the wrong area for a given measure or is inaccurate, and you can go back and say, "this does not look quite right" or "I think this is wrong." It is really important to vocalize concerns about where you think the data might be incorrect, and get your EHR vendor or registry vendor to start looking into that and making corrections for you. This can be very time sensitive, so it is important to look at those early on. Who collects the data? Is it one individual or multiple? It is important to have processes in place so that multiple individuals can evaluate the same measure, and we can see if we all got the same data. If you are concerned about the validity of your reports, it may be helpful to have more than a single person take a look and say, "this is correct," or "this is why this patient did not fall in," because sometimes we get mired in the details and we might forget that this measure had a certain exclusion. Having someone else with fresh eyes look at it can really help inform your process. Again, you will also hear me say "workflow" multiple times today. Is a specific workflow in place to ensure data collection? We have heard many times, in working with clinicians, especially with EHR reporting, that they follow a specific workflow for documenting a given patient's care, the different procedures that were done, and the follow-up that was done; however, when we dig into where that is actually being documented, we find that they were documenting in the wrong field, and it was a workaround that was generated years ago, maybe before a process was even in place, but it has really

gotten stuck in the routine of that practice. When they go to report, they find, "I thought I was doing this process exactly right, but when I get the data, it doesn't at all reflect how I think I should be performing." An example of this was not related to quality measures, but going back to the advancing care information measures, specifically the one that often causes tremendous upset for many people: the summary of care measure, sending the summary of care for referrals and transitions of care. Oftentimes it has to be done electronically, so sometimes clinicians document their referrals in a way that sends a message to maybe a front office staff member to complete the referral or do an insurance validation. It looked like a referral, but the patients were being transitioned from practice to practice within the system, so the specific practice thought they were doing everything right, but when they got the report they realized they had a huge denominator. Luckily, CMS implemented an exclusion, but if you have a practice with 300 of these, they are way over the threshold for being included, and because of the way they documented, there was really no way to retroactively pull those back. They had entered referral records on all their patients, so the system saw all of those as having been referred to other settings of care. They had all been counted, and in this example they had very few in the numerator. The performance was very low. They would have met the requirements, but you have to be looking at your data and asking: is this correct? Does it support the measures or processes or whatever I'm reporting on? Do the gut test. If you look at it and you see 300, and you know that there is no way 300 patients have been referred out of the practice, go back and look at the source data. It is likely inaccurate or incorrect, but the source also comes from that workflow. Is the workflow consistent, and is the workflow accurate to the measure specification? Checking for accuracy.
We talked about this a little with validity and everything else. Does the data accurately represent the population being measured? Consider the sources. I'm saying the same things over and over again, but this is getting to the point of where you need to be. Is all of the necessary data available? That is also a crucial element. If the data is not there, even with an electronic reporting system such as a registry or EHR, you are not going to be able to pull it, and you will not get credit for the good work being done and the care given to the patients. Again, talking about timeliness and frequency: consider the timeframe you are being measured for. Were there changes to the workflow? If you ran a report from January to March of this year, but in June you implemented a change to the workflow, the earlier reports will not demonstrate the change that was implemented. You need to run another report afterwards to make sure you are capturing those changes. Or, if you see a tremendous decline in measure performance, consider how your change might have affected the reporting structure. Did you follow what was required and what is supported by your electronic reporting system to make sure you are gathering the necessary data? Talking about data dictionaries: oftentimes clinicians have different ideas about what is being measured. They all have different thoughts on the various elements of care. Utilize a data dictionary that is consistent across the organization and has been agreed upon. Having data that is structured helps the reporting process run much more smoothly. Has the measure specification sheet been followed? As I showed you with the A1c measure, it is crucial to follow it. Was the patient in the numerator or denominator? What care did they receive? Where are the lab results? Look at those elements to track it down and look at the path to make sure the patients are reported accordingly. Are the numerator and denominator populations aligned with what

you know to be true in your patient population? If you look at it, and let's say you only have 10,000 patients in your entire practice and you have 5,000 of them in a given measure, the report is not pulling the correct patients. Again, consider reviewing all the data to ensure it is complete. If you submit data to a registry via a manual upload process, or even if it is pulled through an interface, make sure you are evaluating the correct data. Sometimes there is a lag in how the systems pull the data from your system or from your upload. Make sure you are looking at the most accurate and up-to-date source file, because recent records may not have yet been incorporated into the system. Consistency. I just touched on that, but are the same measures in the same practices being collected the same way? Again, clinicians across multiple practices often have different interpretations of how to do their workflow. A great thing about electronic systems, and even paper-based systems, is they allow for individuality. If your clinician likes to do their depression screening first, before vitals are taken, does your system allow for that, or do they have to click through multiple screens to enter the data, which then makes it more complex to get back in to do the vital signs? Think about how clinicians interpret the various measures. Do they all have the same belief and understanding of how to order follow-up if a patient is identified as having a BMI outside of the normal range? Or if the patient is a smoker or tobacco user, do they follow the same process and the same method for ordering follow-up and ordering cessation counseling? Really have consistent workflows as a policy in place, because then you will make sure you are comparing apples to apples. Again, consider the possible variations. I just mentioned that with the documentation tools. We talk to many ACOs. I work out of the Massachusetts office primarily, and there are ACOs that have practices with all different EHR vendors.
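The earlier point about reports spanning a workflow change can be sketched directly: split the records at the change date so the old workflow's numbers are never mixed with the new one's. The record layout here (a list of visit-date and measure-met pairs) is a made-up illustration.

```python
from datetime import date

def split_by_change(records, change_date):
    """Split (visit_date, met_measure) records at a workflow-change date,
    so a report run over the old workflow is not mistaken for evidence
    about the new one. Returns (rate_before, rate_after); a rate is None
    if that side has no records."""
    before = [met for d, met in records if d < change_date]
    after = [met for d, met in records if d >= change_date]

    def rate(vals):
        return sum(vals) / len(vals) if vals else None

    return rate(before), rate(after)

# Hypothetical visits around a June 2018 workflow change:
records = [
    (date(2018, 2, 1), False),
    (date(2018, 3, 1), False),
    (date(2018, 7, 1), True),   # after the change
    (date(2018, 8, 1), True),
]
before_rate, after_rate = split_by_change(records, date(2018, 6, 1))
# before_rate == 0.0, after_rate == 1.0
```

A January-to-March report here would show 0% and say nothing at all about the workflow you actually run today.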
Again, that can cause differences in the data being reported, even on the same measure, because each EHR just pulls a little differently. It is important to compare like sources and like data. Again, is there an opportunity to achieve standardization? Don't take away your clinicians' ability to be individuals and have their own workflow, but try to make sure there is a standardized, recognized, and approved process across as many settings of care as possible, because it will definitely make your reporting much easier. We talked about this already, but complete and thorough documentation. Consider three things. What is the population being measured? What is your total patient population related to a specific diagnosis? What is the element of care being measured? Is this related to a medication being administered? Is diagnostic testing being ordered? And again, the outcomes: what was the outcome of all these processes? And is all of the data displayed on the reports? Do the gut check. Does it look correct, like what you think it should be, or do you see that there are some gaps, or is it clearly inaccurate? Obviously, these are things I'm bringing up again and again. Hopefully you can see where the data is on your report and how workflow really does determine how accurate and valid your data may be. I will touch on supporting documentation quickly. It doesn't quite align with what we are talking about, but it relates to your reports. CMS has plans to audit clinicians, and they do reserve the right to audit for up to 10 years after data submission. In the final rules, if you are familiar with them, there are definitely

provisions for randomly selecting eligible clinicians and groups on a yearly basis to perform audits. There are plans for CMS to start doing this. I don't know if it will be in 2018, but it is actually in the rule. With all that in mind, it is really crucial for practices and clinicians to have the data available and the documentation to support what they have submitted. If you are submitting quality measures, or other measures for improvement activities, make sure you have the data report that you are pulling the numerator and denominator from, along with whatever policies you follow and whatever that workflow for documentation is, so that when CMS says, "we are auditing this specific measure," you can quickly and easily pull the documentation packet and submit it back to them for evaluation. I will quickly talk about the quality improvement strategies, but I'm thinking I actually want to open up the phone lines and the chat to see if there are any questions. If you have a question, press pound six to ask over the phone, or you can type it into the chat. I'm not seeing anything, so I will leave it open for three minutes. There is one: if we report on claims and submit our ACI and IA data on the QPP portal, will that be sufficient? Great question. Absolutely. If you submit your quality data via claims, that would cover the quality category, and you do have the option to do manual attestations, or, if you have a report, you can submit it for advancing care information and improvement activities on the QPP portal. You would be doing great and cover all of your bases. >> Quality improvement strategies. The first: I really like the PDSA cycle. Many of you are probably familiar with PDSA if you have been in quality for more than five minutes. It was created many years ago and was initially the plan, do, check, act cycle, but it is a quick way of implementing small quality improvements into a workflow. In the cycle, when you plan, obviously you are looking at identifying a problem.
What are you having challenges with in your practice? Is it around patients not receiving a certain intervention, or is it about the check-in or checkout process? What is going on? You can identify your problem and plan for possible interventions. You really want to make sure you are developing objective measures that can easily be measured and monitored, to identify when you hit success and when you may need to change that process again to get the success you are looking for. As you go into the second part of the process, you are starting to implement the small test of change. When you look at the possible interventions in the plan step, you implement them into your workflow process, but you are not boiling the ocean. You're looking at a small change. Maybe we hand the patient a paper now in terms of getting them logged into the portal; maybe now we walk them through it at checkout and give them the token. For the study portion of the cycle, you are starting to evaluate the data coming in from whatever you did in your intervention, and you are measuring it against the objectives and measures to see if there was an improvement. Again, this is a small change. You're looking at a short period of time to see if the intervention you deployed really did have the impact that you want. Sometimes the short period of time has to be elongated to make sure you have adequate data. If you have one point in time, typically that is not indicative of an actual change. You need to have multiple points to see how you're performing and see if it does meet your needs and what you want to get accomplished.
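That study-step rule (one point in time is not indicative of a change; you need multiple points) can be sketched as a tiny decision helper. The minimum of three points and the all-points-above-baseline rule are my illustrative assumptions, not part of the PDSA method itself.

```python
def study_step(baseline_rate, post_points, min_points=3):
    """PDSA 'study' sketch: refuse to judge an intervention on a single
    data point, and only recommend adopting the change when every
    post-intervention point beats the baseline. The thresholds here are
    illustrative assumptions, not a formal rule."""
    if len(post_points) < min_points:
        return "collect more data"
    if all(p > baseline_rate for p in post_points):
        return "adopt"
    return "revise the intervention"

study_step(0.55, [0.70])               # 'collect more data'
study_step(0.55, [0.70, 0.68, 0.72])   # 'adopt'
study_step(0.55, [0.70, 0.50, 0.72])   # 'revise the intervention'
```

Even this toy version encodes the key habit: the act step waits until the study step has enough points to trust.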

From there we have the act step. You take the three prior steps and adopt the new workflow, process, and plan into the next steps of your whole workflow. If it was one small thing of giving the patient a paper initially, and now you have a discussion with them, then you implement that across the board and make sure it is set and stable in your process before you move on and implement a new intervention. You really don't want to have multiple things going at once, because then it is really difficult to say: was this the reason we got success and improvement, or was this the reason? You need to have one measure that you are looking at initially and then move on from there. I will admit I know that is not always possible. Practices and organizations have multiple competing priorities, and they often have multiple quality improvement and process improvement activities going on at the same time, but a PDSA cycle should be one thing. It is meant to be a small test of change before rolling it out to larger, more broad workflows. Workflow analysis. I love flowcharts. I love looking at a workflow. I cannot read this one, it is very small, but it is from hemoglobin A1c control. When you look at workflow, there are two things in workflow analysis. You start by looking at how your workflow currently is, at this moment. It may not be your ideal, but how are the staff or clinicians going through the given process right at this moment in time? Map out the current workflow, and then evaluate the workarounds and any bottlenecks. Where do you see issues? Where is the breakdown? Where are the complexities that need to have staff added or new documentation tools implemented? Capture data to support what you have found and get some idea of where you can have improvement. Then you make the workflow change and start identifying what your ideal workflow is. Maybe right now not everything is going as it should be, but what does success look like, and how do you map that out?
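The study step of a small test of change like the portal-token example can be evaluated with a quick script. This is purely illustrative; the weekly enrollment rates and the 5-point improvement threshold are made up, but it captures the point that a single data point is not indicative of an actual change:

```python
# Illustrative sketch of the PDSA "study" step: decide whether to
# adopt a small test of change by comparing measurements before and
# after the intervention, requiring several data points per period.
# The rates and thresholds below are hypothetical.

def improved(baseline, after, min_points=3, min_gain=0.05):
    """Adopt the change only with enough data and a real mean gain."""
    if len(baseline) < min_points or len(after) < min_points:
        return False  # one point in time is not enough to judge
    mean = lambda xs: sum(xs) / len(xs)
    return mean(after) - mean(baseline) >= min_gain

# Hypothetical weekly portal-enrollment rates before and after handing
# patients a login token at checkout.
baseline = [0.40, 0.42, 0.38]
after = [0.55, 0.60, 0.58]
print("adopt change:", improved(baseline, after))
```

If the gain does not clear the threshold, you loop back to the plan step and try a different small intervention, which is exactly the cycle described above.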
Once you map it out, you would probably want to use a PDSA cycle to implement the intervention to change the workflow: start educating your staff on how the new workflow will go and on any new documentation that is needed. It is really important to see how you are doing and how your staff are doing, and to engage them in the conversation. I was at a CMS quality conference and there were two key elements that were highlighted. One was having a clear patient voice in your processes or workflows and all that you do, and the other was making sure the clinicians are included in the decision-making. If clinicians are not included, oftentimes the process is much more challenging and it results in burnout and dissatisfaction, so really try to make sure the clinicians are engaged and that it meets their needs. Root cause analysis is another nice tool that can be a quick process and can also be used along with a PDSA cycle or a workflow analysis tool. Initially you define the event or issue you are evaluating. Next, gather the data: look at all the elements that are part of whatever the event or issue is. Then look at the causal factors and any gaps and deficiencies of knowledge. You want to see if this is a workflow that wasn't followed exactly as it should have been, or if there is a knowledge deficit that needs to be addressed. This could be a great time to complete interviews with staff and get into the workflow analysis: where did things go wrong, or what do you think the issues could be? Having staff tell you what they think may be wrong can often help tremendously in identifying options for improvement. Next, identify the root cause: look for reasons why the process might not have worked exactly as it should have, and where the failures may have been.
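The root cause analysis steps just described, define the event, gather the data, list causal factors, and drill down to a root cause, can be captured in a simple record structure. Everything in this sketch is hypothetical content, not a real incident or a required format:

```python
# Lightweight sketch of a root cause analysis record: define the
# event, collect causal factors, and walk a "why?" chain down to the
# root cause. All content here is hypothetical.

from dataclasses import dataclass, field

@dataclass
class RootCauseAnalysis:
    event: str
    causal_factors: list = field(default_factory=list)
    whys: list = field(default_factory=list)  # each answer to "why?"

    def root_cause(self):
        # The deepest "why" answered is treated as the root cause.
        return self.whys[-1] if self.whys else None

rca = RootCauseAnalysis(event="Flu immunization status not documented")
rca.causal_factors += ["new EHR template", "staff unaware of field"]
rca.whys += [
    "Field was skipped at check-in",
    "Check-in workflow never included the question",
    "Workflow was not updated when the measure was added",  # root cause
]
print(rca.root_cause())
```

From the root cause you would then pick one small corrective action and run it through a PDSA cycle, as the next section describes.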

Maybe it wasn't anything to do with one staff member, or a group of staff members, or part of the process, but more of an overall system-level issue where something is not linking correctly or is not working out quite right. Consider all of those as you move on to the next steps. Once you identify this, consider what might work in terms of corrective action. This is a great time to do a PDSA, because oftentimes you don't want to go in with guns blazing and change a whole system of things. You want to go with one small change and see how that works, then implement another small change and see how that works. Next you will implement and monitor: put the changes in place and begin to monitor. If you see success from what you did, you can move on with the process. If you evaluate it and see that you did not, in fact, reach the improvement you wanted, you can start the whole process again and identify the appropriate corrective action to take to improve the situation. I would be remiss if I did not talk about provider education. Obviously, all of this has to do with providers and clinicians and how we work with them in the process of education. Thinking about this: what are the measures that are important to the providers and clinicians? Having the clinician voice matters; we know there are multiple competing priorities and that the systems are often very complex. Knowing what is really meaningful to the provider and the patient can help inform the process tremendously. As I mentioned earlier, how is your performance data being calculated and presented? Do you hand them a pile of reports and say here is your data for the month, which I'm sure none of you do, or do you hand them a graphical representation of their performance that clearly shows they are doing well on some measures and not so much on others, and talk about potentially adjusting workflows or corrective actions or interventions?
Again, as I asked earlier: are there multiple competing quality improvement projects underway? This is where it can get very confusing and cumbersome for the clinicians, because they are being asked to check the boxes and document in the templates. Take into account what is being asked of the clinician now and where you need their improvement to go; when you set up the data reports, it may be a simple thing to say, can you add this to whatever it may be. Do you have provider and clinician education sessions that accommodate varying shifts and schedules? We know clinicians oftentimes have a lot of trouble getting to education sessions that take place during clinical hours, or right at the start or end of the clinical day, because they are trying to get their notes done and their patients ready to go. To accommodate their schedules and different shifts, sometimes having nighttime meetings or very early meetings can be a useful tip. I will open it up for questions. Those of you who have been on my presentations before probably know that I always have pictures of my dogs. We have added a new family member; the one in the middle is our newest addition. He is a baby pug. But I welcome your questions. I see some in the chat which I will get to. At the end of the session, can you address, for those of us who have reported 2017 on the QPP site utilizing our EMR, why under the improvement activities score it's not calculating correctly for those of us who have a small practice banner?

I just heard about this a couple of days ago from CMS. They are working on getting the algorithm correct to give small practices the appropriate number of points for the improvement activities category. For those of you not sure what she is referring to: small practices have to report on fewer improvement activities in order to receive full credit in that performance category. The way the CMS portal is currently laid out, even if you are identified as a small practice clinician, it is not appropriately giving you full credit if you performed one high-weighted activity. I know they are actively working on a fix, and I hope it is in place in the next week. If you do not see anything, please let us know, but I do know they are working on correcting that. Could you please post the link for the slides again? When will CMS let us know what incentive we may get with an 80 score? Another great question. We are still in the active data submission period, which ends March 31. Once data submission has ended, CMS will pull all of the data and start doing performance evaluations. I know the target is to have performance reports available sometime in the summer. I would say it is in the time between July and September that you would know where your performance got you for the positive payment adjustments, and obviously those payment adjustments would be made on 2019 Medicare Part B claims. I know we need to engage and involve our partners, but how do you do this in measure areas that they do not feel are relevant to their specialty? Great question. I think that is the million-dollar question: how do we get providers engaged when it is difficult? Do you mind sharing which specialty clinicians you work with? I will say CMS is looking into ways to add more specialty-specific measures to be more appropriate to a given provider group. That doesn't happen overnight.
Obviously, there are different areas they are focusing on, and I just want to say, for clinicians who feel they are underrepresented in terms of measures: CMS really welcomes clinicians and clinician specialists to recommend measures. Work with your specialty society, or whomever has evidence-based practice in place, to make measure recommendations, because CMS does welcome those and wants to make sure there are measures appropriate for all clinicians. Lucy mentioned urology. It has a very small subset of measures. I would say quality is going to be tricky, because those measures rarely apply to them oftentimes. There are cross-cutting measures, though, that apply to any clinician with any practice type. Specialists might say, I don't do this or it is not appropriate, but in fact they should be doing it. Things like assessing patients for tobacco use: all providers should be assessing patients for tobacco use and providing cessation counseling. Another that has been a hot topic this year has been influenza immunizations. While you don't necessarily have to have clinicians providing the influenza immunizations to patients, you do need to at least ask if they received it or not and document it in your system. You can also tell patients to go to a local pharmacy and receive it; if you document that in your system, you would get credit for it. Another that is great, which a lot of clinicians do but sometimes slips through the cracks, is documentation of current medications in the chart. That is obviously one that pretty much every clinician would do, asking patients about medications and following up. Another is advance care planning, where you document that the patient has a healthcare proxy or a living will, so that staff can get information about how the patient would like to be cared for should they be in a situation where they need end-of-life care. I know that is a generic answer, but there are measures that do cover a lot of specialties even if they are not specific to the group. I will say these are general care measures that all patients and providers should be talking about. Just to add to the 80-point score question, the comment is that the clinician will receive the 4% plus a minimum of 0.5% for the exceptional bonus program, and the higher the score, the more the bonus. That is correct. The exceptional performance bonus is over five years. It is a bucket of $500 million that CMS has set aside, so about $100 million per year. If you hit that score of 70+, you would be eligible for an exceptional performer bonus. As Christine mentioned, if you are in the 80-point score range you would receive more of the exceptional performer bonus. It is a calculation based on your total score and total claims. There is an algorithm that takes place, so it varies. It also depends on how many clinicians qualify for the exceptional performer bonus and how the money gets allocated. More to come on that; we don't know how it will all shake out once data submission has been completed. How would we see the performance bonus? I believe you will not necessarily see a line item for the bonus; you will just see it on your claims coming back. I don't know that CMS will even note that it is your performance bonus. It will just be a slightly higher rate. Any other questions? For those of you who have questions, or do not necessarily want to ask them now, we have some resources. This is our New England QIN QIO page, and we have a link to resources there. You can also ask direct questions. These are CMS tools. If you have a question you would like to ask me directly, please feel free to email me.
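Since the speaker notes the exact allocation algorithm is not yet known, the following is a purely hypothetical sketch, not CMS policy. It only illustrates the two facts from the discussion: eligibility starts at a final score of 70, and a higher score earns a larger bonus. The 0.5% floor echoes the chat comment, while the 10% ceiling and the linear scaling are invented for illustration:

```python
# Hypothetical sketch only -- NOT the CMS algorithm (which was not yet
# published at the time of this webinar). Eligibility starts at a
# final score of 70, and higher scores earn a larger bonus; the 0.5%
# floor comes from the chat comment, while the 10% ceiling and linear
# scaling are invented for illustration.

EXCEPTIONAL_THRESHOLD = 70  # final-score points needed for the bonus

def exceptional_bonus_pct(final_score, min_pct=0.5, max_pct=10.0):
    """Return an illustrative extra payment-adjustment percentage."""
    if final_score < EXCEPTIONAL_THRESHOLD:
        return 0.0  # below threshold: no exceptional-performance bonus
    frac = (final_score - EXCEPTIONAL_THRESHOLD) / (100 - EXCEPTIONAL_THRESHOLD)
    return min_pct + frac * (max_pct - min_pct)

for score in (65, 70, 80, 100):
    print(score, "->", round(exceptional_bonus_pct(score), 2), "%")
```

In reality, as the speaker says, the actual amounts will depend on how many clinicians qualify and how the $500 million pool gets allocated.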
I also want to provide you with the PDSA resources; I just put that link into the chat. You may email me. Do you think the general care measures will have benchmarks in the higher percentiles, making lesser-used measures more desirable for higher points? That is a great question. You're right that the measures I mentioned are getting toward the topped-out point. CMS has identified topped-out measures and is doing a four-year phaseout for them; they are specific to specialties and I cannot recall which ones they are for this year. But those measures, the tobacco use and cessation counseling, advance care planning, and medication management, because those are really cross-cutting measures and there are not a lot of them, I don't see them getting topped out and removed anytime soon, even though typically clinicians are performing very well on them. There will obviously be changes to the benchmarks in coming years based on how clinicians perform this year and next year. I can't speak to what the actual performance thresholds will be, but it is up to you and your clinicians and what you think works best. It is important to make sure the measures are meaningful and that you can perform well on them. You could still report and do well, even if you are reporting on commonly used measures.

We have just a couple of minutes left for one last question. I did want to mention we also have the Patient and Family Advisory Council, which is great to have if you want a tool evaluated. Our group is great at giving feedback on workflows that might or might not make sense for patients. Feel free to reach out to us; we would be more than happy to have them assist you in your practice and quality improvement. We also have some social media tools. We are on Facebook, LinkedIn, YouTube and Twitter if you would like to look us up; we have made different materials available there, and you can see what events we are going to. We have also put out printed reference materials, which are also great resources to check out. We have two minutes left. If you have other questions, please feel free to type them into the chat. We will be sending out a follow-up email in the next day or so with a link to the various materials. I apologize that the link is not working right now, but we will make sure you get those. Hang tight if you are on the registration list and you will get that in a day or so. Thank you all for participating today. If you have any questions, please feel free to reach out to me. We're here to help you and to make sure you're successful. Have a great afternoon.