
One Care Street™ - Predictive Modeling


Predictive modeling was developed over 50 years ago and has been used in fields as disparate as archeology, air traffic control, financial services, customer service management, weather forecasting, and actuarial science.

One Care Street™ uses a predictive model based on Health Perception Science. The model takes participants' perceptions of the current state of their health and wellbeing and compares them to what those participants expect and want their health to be. When there is a large gap between those perceptual states, it has been shown that the person is at high risk of heavy use of the medical system and of being increasingly less productive in the near future (9-18 months).

At this point, an active intervention process of telephonic coaching is initiated by a One Care Street™ coach.
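
To make the idea concrete, here is a minimal illustrative sketch of how a survey-based perception gap might be turned into a call-list flag. This is not the actual One Care Street™ model or its scoring rules; the 0-10 rating scale and the threshold of 3 are assumptions chosen only for the example.

    # Illustrative only: a toy perception-gap flag, not the OCS predictive model.
    # Assumes each participant rates current and expected/desired health on a 0-10 scale.

    def perception_gap_flag(current_health: float, expected_health: float,
                            threshold: float = 3.0) -> bool:
        """Flag a participant for telephonic coaching outreach when the gap between
        expected/desired health and perceived current health is large.
        The 0-10 scale and threshold of 3 are assumptions for illustration."""
        gap = expected_health - current_health
        return gap >= threshold

    # Example: a participant who rates current health 4/10 but expects 9/10
    # would land on the call list under these assumed settings.
    print(perception_gap_flag(current_health=4, expected_health=9))  # True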

How to Compare Predictive Models: Some Questions to Ask Vendors

Identification
• What type of predictive modeling data is needed? What timeframe on the data is required for the predictive model (PM) to be valid?
• How do you know you’re finding the right people? Is there a published validation study?
• What is the sensitivity of the PM? Specificity? Is that accurate enough to fit with client needs?
• What % of the total eligible population does the PM put on the original call list?

Engagement
• What percent of the total call list and then total population are engaged in a meaningful coaching intervention?
• What is the intervention process, and what is its validity?
• What are the skill sets of the coaches?
• What percent of those coached received 1-3 sessions? 4-9 sessions? 10 and higher sessions?

Outcomes Analysis
• What was the research design? Pre/post? Randomized control/treatment group? How was the cohort established?
• Who did the analysis? Who paid for the analysis? Did an independent third party firm do the analysis? Do they have the credentials/experience to do such an analysis?
• What are the steps in the data cleanup prior to the analysis? How are outliers handled? How does the process handle the fact that people received coaching at different points in time, so that their "index date" for pre/post claims would be different? What claims-paid field was used?
• How are biases controlled so that there is confidence that the outcome was due to the intervention?
• Is there enough detail in the outcomes analysis report so that you can see how the data was cleaned and analyzed?


How One Care Street™ Compares to Other Predictive Models

A Case Study in Comparing One Care Street™ to Other Predictive Models

IDENTIFICATION questions:
• What type of predictive modeling data is needed? What timeframe on the data is required for the predictive model (PM) to be valid?
• How do I know you’re finding the right people? Where is your published validation study?
• What is the sensitivity of your PM? Specificity? Is that accurate enough to fit with our needs?
• What % of the total eligible population does your PM put on the original call list?

One Care Street™ response: Our PM requires survey data only, so you are not missing required data on new enrollees or on anyone without an identifying claims history in the previous year. Because the survey is administered annually, and unlike claims data, which typically requires 12 months of complete history on a person to be valid, our method is not constrained by a lack of claims history. We published our validation study in October 2000, with sensitivity/specificity ratings of 67%/63% respectively. We have found that this level of accuracy in knowing you are targeting the people who are about to be high-cost this year is the most essential aspect of the entire process. We have consistently demonstrated that we find the right people. Typical claims-based sensitivity does not exceed 35% in even the best models, and those models target only the top 1-5% most costly; we target the top 10-13% because we have found that 35% of the top 5% is not nearly enough to earn a first-year ROI. Once again, although it seems intuitive to target people with certain chronic diseases, or those who are obese or who smoke, such rules do not correlate well with your high-cost 10% group in the current year.
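
For readers comparing the 67%/63% figures with claims-based models, the definitions behind those ratings are the standard ones; the short sketch below shows how they are computed from a validation cohort. The counts are invented for illustration and are not data from the 2000 study.

    # Standard sensitivity/specificity calculation, shown with invented counts.
    # "Flagged" = the PM put the person on the call list;
    # "high-cost" = the person actually became high-cost in the follow-up period.

    true_pos = 67    # flagged and actually became high-cost
    false_neg = 33   # missed: not flagged but became high-cost
    true_neg = 63    # correctly left off the list
    false_pos = 37   # flagged but did not become high-cost

    sensitivity = true_pos / (true_pos + false_neg)   # share of future high-cost people found
    specificity = true_neg / (true_neg + false_pos)   # share of non-high-cost people correctly excluded

    print(f"sensitivity = {sensitivity:.0%}, specificity = {specificity:.0%}")
    # sensitivity = 67%, specificity = 63%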

ENGAGEMENT questions:
• What percent of the total call list and then total population do you engage in a meaningful coaching intervention?
• Please describe what you mean by a meaningful coaching intervention.
• What are the skill sets of the coaches?
• What percent of those coached received 1-3 sessions? 4-9 sessions? 10 and higher sessions?

One Care Street™ response: We engage 60+% of the total call list, which ends up being 7-10% of the total population actively coached. What we mean by a meaningful intervention is that the coach helps the person determine what is most contributing to their sense of not feeling well (physical symptoms; chronic condition mismanagement; stress and stress emotions; lifestyle behaviors; or basic-need issues such as food, shelter, and safety) and then works with them to develop and carry out a highly tailored health improvement plan until those factors are addressed and they are feeling and functioning at or near their expected capability level. Our coaches are all master's-prepared psychologists, social workers, health promotion specialists, or health educators. In one major client engagement, 52% of those coached received 1-3 sessions, 36% received 4-9 sessions, and 12% received 10-24 sessions, for an average of 4.5 sessions per high-risk coached person. This varies from client to client based on a number of variables, but we consistently provide a higher-than-average number of sessions per coached person due to the unique intervention model the coaches are trained to use.
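
The arithmetic connecting those percentages can be checked directly. The sketch below uses an assumed population of 10,000 and an assumed 12% call-list rate (within the 10-13% targeting range quoted above) purely to show how a 60% engagement rate lands within the stated 7-10% of the total population; none of these numbers are client data.

    # Illustration with assumed numbers, not client data.
    population = 10_000
    call_list_rate = 0.12      # assumed; OCS targets the top 10-13% of the population
    engagement_rate = 0.60     # 60+% of the call list engages in coaching

    call_list = population * call_list_rate
    coached = call_list * engagement_rate

    print(f"call list: {call_list:.0f} people")                 # 1200
    print(f"actively coached: {coached:.0f} "
          f"({coached / population:.1%} of the population)")    # 720 (7.2%)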

OUTCOMES ANALYSIS questions:
• What was the research design? Pre/post? Randomized control/treatment group? How was the cohort established?
• Who did the analysis? Who paid for the analysis? Did an independent third party firm do the analysis? Do they have the credentials/experience to do such an analysis? What are they?
• What were the steps in the data cleanup prior to the analysis? How were outliers handled? How did they handle the fact that people received the coaching at different points in time, so that their "index date" for pre/post claims would be different? What claims-paid field was used?
• How were various biases controlled for so that I have confidence the outcome was due to the intervention?
• Have you provided enough detail in your outcomes analysis report so that I can see how the data was cleaned and analyzed?

One Care Street™ response: The research design was a randomized controlled trial. The study cohort comprised everyone who had a complete OCS survey plus pre/post 11-month claims data. The analysis was completed at the Regenstrief Institute at the Indiana University School of Medicine by Dr. William Tierney, one of the premier health services researchers. The client paid for the analysis, not the vendor. The Institute's background and experience are detailed at www.regenstrief.org. The data clean-up work was comprehensive and detailed. Every potential bias that could be controlled for was controlled in the analysis prior to determining savings, primarily through the use of an equivalent control group.
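
The savings logic described above (pre/post claims for coached members set against an equivalent control group) is often expressed as a control-adjusted pre/post comparison; the sketch below shows that structure. It is not the actual Regenstrief analysis, and the per-member dollar figures are invented for illustration only.

    # Illustrative control-adjusted pre/post comparison; all dollar figures invented.
    # Mean paid claims per member for the 11-month pre and post windows,
    # each window anchored on the member's own index (coaching start) date.

    treatment_pre, treatment_post = 5_200.0, 5_600.0
    control_pre,   control_post   = 5_100.0, 6_300.0

    treatment_change = treatment_post - treatment_pre      # +400 for coached members
    control_change   = control_post - control_pre          # +1200 expected trend without coaching

    estimated_savings_per_member = control_change - treatment_change
    print(f"estimated savings per coached member: ${estimated_savings_per_member:,.0f}")  # $800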

Note: Based on work done by Dr. Richard Citrin and Dr. Julie A. Meek (2/11/2005)