The Inaccuracies in Labeling and Scoring Vendor Risk: An Interview with Douglas Hubbard
According to Gartner, vendor risk management is:
“…the process of ensuring that the use of service providers and IT suppliers does not create an unacceptable potential for business disruption or a negative impact on business performance.”
The inherent issue with most vendor risk management platforms on the market today is that they fail at the "ensuring" part of that statement. Most have taken the path of least resistance, quickly applying vendor scores or labels intended to convey a level of security and give you peace of mind about doing business with a vendor. The inaccuracies and business risks in these methods are too numerous to name.
We recently interviewed measurement expert Douglas W. Hubbard, CEO at Hubbard Decision Research and author of How to Measure Anything in Cybersecurity Risk (Wiley, July 25, 2016). Hubbard sheds some light on the issues surrounding vendor labels and scores and offers three questions to ask before you select a vendor risk management platform.
The Wild, Wild West
Evaluating vendor risk management platforms on the market is an arduous task. Each platform assesses vendors differently, each weighs questionnaire answers differently, and each calculates risk based on those answers differently. So how can a business decide which platform truly and accurately calculates third-party vendor risk?
“It’s wild, wild west for vendor risk management right now; anybody can do VRM,” says Hubbard. “You can’t start an insurance company without an actuary, it’s against the law. Accountants have to be certified; lawyers have to pass the bar. This is not the case for cybersecurity. There are some certification methods, such as CISSP, but it’s not a requirement for building a vendor risk management platform. Some people are just making up their own scoring methods for evaluating vendor risk.”
Those scoring methods are the root of the problem, according to Hubbard, because they don’t give you the information you need to make a calculated decision on whether or not you should do business with that vendor. “What you want to know is the probability of losing X amount of dollars or more in a given period of time,” he says. “If I get some data on the risk of a particular vendor, does that help me better forecast the probability of losing say $10 million or more due to some event in the next 12 months? That’s what I should be able to do.”
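The kind of forecast Hubbard describes can be illustrated with a simple Monte Carlo simulation, the approach he advocates in his book. The sketch below is hypothetical: the event probability, median loss, and 90th-percentile loss are invented inputs, not figures from the interview, and a real model would draw them from calibrated estimates.

```python
import math
import random

random.seed(7)

def prob_loss_exceeds(event_prob, loss_median, loss_90th, threshold, trials=100_000):
    """Estimate P(total loss >= threshold) in one period via Monte Carlo.

    When an adverse event occurs, the loss is drawn from a lognormal
    distribution parameterized by a median and a 90th-percentile estimate.
    """
    mu = math.log(loss_median)
    # The 90th percentile of a standard normal sits ~1.2816 sigmas above the mean
    sigma = (math.log(loss_90th) - mu) / 1.2816
    exceed = 0
    for _ in range(trials):
        loss = random.lognormvariate(mu, sigma) if random.random() < event_prob else 0.0
        if loss >= threshold:
            exceed += 1
    return exceed / trials

# Hypothetical inputs: a 5% annual chance of a vendor-related breach,
# with losses estimated at a $2M median and a $15M 90th percentile.
p = prob_loss_exceeds(0.05, 2e6, 15e6, 10e6)
print(f"P(losing $10M or more in 12 months) ~ {p:.1%}")
```

The output is the quantity Hubbard says a risk method should actually produce: a probability of exceeding a dollar threshold in a defined time window, rather than a color or a score.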
The Problem with Labels
Many vendor risk management platforms on the market today place labels on vendors to signify a level of risk. Low, medium, high, and critical are the labels you'll typically see used when classifying vendors. The problem with this method is twofold.
First, a label is only a label and doesn’t help you make an educated decision on what to do next. “Not only do labels introduce ambiguity and make the estimates worse, they don’t even support the decisions,” says Hubbard. “People have to look at these labels and then make another level of subjective judgement about whether or not they should invest in a certain vendor.”
Second, verbal labels add errors to the estimation and decision process, which creates a placebo effect and gives people a false sense of security. “These risk analysis methods can make you believe that your estimates are better when they’re actually not,” says Hubbard. “There’s nothing statistical about it. There’s no reason to believe it improved decisions at all. There’s lots of reasons to believe it made it worse though.”
The Problem with Scores
Other vendor risk management platforms rely on a scoring system to rank vendors. Each vendor is given a score upon completion of the questionnaire. This score is designed to convey a level of risk based on a certain ranking methodology, similar to a FICO score.
“I see how these ideas have spread because they’re being inspired by an already inaccurate, broken system,” says Hubbard. “But the difference between these systems and the FICO score is that FICO does have predictive power. A credit score is actually inferred from a whole history of behaviors that paint a picture of the person, which gets reduced to a single number and that number is used to try to compute the risk of non-repayment.”
This process is entirely different from evaluating one vendor based on a set of questions. “It is possible to develop a scoring method when you look at a whole population of people and correlate it to something,” says Hubbard. “What would be difficult is looking at only individual responses and trying to weight them and add them up, without any knowledge of the thing that I’m trying to forecast. If we are not even sure what specific event is being forecast, the weights don’t even make sense.”
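Hubbard's distinction can be made concrete: a score only acquires meaning once it is checked against observed outcomes across a population. The sketch below uses entirely invented data (ten hypothetical past vendors, their assessment scores, and whether each later suffered an incident) to show the kind of correlation check a FICO-like system performs and a made-up weighting scheme never does.

```python
# Hypothetical historical data: assessment scores for past vendors, paired
# with whether each vendor later had a security incident (1) or not (0).
scores   = [82, 45, 67, 90, 30, 74, 55, 88, 40, 61]
breached = [ 0,  1,  0,  0,  1,  0,  1,  0,  1,  0]

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

r = pearson_r(scores, breached)
print(f"score-vs-breach correlation: {r:.2f}")
```

In this fabricated sample the score correlates strongly (and negatively) with breaches, so it would carry some predictive signal. The point is that without this step, performed on real outcome data, a weighted sum of questionnaire answers is just arithmetic with no known relationship to the event you are trying to forecast.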
Three Questions to Ask First
Hubbard recommends asking these three questions when evaluating vendor risk management platforms:
- Is There Evidence That It Has Some Sort of Predictive Value?
In terms of vendor security, labeling and scoring have no predictive value. “The fact that we can be fooled into believing that these estimates are accurate because we went through some type of process is a problem we have to anticipate,” says Hubbard. “If someone says, ‘Here’s a risk score for a vendor,’ what does that actually mean? You must first understand the principles behind the methodologies and get past these misconceptions.”
- What Does It Correlate To?
If two vendors have the exact same score, it should mean nothing to your organization until you determine what that score correlates to. “Does it correlate to events that are minor or central to your organization?” asks Hubbard. “Use these correlations in your own internal models to ask what the monetary risk is. If you’re relying on a label or score, you still have to do internal analysis. Conduct business intelligence that informs models that are specific to your company.”
- Am I Relying on Blind Faith?
The vendor risk management market is predicted to reach $7 billion by 2024. So, it’s understandable that new platforms are popping up every day with the promise of protecting your business against third-party risk. “You have to ask the hard questions like, ‘How do I know this works?’ ‘What’s the measurement that was done that shows this is better at estimating things than my unaided intuition?’” says Hubbard. “The problem is if we’re going to adopt a methodology to improve on our intuition, that methodology should not only show a measurable improvement, it should show a measurable improvement that is large enough to justify its cost. Insist on evidence that it’s working.”
A Better Way
At ProcessBolt, we don’t rely on labels or scores to convey vendor risk. Instead, we focus on making the vendor risk analysis process easier through automation. We’ll help you streamline your vendor risk assessments and manage your entire vendor risk landscape. Our platform replaces spreadsheets and emails with an automated, customizable workflow that includes interactive questionnaires, data-driven dashboards, and much more. Sign up for a 1-on-1 demo today.
About Douglas W. Hubbard: Douglas Hubbard is the inventor of the Applied Information Economics (AIE) method and founder of Hubbard Decision Research (HDR). He is the author of How to Measure Anything: Finding the Value of Intangibles in Business, The Failure of Risk Management: Why It’s Broken and How to Fix It, Pulse: The New Science of Harnessing Internet Buzz to Track Threats and Opportunities and his latest book, How to Measure Anything in Cybersecurity Risk (Wiley, 2016). He has sold over 100,000 copies of his books in eight different languages.