Quantivate Blog

Strategic Advantage Through Key Indicators

by William Hord

June 05, 2017 11:06 am

As I work with and speak to professionals all over the country, it is interesting to see how they approach Key Indicators (“KI”), and more specifically how they use Key Performance Indicators (“KPI”) and Key Risk Indicators (“KRI”) to help strategically manage their business and its impact on their Strategic Objectives (“Objectives”). Let’s start with a couple of working definitions. (more…)

Read More

Do Your Risk Assessments Include Vulnerability and Velocity?

by William Hord

May 22, 2017 09:05 am

Nearly every Enterprise Risk Management (ERM) program I have seen, consulted on, developed, or read about follows a similar process of identifying, analyzing, responding to, and monitoring risks and opportunities. It doesn’t matter which risk framework you utilize: if you are not effectively making Vulnerability and Velocity part of that process, your results simply won’t be as accurate as they could be.


The Committee of Sponsoring Organizations of the Treadway Commission (COSO) defines Vulnerability as “the susceptibility of the entity to a risk event in terms of criteria related to the entity’s preparedness, agility, and adaptability.”

So why is using vulnerability in your risk assessment process so important?

In the absence of it, your assessment results may elevate one area over another, guiding you and your resources toward mitigation efforts that, while still prudent, may not be the best approach at that time. Introducing vulnerability into your assessment process helps determine how well you are managing your risks, which directly affects the impact should a given event occur. Understanding how vulnerable you are helps you decide where to allocate resources to maximize your effectiveness. So, when you are reviewing your assessment data and certain aspects of it seem equal, vulnerability can be the deciding factor that tips the scale.


COSO also defines Velocity as “the speed of onset or the time it takes for a risk event to manifest itself, or in other words, the time that elapses between the occurrence of an event and the point at which the company first feels its effects.”

So why is using velocity in your risk assessment process so important?

When you introduce velocity into your risk assessment process, you gain a better understanding of how fast the impacts of risks will be felt by your organization. Determining how much reaction time you potentially have to a given risk ultimately improves your overall assessment of risks and allows you to better align your prevention, mitigation, and response strategies. This means you can better prioritize your resources toward your most pressing risk needs.

Vulnerability and Velocity Make a Great Team!

As an example, consider an existing regulation with an upcoming change, where the regulation impacts multiple processes but the change may impact fewer. When assessing compliance risk, you might generate similar risk scores if the processes are assessed only for likelihood and impact. The impact may be the same based upon the fines for non-compliance, and the likelihood may be relatively similar because some controls already exist to comply with the regulation. So, in the standard two-dimensional model (Likelihood x Impact), the risks might appear to pose essentially the same amount of risk to the organization.

By adding vulnerability and velocity to your assessments, you begin to see that the processes impacted by the regulatory change carry more risk, both because of your vulnerabilities and because of the velocity imposed by how quickly the regulatory change requires compliance for those processes. In this example, you are clearly able to determine which processes to allocate resources to first and which ones can possibly wait. Although this is an overly simplified example, when vulnerability and velocity are applied across your entire risk landscape, you will begin to see a change in how your risk data is presented, which in turn means your risk mitigation strategies will be enhanced, saving time and money.
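The shift from a two-dimensional to a four-dimensional score can be sketched in a short calculation. This is purely an illustrative model, not a COSO-prescribed formula: the 1–5 rating scales and the multiplicative weighting are assumptions made for the example.

```python
# Illustrative risk-scoring sketch: two processes affected by the same
# regulation score identically on Likelihood x Impact alone, but diverge
# once Vulnerability and Velocity are factored in. The 1-5 scales and
# the multiplicative model are assumptions, not a prescribed formula.

def two_dimensional_score(likelihood, impact):
    """Standard model: Likelihood x Impact."""
    return likelihood * impact

def four_dimensional_score(likelihood, impact, vulnerability, velocity):
    """Extended model that also weighs preparedness and speed of onset."""
    return likelihood * impact * vulnerability * velocity

# Two compliance risks with identical likelihood (3) and impact (4),
# but different preparedness and speed of onset.
process_a = {"likelihood": 3, "impact": 4, "vulnerability": 4, "velocity": 5}
process_b = {"likelihood": 3, "impact": 4, "vulnerability": 2, "velocity": 2}

for name, p in (("Process A", process_a), ("Process B", process_b)):
    base = two_dimensional_score(p["likelihood"], p["impact"])
    full = four_dimensional_score(p["likelihood"], p["impact"],
                                  p["vulnerability"], p["velocity"])
    print(f"{name}: L x I = {base}, L x I x Vu x Ve = {full}")
# Both score 12 on the two-dimensional model; Process A (240) now clearly
# outranks Process B (48) once vulnerability and velocity are included.
```

However your program actually weights the extra dimensions, the point the sketch illustrates is the same: two risks that tie on likelihood and impact can separate decisively once preparedness and speed of onset are scored.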

Read More

WannaCry Ransomware

by Dan Banning

May 17, 2017 03:05 pm

By now you may have heard about the “WannaCry” ransomware attack. We have received a number of questions about this attack from our customers and want to assure you that:

  • Quantivate was not affected by the recent “WannaCry” ransomware attack.
  • All of your data is safe and unaffected.
  • None of our systems were compromised.
  • Our production environment does not rely on any Windows products and as such was not vulnerable to this attack.
  • All of our corporate Windows-based systems were up to date on their patching schedules and therefore were not vulnerable to the attack.

For more information on the “WannaCry” ransomware attack, you can visit:



If you have any additional questions or concerns, please contact Quantivate support.

Read More

Can Weak & Ineffective Controls Save You Money?

by William Hord

March 10, 2017 10:03 am

I was honored to speak last month at NAFCU’s Strategic Growth Conference about “Transforming Your ERM Program from Enterprise Risk to Enterprise Opportunities.” The topics covered were Risk Appetite Opportunity, Weak & Ineffective Control Opportunity and Effective Key Risk Indicators for Opportunity.

After the presentation, I appreciated the level of questions and comments that came from those in attendance. It was great having discussions across all three topics, but it seemed most of the questions were focused on the second topic, Weak & Ineffective Control Opportunity.

So, with that in mind, I thought it might be good to share some of the highlights related to that topic.

Got Controls?

First off, what is a Control in the context of Enterprise Risk Management (ERM)? In the simplest of terms, it is a business process mitigation activity designed to reduce or eliminate one or more risks.

As a business, you likely have hundreds or possibly thousands of controls across your organization, within every department. When those controls were first designed and put into place, the probability was very good that they were strong and highly effective. However, over time as the business changes, process changes are introduced, and with them potentially new risks. If new controls aren’t introduced, or existing ones properly re-evaluated, those controls may no longer produce the same risk mitigation as originally designed.

When evaluating process risk and controls, I can’t tell you how many times I have heard the responses “I don’t know” or “We’ve always done it that way” when I’ve asked, “Why do you do it that way?” Which brings me to my first point: if the employees responsible for completing a control or set of controls to mitigate risk don’t understand why they do it, its effectiveness is going to be less than ideal. Additionally, in the absence of a control assessment, how would risk management even begin to know that an employee can’t articulate the control’s intended mitigation, and therefore its perceived deficiency?

However, you generally only find the answer to that question and others by sitting down and assessing the controls within your organization. My second point: how often are you really evaluating your controls to determine which ones are weak and ineffective to the point that they potentially elevate your process residual risk to levels outside your established risk appetite and tolerances?

Impacting Your Bottom-Line

When you begin to effectively evaluate your controls and determine which ones are weak and ineffective you can truly begin to have a positive impact on your organization’s bottom-line. This can easily be accomplished in a couple ways:

1. Can the control be automated? If a control is still relevant to reducing risk and its deficiency is tied to a lack of understanding or lapses on the part of the employees performing it, can it be automated?

You could train and counsel the employee(s), but that takes additional time and other resources to maintain and/or improve the control’s effectiveness. If the control can be automated, a quick cost-benefit analysis can show how the overall cost of automating the control may not only improve its effectiveness but also save the organization resources and money over time.

2. Should the control be removed? If a control is still relevant to reducing risk but can’t be automated, or the cost-benefit analysis shows the ROI isn’t optimal, then can it be removed?

A quick cost-benefit analysis here may show that the time it takes to complete the control, and to continually monitor and train to maintain its effectiveness, far exceeds the benefit derived from the control. In this case, risk management can make a sound recommendation to remove the control and document its reasoning.
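A back-of-the-envelope version of that cost-benefit check might look like the following. The dollar figures, cost categories, and three-year horizon are all illustrative assumptions, not a Quantivate methodology.

```python
# Illustrative cost-benefit sketch for a weak control: compare the ongoing
# cost of performing it manually against a one-time automation cost plus a
# smaller ongoing maintenance cost. All figures, cost categories, and the
# 3-year horizon are assumptions made for the example.

def manual_cost(annual_labor, annual_training, years):
    """Ongoing cost of a manually performed control, including the
    training/counseling needed to keep it effective."""
    return (annual_labor + annual_training) * years

def automated_cost(setup, annual_maintenance, years):
    """One-time automation cost plus ongoing maintenance."""
    return setup + annual_maintenance * years

years = 3
manual = manual_cost(annual_labor=12_000, annual_training=2_000, years=years)
automated = automated_cost(setup=15_000, annual_maintenance=1_000, years=years)

print(f"Manual over {years} years:    ${manual:,}")
print(f"Automated over {years} years: ${automated:,}")
if automated < manual:
    print("Automation pays for itself over the horizon; recommend automating.")
else:
    print("ROI not optimal; evaluate whether the control should be removed.")
```

With these example numbers, manual execution costs $42,000 over three years against $18,000 for automation, so the first branch applies; swap in your own figures and horizon, and a negative result feeds directly into the removal decision in point 2.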

The Starting Line!

Several conference attendees asked me, “Where is the best place to start?” Without a full understanding of your organization, its risk management practices, and other factors, it’s tough to say. However, a baseline place to start would be as follows:

1. Review your existing Control Library;
2. Sort for your Weak and Ineffective Controls;
3. From those Controls, start with the processes that have the highest level of Residual Risk;
4. Ask the employees responsible for those Controls:
   a. “Why do you do it that way?”
   b. “Do you have ideas on how we can improve it?”
5. Begin your analysis;
6. Train;
7. Enhance;
8. Automate; and/or
9. Remove.
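Steps 1 through 3 amount to a filter-and-sort over your control library. Here is a minimal sketch; the field names, effectiveness labels, and 1–5 residual risk scale are assumptions about how a control library might be structured, not a reference to any particular system.

```python
# Minimal triage sketch for steps 1-3: filter a control library down to
# its weak and ineffective controls, then sort them so the highest
# residual risk is addressed first. Field names, rating labels, and the
# example controls are illustrative assumptions.

controls = [
    {"name": "Dual wire approval", "effectiveness": "strong",      "residual_risk": 2},
    {"name": "Manual log review",  "effectiveness": "weak",        "residual_risk": 5},
    {"name": "Quarterly recert",   "effectiveness": "ineffective", "residual_risk": 4},
    {"name": "Badge access audit", "effectiveness": "weak",        "residual_risk": 3},
]

# Step 2: keep only the weak and ineffective controls.
candidates = [c for c in controls
              if c["effectiveness"] in ("weak", "ineffective")]

# Step 3: work the highest residual risk first.
candidates.sort(key=lambda c: c["residual_risk"], reverse=True)

for c in candidates:
    print(f"{c['name']} (residual risk {c['residual_risk']})")
```

The resulting list is simply your interview queue for step 4: sit down with the owners of “Manual log review” first, then work down.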

One of the last questions I got before leaving the conference was “Do you really believe that Weak and Ineffective Controls Save You Money?”

Of course they do, but only if you are effectively assessing them on a periodic basis. Otherwise, the money, time, and resources you waste are never truly realized, and your perceived risk mitigation is simply that… a perception. When was the last time you evaluated your Controls?

Read More

ERM Strategy for Credit Unions – Podcast

by Dan Banning

October 12, 2016 09:10 am

In the latest installment of the NAFCU podcast series, “A 360 View of ERM,” Devon Lyon, Director of Education for NAFCU, asks ERM expert Bill Hord, Vice President of Enterprise Risk Management Services for Quantivate, tactical questions credit unions need to know in order to implement an ERM program structure strategically and successfully. (more…)

Read More

What Do You Really Need to Know About Zika Virus?

by Andrea Tolentino

October 05, 2016 10:10 am

Unless you have been living under a rock, you are probably familiar with the recent Zika virus outbreak that has been spreading across the globe through the bite of an infected mosquito. You may be asking yourself: ‘Does it matter?’ ‘Should I care?’ ‘What should I do to protect my organization?’ Disaster prevention, mitigation, and preparedness are some of the key roles a Business Continuity professional plays within any organization. As a result, Business Continuity managers play a critical role in safeguarding an organization and countering the risks Zika (and other infectious diseases) pose. (more…)

Read More

ERM Risk Quiz

by Dan Banning

August 08, 2016 11:08 am

With Enterprise Risk Management (ERM) getting increased attention in many organizations and across industries, it is important to understand the various parts of an ERM program and how they affect the program overall. Proper implementation of ERM can facilitate better decision-making, increase efficiency, and enhance an organization’s risk control efforts to support critical Governance, Risk, and Compliance (GRC) initiatives. Effective ERM enables management to cope with potential future events that create uncertainty and helps management respond in a manner that reduces negative outcomes. (more…)

Read More

COSO ERM-Integrated Framework Update – Now Open for Public Comment

by William Hord

June 28, 2016 01:06 pm

On June 14, 2016, COSO announced its much-anticipated update to its ERM Integrated Framework. Its press release indicated that “The update, Enterprise Risk Management — Aligning Risk with Strategy and Performance, is designed to address the needs of all organizations to improve their approach to managing new and existing risks as a way to help create, preserve, sustain and realize value.” (more…)

Read More

FFIEC Issues Statement on Cybersecurity

by William Hord

June 08, 2016 08:06 am

FFIEC Issues Statement on Safeguarding the Cybersecurity of Interbank Messaging and Payment Networks

The Federal Financial Institutions Examination Council advised financial institutions yesterday afternoon to monitor the risks associated with interbank messaging and wholesale payment networks. The advisory comes just two weeks after a malware attack on the Society for Worldwide Interbank Financial Telecommunication (SWIFT) network breached 12 banks. The FFIEC stated that “financial institutions should review risk-management practices and controls related to information technology systems and wholesale payment networks, including risk assessment; authentication, authorization and access controls; monitoring and mitigation; fraud detection; and incident response.”

If you haven’t already been assessing this process risk via your ERM program and/or your IT/GRC program, you should. Ensuring you have all the necessary controls in place to mitigate your risk and provide assurances to examiners and stakeholders is critical for such a highly utilized and trusted financial service.


Read More