Budgetary pressure is nothing new to safety departments. Especially during economic slowdowns, when executives focus on cutting costs, it has always been vital for safety professionals to be able to justify their activities.
Aside from budget pressures, being able to demonstrate the value and impact of your safety programs is crucial to building momentum and support for future initiatives. Without the support of executive management, supervisors, and employees, safety programs will not reach their full, wide-ranging impact or help drive a cultural shift. One of the most effective ways to gain this kind of company-wide buy-in is to demonstrate the effectiveness of previous programs and build upon that success.
Let’s take a deeper look at the challenges that safety professionals face when trying to define success, and outline best practices to help you demonstrate the value of your company’s safety programs and initiatives.
Making Invisible Success Visible
Every safety department faces the same challenge of figuring out how to measure its positive impact. Oftentimes, safety teams are viewed as a cost of doing business with no tangible value, but we all know that’s not the case. Safety incidents are costly in their own right, and they can further damage the company’s bottom line if the public loses trust in the company. While injuries, illnesses, absences, and accidents are visible, and the negative effects of an unsafe environment are readily apparent to anyone in the company, it’s what safety professionals do behind the scenes to prevent these things from happening that often gets overlooked.
So how do you shine a light on these efforts that prevent injuries and illnesses? Much of the work that safety professionals do results in benefits that are only apparent through careful measurement and reporting. Being able to attribute a measurable impact to your efforts is critical to get buy-in for current and future safety programs and initiatives.
How to Evaluate Success
Despite the complexity of justifying your safety programs and initiatives, there are ways to define and measure the results of your work. The first step is to select an appropriate methodology. There are many ways that the impact of a project or program can be measured. Each is calculated and measured differently and often relates to a different stage of your safety program. Let’s look at two examples:
An effectiveness evaluation, in its simplest terms, determines whether the results of a specific program met the outlined objectives. The success measurements for this method identify the impacts of a program and look at the magnitude of its effect. Areas that safety professionals would look to measure include injury rates, near misses, events with significant injuries, and workers’ compensation claims. It’s important to note that a number of variables can affect the perceived effectiveness of a particular program, so safety professionals may need to dig in to learn why a program wasn’t as successful as they expected. Things like employee turnover, changing operations, and mergers and acquisitions can all influence the results.
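To make a metric like injury rates concrete: the OSHA total recordable incident rate (TRIR) normalizes incident counts against hours worked, so results can be compared before and after a program, or between sites of different sizes. The sketch below uses the standard TRIR formula; the incident counts and hours are hypothetical figures for illustration only.

```python
def trir(recordable_incidents: int, hours_worked: float) -> float:
    """OSHA total recordable incident rate: incidents per 100 full-time
    workers per year (100 workers x 2,000 hours = 200,000 hours)."""
    return recordable_incidents * 200_000 / hours_worked

# Hypothetical figures: 6 recordables over 480,000 hours worked before
# the program, 3 recordables over 500,000 hours after.
before = trir(6, 480_000)   # 2.5
after = trir(3, 500_000)    # 1.2
print(f"TRIR before: {before:.2f}, after: {after:.2f}")
```

Comparing a normalized rate rather than raw incident counts matters precisely because of the confounders noted above: if headcount or operations changed, raw counts alone can mislead.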
Executive management wants to see financial analyses, such as cost-outcome analysis, cost-benefit analysis, or cost-effectiveness analysis. All of these analyses work in much the same way, but one may be more appropriate than the others for a given program. First, you estimate the net cost of your program by defining how much it costs to implement, then subtracting the cost savings that can be attributed to the project. Determining the cost savings can be challenging, since these are typically avoided costs, so it’s important to be able to demonstrate quantitatively that the program had a direct positive impact on things such as injury rates, absenteeism, and occupational health costs.
Planning Comes First
Demonstrating program value should be a primary goal before designing the program. The requirement to justify your work has numerous knock-on effects that dictate what kind of program to implement and what to measure. If this value calculation is not top of mind during the planning phase, there’s a good chance you’ll reach the end of the program without the data you need to demonstrate to management that it was a success. Typically, for every safety program, the planning phase should cover the following:
- Define the scope: Work collaboratively – involve employees and managers to define the purpose of the program, the main questions, identify available resources, establish goals for the project, and specify a deadline.
- Organize a committee of stakeholders: Be sure to include those who will communicate results, such as managers, worker representatives, and evaluation experts. Look for members who bring different perspectives. Getting buy-in from different divisions, departments, and disciplines will ensure that your program benefits a wider segment of the organization, rather than a small niche.
- Develop models: Attempt to predict how the program will work and try to identify any outside variables that may affect the validity of your results. Work done at this stage should save you time and energy later. No one wants to redesign a program after it’s been implemented.
- Choose your evaluation criteria: As already discussed, you need to determine what the goals are and how you are going to measure the program’s impact. Consider giving a higher weighting to certain outcomes. Be careful not to make data collection too onerous, and think about scalability – this group may have buy-in, but will everyone else? It’s important that you understand what it is you are going to measure. Questions you should ask yourself and others include: “Does the program lean towards being measured in a certain way?” and “Will the results be statistically valid?” Asking these sorts of questions will help ensure that you aren’t overlooking anything when you analyze the results.
- Resources: Do you have the resources to introduce experimental design elements into your evaluation, such as control groups, random selection, and accurate pre-program measurements? Knowing the answers to these questions will help ensure that the results you get are defensible.
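One simple way the experimental design elements above pay off is a pre/post comparison against a control group – a difference-in-differences estimate. The sketch below is a minimal illustration with hypothetical incident rates; a real evaluation would also test whether the effect is statistically significant.

```python
def difference_in_differences(treat_pre: float, treat_post: float,
                              control_pre: float, control_post: float) -> float:
    """Change in the program (treatment) group minus change in the
    control group. A negative value means incidents fell more where
    the program ran than where it did not."""
    return (treat_post - treat_pre) - (control_post - control_pre)

# Hypothetical incident rates per 200,000 hours worked:
effect = difference_in_differences(
    treat_pre=3.0, treat_post=1.8,     # site that received the program
    control_pre=2.9, control_post=2.7  # comparable site that did not
)
print(effect)  # -1.0
```

Subtracting the control group’s change strips out company-wide trends (seasonality, turnover, economic conditions) that would otherwise be wrongly credited to, or held against, the program – which is exactly why control groups and accurate pre-program measurements make results defensible.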
At the end of a project, you want to be able to see whether the program, as designed, is suitable for wider application in the organization. If something in the data indicates that the design of the program should be changed, test it again. Don’t look at the data, see a positive effect, and be lulled into thinking that it will surely be replicated across the rest of the company. If there are indications that something might not be right, drill down into it and think about ways to improve the program design. This is the whole point of continuous improvement.
Hopefully, it is clear by now that there is no silver bullet when it comes to proving the value of safety programs and initiatives. Every organization has different priorities, so there is no standard definition of ‘value’ that is consistent across industries. It’s because of these shifting priorities that we have such an abundance and variety of tools available to us to plan and construct a safety program. We invite you to learn more about Cority’s safety and business intelligence solutions that can help you track, manage and measure your programs.
Ian Cohen, MS, is the Product Marketing Manager responsible for Cority’s environmental and safety solutions. Before taking this role, Cohen was Cority’s Environmental Product Manager, where he was responsible for developing Cority’s environmental compliance and data management solutions.
Before joining Cority, Cohen was an environmental specialist at Florida Power & Light Company, a NextEra Energy, Inc., company, where he led the development, implementation, and management of various environmental management systems and programs. He is well versed in the development of enterprise environmental management information systems and is a subject matter expert in corporate sustainability, including program development, annual reporting, and stakeholder communications. He has earned a Bachelor of Science degree in Biology and a Master of Science in Environmental Science, both from The University of Tennessee at Chattanooga.