Data released by the Australian Bureau of Statistics (ABS) in early November 2011 has revealed that 18.5% of people injured at work in 2009-10 received no OHS training prior to the incident.
The basic findings of the 2009-10 data are not entirely new, as a December 2010 media release showed, but the new report, “6324.0 – Work-Related Injuries, Australia, 2009-10”, does include new data on OHS training.
Most of the OHS training data is included in table 13 but other tables should not be overlooked. Table 3 shows that of those injured in 2009-10:
“82% (522,400) had received occupational health and safety training in the job prior to their work-related injury or illness occurring…”
and that 18.5% did not.
A legitimate question is “what is meant by occupational health and safety training?” Table 13 includes these categories:
“Received training in OH&S risks
- As part of a process to obtain licence/qualification
- As part of induction training
- Refresher/on-going training
- Included demonstration of safe procedures
- Involved workplace assessments
- Other OH&S training”
The efficacy of this training, or whether there is a direct cause and effect, is difficult to determine from this statistical report.
One of the downsides of this type of statistical report is that over-reliance on it may encourage OHS regulators to pursue a strategy that does not reflect reality. For instance, the ABS report states that
“Sprains/strains were also the most commonly reported work-related injury or illness sustained across the majority of industries, followed by cuts/open wounds and chronic joint or muscle conditions…”
but England’s Trades Union Congress reported in February 2011 that workplace stress is
“now by far the most common health and safety problem at work.”
Certainly trade union surveys are not representative of the national population, but they should be considered in the development of suitable safety management programs for specific industries or workplaces in the future.
With any OHS statistics there are major equivocations over their validity, but the ABS should be best placed to minimise variations and vagaries. One of the major benefits of ABS statistics is that the background data is readily downloadable in Excel or zip format, as are relatively simple explanatory notes.
The summary in the new “full” report paraphrases the 2010 data release:
“Of the 12 million people who had worked at some time in the last 12 months, 5.3% experienced a work-related injury or illness during that same period. The majority (88%) of the 640,700 people who experienced a work-related injury or illness continued to work in the job where their injury or illness occurred. Approximately 5.2% had changed jobs and the remaining 6.9% were not employed in the reference week.
More than half of people who experienced a work-related injury or illness were men (55.6%). This can be partly attributed to the nature of their work and to the fact that a larger proportion of those who worked at some time in the last 12 months were men (54%). However, even after this factor is removed, men were still more likely than women to experience a work-related injury or illness. In 2009-10, 5.5% of men who worked in the last 12 months experienced a work-related injury or illness, down from 7.4% in 2005-06. The proportion of women who experienced a work-related injury or illness in the last 12 months was the same as 2005-06, at 5.1%.”
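As a rough consistency check, the headline figures quoted above can be recomputed from the report's own numbers. This is just arithmetic on the quoted figures, not new data; the small excess over 100% in the cohort shares comes from rounding in the published percentages.

```python
# Consistency check on the ABS 2009-10 figures quoted in the summary above.
# All inputs are taken from the quoted text; nothing here is new data.

workers = 12_000_000   # people who worked at some time in the last 12 months
injured = 640_700      # experienced a work-related injury or illness

injury_rate = injured / workers * 100
print(f"Injury rate: {injury_rate:.1f}%")  # matches the reported 5.3%

# Shares of the injured cohort, as published (rounded figures)
shares = {"stayed in job": 88.0, "changed jobs": 5.2, "not employed": 6.9}
total = sum(shares.values())
print(f"Shares sum to {total:.1f}% (excess over 100% is rounding)")
```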
The production of work-related injury statistics is very important but they require translation in order to be relevant to the real world and in order to provide some guidance on what type of interventions work best. The ABS report provides the data for others to interpret. Let’s hope someone does soon.
The only statistic that can have any relevance is the Total Injury Frequency Rate (TIFR). LTI figures are typically fraught with false and misleading information. An LTI is usually a full shift off work through injury, but if you get the injured worker back to sign paperwork and do some mundane duties, then it is no longer an LTI but an MTI. Even MTIs are fraught with misinformation: a first aider administering assistance can keep the employee at work instead of seeing a doctor, so the incident becomes an FAI. So one way around it is to record TIFR and have your KPIs linked to that. The trick is to get everyone on board to report all injuries and near misses (another statistic worth noting, in particular Significant Potential Near Misses), regardless of their significance.
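The re-classification problem described above can be made concrete with a small sketch. The incident list and hours figure are invented for illustration, and the per-million-hours convention is common industry usage rather than anything defined in the ABS report.

```python
# Sketch: why an LTI-only rate can be gamed while a total rate cannot.
# Incident labels (LTI, MTI, FAI, SPNM) follow the comment above;
# the data and the 1,000,000-hour convention are illustrative assumptions.

incidents = ["LTI", "MTI", "MTI", "FAI", "FAI", "FAI", "SPNM", "SPNM"]
hours_worked = 250_000

# An LTI-only frequency rate ignores medical-treatment and first-aid cases,
# so downgrading an LTI to an MTI or FAI flatters the figure.
ltifr = incidents.count("LTI") * 1_000_000 / hours_worked

# A total injury frequency rate counts every injury category equally;
# near misses (SPNM) are tracked separately rather than counted as injuries.
injuries = [i for i in incidents if i != "SPNM"]
tifr = len(injuries) * 1_000_000 / hours_worked

print(f"LTIFR: {ltifr:.1f} per million hours")  # 4.0
print(f"TIFR:  {tifr:.1f} per million hours")   # 24.0
```

Re-labelling the single LTI as an MTI would drop the LTIFR to zero while leaving the TIFR unchanged, which is the point the comment makes about linking KPIs to the total rate.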
As I wrote above, what relevance does even TIFR have, except for one organisation to be able to say “we didn’t fail as badly as others in ensuring our employees’ safety”?
A single statistical value serves no purpose in understanding what went wrong or informing what should be done to improve outcomes.
And if ‘our’ TIFR is the smallest of any organisation in our industry, or even in the land, does that mean we don’t need to look at the detail of those incidents that contributed to that statistic?
That is true, Les, but statistics are, like it or not, used for all sorts of reasons and are often a benchmark for various objectives. No-one is disputing what you are stating. If you are getting the same type of injury from the same type of action (or inaction) then the question needs to be asked: WTF is going on? Obviously the devil is in the detail and it is necessary to get to the root cause. If my workplace had many SPNMs as opposed to LTIs, then I would be pleased, because although something nearly went wrong, it didn’t, and someone reported it. And that is the point I was making overall: giving people the confidence to report all types of incidents and injuries so that they can be investigated and root cause analysis can begin. I think we are on the same page, mate.
SPNM??
Apologies…SPNM = Significant Potential Near Miss.
The world is full of acronyms!
Without the adjunct of lead indicators to show whether anything had been done or was in place to prevent those injuries, I’m not convinced that the production of injury statistics (lag indicators), in and of themselves, is of much value. What value is there in knowing the LTIR and LTIFR rates, or even the high-incidence injury types and trends? All they do is tell us how well or badly we failed to ensure the safety of personnel.
Sure, we can benchmark one organisation against another and we can benchmark one industry against another, but in and of itself what value does that give?
Sure, regulators will use such statistics to work out ‘target’ issues and even target industries for ‘regulatory inspections’; and other organisations will use such statistics to work out where to focus resources and effort for research, but unless we can be absolutely certain that every organisation is reporting to the same standard, can we even trust the raw data that the statistics are drawn from? – are we really comparing apples with apples?
And what value do they provide the individual organisation that spends its resources on capturing and collating the data?
One organisation works hard to prevent injuries and another doesn’t; One industry has much higher risk which is more difficult to manage than another and so on.
The production of work-related injury statistics is very important, but they require translation in order to be relevant to the real world and to provide some guidance on what type of interventions work best. Also, many employers expect training to be the easy alternative: we’ve trained them, so they should now work safer.
Never ceases to amaze me how much focus is placed on the provision of training without the requisite post-training supervision to embed new knowledge and skills into existing work practices.
In my experience, training generally never changes a damn thing except the knowledge levels and skill POTENTIAL of those trained.
Unless work is done ‘at the coal face’ to ensure that things actually change, the training can be such a waste of time and money.
Embedded culture is almost as difficult to break down and change as old habits.
Too many employers expect training to be the easy option – we’ve trained them, therefore they (employees) should now work safer. But did they modify the performance expectations with employees and their supervisors to encourage implementation of the training? Usually not!
G’Day Les, You make a valid point. Training needs to be done at the coalface wherever practicable. A good example of this type of training is the implementation of a training plan (buddy system) where the new employee (either new to the business or the process) is teamed up with a highly competent and experienced operator and not just shown the ropes, but mentored and guided through each step of the procedures until such time as he can be assessed as either competent or not yet competent.
The quality of training, and that includes refresher training, is so important. But whether it directly relates to incidents and injuries cannot be truly measured. Statistics can tell many stories depending on how they are structured and displayed. Even if the training is first rate (or best practice…forgive me for using such a flippant term) there can never be a guarantee that someone won’t be injured or involved in an incident. At my workplace, we had an SPNM (significant potential near miss) where the employee had training but still failed to do the appropriate checks before using the tool that gave him an electric shock. Human behaviour can be corrected, however humans will always make a poor decision from time to time…is that a statistical fact that is recorded? I think not.
Thanks for the comment, Wayne. In light of this I would welcome your thoughts on the article I have just uploaded – http://safetyatworkblog.com/2011/11/10/small-fine-of-1250-but-important-safety-lessons/
Damn, you do good, bloody useful work with this stuff KJ. Cheers.