<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[A safer barrier]]></title><description><![CDATA[A safer barrier]]></description><link>https://blog.resilium.group/</link><image><url>https://blog.resilium.group/favicon.png</url><title>A safer barrier</title><link>https://blog.resilium.group/</link></image><generator>Ghost 5.70</generator><lastBuildDate>Sun, 12 Oct 2025 18:57:20 GMT</lastBuildDate><atom:link href="https://blog.resilium.group/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Fixing a bowtie with multiple levels of detail]]></title><description><![CDATA[<p>Below is a small screencast of our <a href="https://resilium.group/academy/online-bowtie-practitioner/?ref=blog.resilium.group">Online bowtie practitioner course</a>, in which we did a bowtie on employee kidnapping, and came across an interesting issue. We had barriers with multiple levels of detail. 
But ideally, the barriers in a bowtie are on the same level of detail.</p><p>It&apos;</p>]]></description><link>https://blog.resilium.group/fixing-a-bowtie-with-multiple-levels-of-detail/</link><guid isPermaLink="false">6359ab8889af78298e202fd2</guid><dc:creator><![CDATA[Alex de Ruijter]]></dc:creator><pubDate>Thu, 11 Nov 2021 15:36:55 GMT</pubDate><media:content url="https://blog.resilium.group/content/images/2022/10/Employee-kidnapping-snapshot.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://blog.resilium.group/content/images/2022/10/Employee-kidnapping-snapshot.jpg" alt="Fixing a bowtie with multiple levels of detail"><p>Below is a small screencast of our <a href="https://resilium.group/academy/online-bowtie-practitioner/?ref=blog.resilium.group">Online bowtie practitioner course</a>, in which we did a bowtie on employee kidnapping, and came across an interesting issue. We had barriers with multiple levels of detail. But ideally, the barriers in a bowtie are on the same level of detail.</p><p>It&apos;s not a hard requirement to have the same level of detail, but it makes the bowtie cleaner, easier to read and, if you choose the right level of detail, more specific. In this case, we fixed the bowtie by moving the more general barrier into an underlying activity. Watch the clip below to see what I&apos;m talking about:</p><figure class="kg-card kg-embed-card"><iframe width="200" height="113" src="https://www.youtube.com/embed/QInG-BHgDqg?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe></figure>]]></content:encoded></item><item><title><![CDATA[Incident analysis only tells you half of the story]]></title><description><![CDATA[<p>A study described in the book Controlling the Controllable (Groeneweg, 2002, p. 88-89, experiment 2) looked at the ability of incident analysts to distinguish relevant from irrelevant information. 
The results are intriguing and may put our own incident analyses into perspective.</p><h2 id="the-experiment">The experiment</h2><p>Participants were divided into two main groups:</p>]]></description><link>https://blog.resilium.group/incident-analysts-only-tell-you-half-the-story/</link><guid isPermaLink="false">6359ab8889af78298e202fd1</guid><dc:creator><![CDATA[Jasper Smit]]></dc:creator><pubDate>Thu, 22 Jul 2021 14:21:53 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1502113547033-10210f07b370?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDUyfHxoYWxmJTIwbW9vbnxlbnwwfHx8fDE2MjY5NjM5OTQ&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1502113547033-10210f07b370?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDUyfHxoYWxmJTIwbW9vbnxlbnwwfHx8fDE2MjY5NjM5OTQ&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" alt="Incident analysis only tells you half of the story"><p>A study described in the book Controlling the Controllable (Groeneweg, 2002, p. 88-89, experiment 2) looked at the ability of incident analysts to distinguish relevant from irrelevant information. The results are intriguing and may put our own incident analyses into perspective.</p><h2 id="the-experiment">The experiment</h2><p>Participants were divided into two main groups: a &#x2018;trained&#x2019; group that received instructions on incident analysis using fault trees (N=15) and a group that received no instructions (N=15). Each participant was given a set of 128 event descriptions related to a fictional incident concerning a ferry. 107 of these events were irrelevant to the causation of the incident and 21 events were relevant. The participants had to select the relevant events.</p><p>The original study also included a small group of experienced investigators. For these, no statistical differences were found.
This may have been due to the small group size. For reasons of brevity, I have omitted that part of the study.</p><h2 id="the-results">The results</h2><p>In the untrained group, on average, 8.2 relevant events were correctly identified. This means that about 60% of the relevant information remained unidentified. In the same group, 12.5 events were selected as relevant while they actually were not.</p><p>The trained group, on average, identified 12 relevant events correctly, which means that still 48% of the relevant information was not reported. Compared to the untrained group, this difference proved statistically significant. In the same group, 12.4 irrelevant events were reported as relevant.</p><h2 id="what-does-that-tell-us">What does that tell us?</h2><p>First, it is comforting to see that the group that received some training identified more of the relevant information than the untrained group. This seems to suggest that training does indeed pay off.</p><p>But more interestingly, a relatively high percentage of relevant events remained unidentified. Even in the trained group, 48% was still missed. The pool of selected events was further diluted by a large number of erroneously selected irrelevant events. These results were obtained under laboratory conditions, with cleanly written event descriptions that were identical for all participants. In an actual analysis, the ambiguity of information is likely to be much higher, and as a result the percentage of identified relevant data is probably even lower.</p><p>This raises the question: if, on average, we find less than half of the relevant information, is all incident investigation futile? If our sole goal is to describe everything that happened at that location during the incident, then yes, perhaps we are doomed to be incomplete.
But if our goal is to find ways to improve our organization, a lot can still be pieced together from that 50% through careful analysis.</p><p>Next time you&#x2019;re looking at an incident report, keep in mind that you might only be looking at part of the story. But at the same time, if that analysis leads to demonstrable improvements, it may just have been worth it.</p>]]></content:encoded></item><item><title><![CDATA[Writing too many procedures]]></title><description><![CDATA[<p>James Reason&apos;s classic book Managing the Risks of Organizational Accidents has a lot of great risk management insights. Here are three paragraphs on adding too many procedures over time (p. 49):</p><blockquote>All organizations suffer a tension between the natural variability of human behaviour and the system&apos;s</blockquote>]]></description><link>https://blog.resilium.group/writing-another-procedure/</link><guid isPermaLink="false">6359ab8889af78298e202fd0</guid><dc:creator><![CDATA[Alex de Ruijter]]></dc:creator><pubDate>Thu, 15 Jul 2021 14:51:13 GMT</pubDate><media:content url="https://blog.resilium.group/content/images/2022/10/writing-another-procedure.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://blog.resilium.group/content/images/2022/10/writing-another-procedure.jpg" alt="Writing too many procedures"><p>James Reason&apos;s classic book Managing the Risks of Organizational Accidents has a lot of great risk management insights. Here are three paragraphs on adding too many procedures over time (p. 49):</p><blockquote>All organizations suffer a tension between the natural variability of human behaviour and the system&apos;s needs for a high degree of regularity in the activities of its members. The managers of hazardous systems must try to restrict human actions to pathways that are not only efficient and productive, but also safe. The most widely used means to achieve both goals are written procedures. 
But there are a number of important differences between the procedures for production and those for protection.</blockquote><blockquote>Although by no means immutable, the procedures designed to ensure efficient working tend to arise fairly naturally from the nature of the productive equipment and the task to which it is put. Safe operating procedures, on the other hand, are continually being amended to prohibit actions that have been implicated in some recent accident or incident. Over time, these additions to the &apos;rule book&apos; become increasingly restrictive, often reducing the range of permitted actions to far less than those necessary to get the job done under anything but optimal conditions.</blockquote><blockquote>Figure 3.1 illustrates this shrinkage of allowable action as it occurs over the history of a given system. This could be a chemical process plant, a railway, an aircraft operating company - or any hazardous technology at risk to organizational accidents. The space between the shaded areas represents the scope of prescribed action. As time passes, the organization inevitably suffers accidents and incidents in which human actions are identified as contributing factors. After each event, the procedures are modified so as to proscribe these implicated actions. As a consequence, the scope of allowable actions gradually shrinks to a range that is less than that required to perform all the necessary tasks. The only way to do these jobs is to violate the procedures.</blockquote><p>These are just three paragraphs of condensed juicy safety wisdom. I encourage you to pick up a copy and read the rest. 
It&apos;s just as good.</p>]]></content:encoded></item><item><title><![CDATA[It's not enough to have safety barriers]]></title><description><![CDATA[<p>The best definition of a safety barrier can be found in an article by Sklet from 2006:</p><blockquote><em><a href="https://blog.resilium.group/safety-barrier-definition/">Safety barriers are physical and/or non-physical means planned to prevent, control, or mitigate undesired events or accidents.</a></em></blockquote><p>This definition has an interesting word. <em>Planned</em>. It implies that besides stopping unwanted events, a</p>]]></description><link>https://blog.resilium.group/its-not-enough-to-have-safety-barriers/</link><guid isPermaLink="false">6359ab8889af78298e202fce</guid><dc:creator><![CDATA[Alex de Ruijter]]></dc:creator><pubDate>Thu, 08 Jul 2021 12:18:30 GMT</pubDate><media:content url="https://blog.resilium.group/content/images/2022/10/margarida-csilva-cQCqoTjr0B4-unsplash.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://blog.resilium.group/content/images/2022/10/margarida-csilva-cQCqoTjr0B4-unsplash.jpg" alt="It&apos;s not enough to have safety barriers"><p>The best definition of a safety barrier can be found in an article by Sklet from 2006:</p><blockquote><em><a href="https://blog.resilium.group/safety-barrier-definition/">Safety barriers are physical and/or non-physical means planned to prevent, control, or mitigate undesired events or accidents.</a></em></blockquote><p>This definition has an interesting word. <em>Planned</em>. It implies that besides stopping unwanted events, a barrier is also organised formally. But by only looking at formal barriers, two aspects of safety are left out.</p><p>First, because we&apos;re excluding informal barriers. Someone that happens to walk on the beach and saves a drowning child is not a barrier because it hasn&apos;t been formally organised, while a lifeguard that does the exact same thing <em>is</em> considered a barrier. 
It&apos;s the combination of formal and informal barriers that keeps an organisation safe. They complement each other. However, if we measure the safety in an organisation, the focus is often put on formal barriers because they&apos;re easier to measure. This creates a blind spot for the informal side of safety. To prevent this, concepts like HRO (High Reliability Organising) and resilience engineering aim to strengthen our ability to deal with scenarios even if we haven&apos;t planned for them.</p><p>Second, there are many formal systems whose goal isn&apos;t to improve safety, but which do so as a byproduct. Things like simplifying a process to save costs, or a teambuilding exercise. If we didn&apos;t plan these things to increase safety, they&apos;re not considered barriers and aren&apos;t treated as such, even though they might have a positive effect.</p><p>In the end, there are many factors, both formal and informal, that increase safety in an organisation. The question is: why do we even want to consider some of them as barriers? The reason is that we have to focus our efforts. If we think everything is important, it&apos;s hard to know where to start improving. Barriers are one way to focus on those parts of the formal organisation that are important to reduce our risks.</p><p>But once you&apos;re confident that all those formal barriers are working, don&apos;t think you&apos;re done. It&apos;s time to widen your view and look at the rest of the formal and informal factors that keep your organisation safe.</p><!--kg-card-begin: html--><small>Photo by <a href="https://unsplash.com/@marg_cs?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Margarida CSilva</a> on <a href="https://unsplash.com/s/photos/lifeguard?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Unsplash</a>
  </small><!--kg-card-end: html-->]]></content:encoded></item><item><title><![CDATA[The safety barrier definition]]></title><description><![CDATA[<p>* <a href="https://doi.org/10.1016/j.jlp.2005.12.004?ref=blog.resilium.group">Sklet, S. (2006). Safety barriers: Definition, classification, and performance. Journal of Loss Prevention in the Process Industries, 19(5), 494&#x2013;506.</a></p>]]></description><link>https://blog.resilium.group/safety-barrier-definition/</link><guid isPermaLink="false">6359ab8889af78298e202fcd</guid><dc:creator><![CDATA[Alex de Ruijter]]></dc:creator><pubDate>Thu, 17 Jun 2021 12:36:09 GMT</pubDate><media:content url="https://blog.resilium.group/content/images/2022/10/Safety-barrier-definition-infographic.png" medium="image"/><content:encoded><![CDATA[<img src="https://blog.resilium.group/content/images/2022/10/Safety-barrier-definition-infographic.png" alt="The safety barrier definition"><p>* <a href="https://doi.org/10.1016/j.jlp.2005.12.004?ref=blog.resilium.group">Sklet, S. (2006). Safety barriers: Definition, classification, and performance. Journal of Loss Prevention in the Process Industries, 19(5), 494&#x2013;506.</a></p>]]></content:encoded></item><item><title><![CDATA[Use criticality to find important barriers]]></title><description><![CDATA[Which risk barriers are the most important? This question can be answered with barrier criticality. 
In this article we discuss why & how to approach barrier criticality as well as some challenges.]]></description><link>https://blog.resilium.group/barrier-criticality/</link><guid isPermaLink="false">6359ab8889af78298e202fa1</guid><category><![CDATA[barriers]]></category><category><![CDATA[risk assessment]]></category><category><![CDATA[bowtie]]></category><dc:creator><![CDATA[Alex de Ruijter]]></dc:creator><pubDate>Tue, 17 Sep 2019 19:57:06 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1496745109441-36ea45fed379?ixlib=rb-1.2.1&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=1080&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1496745109441-36ea45fed379?ixlib=rb-1.2.1&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=1080&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ" alt="Use criticality to find important barriers"><p>Which risk barriers are the most important? This question can be answered by assessing barrier criticality<sup>[1]</sup>. It allows organisations to prioritise their effort on the barriers they really can&apos;t afford to fail. In this article, we&apos;ll discuss why &amp; how to approach barrier criticality as well as some challenges.</p><h1 id="why-use-barrier-criticality">Why use barrier criticality</h1><p>So what problem does barrier criticality solve? First, all organisations have limited resources to invest in risk reduction. Those resources need to be distributed among all the possible measures to reduce risk. Barriers that are critical will get more resources. For example, critical barriers could be checked more often than others.</p><p>Another reason for criticality is focus. People can only deal with a limited amount of information, whereas a risk analysis gives you a complete overview of all the relevant risks and barriers. Giving people a smaller set of critical barriers makes it more likely that those barriers will actually get managed.
The counterargument is that it becomes especially easy to ignore everything that isn&apos;t critical. This is a good reason to make sure you think about how you will deal with all levels of criticality, including the lower ones (see the challenges below).</p><h1 id="how-to-assess-barrier-criticality">How to assess barrier criticality</h1><p>Criticality is assessed using a combination of 3 components.</p><ol><li>Scenario size</li><li>Barrier effectiveness</li><li>Barrier redundancy</li></ol><h2 id="scenario-size">Scenario size</h2><p>Barriers are built to prevent one or more scenarios. These scenarios (also called threats or consequences) are not all the same. They have different frequencies and power to cause negative consequences. The more a scenario contributes to the overall risk, the more critical barriers against that scenario become.</p><h2 id="barrier-effectiveness">Barrier effectiveness</h2><p>Not all barriers are equally effective in stopping a scenario. Some are more effective than others. A barrier that is more effective is usually also more critical, because losing it would mean a greater reduction in protection against the scenario.</p><h2 id="barrier-redundancy">Barrier redundancy</h2><p>If a scenario is protected by multiple independent barriers, losing one of them is less of a problem, which makes it less critical.
Of course it&apos;s still important to maintain such a group of barriers, but that is less critical than maintaining a single barrier that alone controls a scenario, or barriers that share dependencies and are therefore less redundant.</p><figure class="kg-card kg-image-card kg-width-wide kg-card-hascaption"><img src="https://blog.resilium.group/content/images/2019/09/criticality.png" class="kg-image" alt="Use criticality to find important barriers" loading="lazy"><figcaption>An abstract example of criticality in a bowtie diagram.</figcaption></figure><h2 id="combining-them">Combining them</h2><p>You combine these three components to select barriers that are highly critical. In the picture above, the barrier in the first scenario is marked critical because a) it&apos;s on a high-threat scenario and b) it&apos;s very effective (shown by the green colour). The other barrier on that line is a little less effective (orange), so we decide not to make that one critical because we already have such a good barrier on that line.</p><p>The second line also has a critical barrier, even though the threat scenario is medium and the effectiveness is poor. However, it&apos;s the only barrier on that scenario line, so we still regard it as critical.</p><p>This shows that these three criticality components can combine in different ways to mark a barrier as critical. But in all cases, critical barriers are either on a high-risk scenario, have few back-up barriers or are highly effective. In an assessment, you can combine this with the expertise in the room, because there might be other exceptional circumstances that make a barrier critical or not.</p><h1 id="challenges">Challenges</h1><p>Let&apos;s also look at two challenges when using criticality.</p><h2 id="too-many-critical-barriers">Too many critical barriers</h2><p>The core use of criticality is to take a longer list of barriers and make a short list to focus on.
But what do you do if you still end up with a long list of critical barriers? The simplest solution is to raise the threshold for what you consider critical. For instance, you could decide never to regard a barrier as critical on a medium-sized threat, or to have at most 1 critical barrier per line. If you describe this in a method policy, it can make discussions with barrier owners easier, as they sometimes assess their own barriers as the most critical.</p><h2 id="only-looking-at-critical-barriers">Only looking at critical barriers</h2><p>One of the big problems with criticality is the barriers that aren&apos;t highly critical. It can be easy to forget about them. Make sure that your method policy considers how to treat each level of barrier criticality; low-criticality barriers should still be managed after all. An example could be to report on the performance of critical barriers monthly, whereas the full set is reported on quarterly or yearly.</p><h1 id="conclusion">Conclusion</h1><p>Criticality can be a powerful tool to create a short list of important barriers to devote more resources and attention to. It is a combination of scenario size, effectiveness, redundancy and expert judgement. Make sure to have as few critical barriers as practicable, while at the same time having a plan not to forget the barriers with lower criticality.</p><hr><p>[1] Just to be sure, when we talk about criticality here, we mean it in a positive way (as in, this is critically important). We don&apos;t mean the criticality of a negative event (as in, this is critically bad) like in <a href="https://en.wikipedia.org/wiki/Failure_mode,_effects,_and_criticality_analysis?ref=blog.resilium.group">FMECA</a>.</p>]]></content:encoded></item><item><title><![CDATA[Practical applications of bowtie]]></title><description><![CDATA[It can be a challenge to know what to use bowties for. In this webinar we show different practical applications of bowties.
From risk communication to barrier monitoring and decision support.]]></description><link>https://blog.resilium.group/practical-applications-of-bowtie/</link><guid isPermaLink="false">6359ab8889af78298e202fc8</guid><category><![CDATA[barriers]]></category><category><![CDATA[bowtie]]></category><dc:creator><![CDATA[Alex de Ruijter]]></dc:creator><pubDate>Tue, 06 Nov 2018 11:37:49 GMT</pubDate><media:content url="https://blog.resilium.group/content/images/2022/10/Bowtie-picture.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://blog.resilium.group/content/images/2022/10/Bowtie-picture.jpg" alt="Practical applications of bowtie"><p>It can be a challenge to know what to use bowties for. In this webinar we show different practical applications of bowties. From risk communication to barrier monitoring and decision support.</p><figure class="kg-card kg-embed-card"><iframe width="480" height="270" src="https://www.youtube.com/embed/N-hx8CTrVP0?feature=oembed" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe></figure>]]></content:encoded></item><item><title><![CDATA[5 reasons why analysing barrier failures is better than 5-why]]></title><description><![CDATA[I hope you'll forgive me, but I don't really like 5-why. It's a decent methodology, but I think there are better options. Instead, we can use barrier-based incident analysis methods. Why is this better? 
Well, here are 5 reasons why I think analysing barrier failures is better than 5-why.]]></description><link>https://blog.resilium.group/5-reasons-why-analysing-barriers-is-better-than-5-why/</link><guid isPermaLink="false">6359ab8889af78298e202fc6</guid><category><![CDATA[incidents]]></category><category><![CDATA[opinion]]></category><category><![CDATA[rca]]></category><category><![CDATA[barriers]]></category><dc:creator><![CDATA[Alex de Ruijter]]></dc:creator><pubDate>Tue, 24 Apr 2018 10:00:00 GMT</pubDate><media:content url="https://blog.resilium.group/content/images/2022/10/5reasonswhy.jpg" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://blog.resilium.group/content/images/2022/10/5reasonswhy.jpg" alt="5 reasons why analysing barrier failures is better than 5-why"><p>I hope you&apos;ll forgive me, but I don&apos;t really like <a href="https://en.wikipedia.org/wiki/5_Whys?ref=blog.resilium.group">5-why</a>. It&apos;s a decent methodology (especially if you don&apos;t do incident analysis often), but in my opinion there are better options. Instead, we can use barrier-based incident analysis methods like Tripod Beta, Barrier Failure Analysis or BSCAT. I&apos;ll use Barrier Failure Analysis (BFA) to explain. If you don&apos;t know what BFA is, take a look at the picture in <a href="https://www.linkedin.com/feed/update/urn:li:activity:6389437004622499840?ref=blog.resilium.group">this post</a>. Why is this better? Well, here are 5 reasons why I think analysing barrier failures is better than 5-why.</p>
<h2 id="1barriersfocusyourattention">1. Barriers focus your attention</h2>
<p>In a 5-why diagram, everything is an event or a root cause. If the incident is large, this can lead to an overwhelming amount of information with no difference between important and supporting events. In barrier failure analysis, events are scaffolding that allow us to identify barriers. Barriers are what the organisation should have had in place and it is where our focus should be. During the investigation this focus is useful to spend the resources we have in the right place. After the investigation is complete, this focus is also useful to communicate more effectively to people that weren&apos;t part of the investigation team.</p>
<h2 id="2thereisaclearvisualsequenceofevents">2. There is a clear visual sequence of events</h2>
<p>A 5-why diagram flows in one direction. Both the sequence of events and the contributing factors are all displayed from, for instance, left to right. Barrier failure diagrams flow in two directions. The events and barriers are modeled horizontally, and the barrier failure causations are vertical. This makes it easier to distinguish between the direct sequence of events in the incident and the underlying contributing factors.</p>
<p>This is not unique to barrier failure diagrams. For instance, causal factors charting <sup class="footnote-ref"><a href="#fn1" id="fnref1">[1]</a></sup> also flows in two directions.</p>
<h2 id="3itsmoredifficulttostoptheanalysistooearly">3. It&apos;s more difficult to stop the analysis too early</h2>
<p>5-why gives little guidance beyond asking why and ending up with a root cause. This can lead to root causes that aren&apos;t actually useful. The analysis can stop too early (e.g. &apos;warning signs were absent&apos;, without diving into why they were absent) or go too far (everything becomes &apos;poor safety culture&apos;). In barrier failure analysis, this problem is reduced by giving each level in the causation path a specific purpose. You describe the immediate cause first, then the context in which that immediate cause can happen, and finally the underlying system-level cause. In 5-why, the 5 levels don&apos;t have a meaning (except for the final one), which makes it more difficult for people to guide their analysis in the right direction.</p>
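<p>To make that contrast concrete, the fixed meaning of each causation level can be captured in a data structure. Below is a minimal Python sketch; the class and field names are my own invention (not from any BFA tool), and the example values echo the warning-signs example above:</p>

```python
from dataclasses import dataclass

@dataclass
class BarrierFailure:
    """One failed barrier with the three causation levels described above.
    Each level has a fixed meaning, unlike the anonymous levels of a 5-why chain."""
    barrier: str
    immediate_cause: str   # what directly defeated the barrier
    context: str           # the situation that made the immediate cause possible
    underlying_cause: str  # the system-level cause behind that context

failure = BarrierFailure(
    barrier="Warning signs at the work site",
    immediate_cause="Warning signs were absent",
    context="Signage was not checked during job preparation",
    underlying_cause="Work-preparation procedure does not cover signage",
)
```

<p>Because every record has to name a context and an underlying cause, stopping at &apos;warning signs were absent&apos; is no longer an option.</p>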
<h2 id="4recommendationscanfocusonimmediateandlongterminterventions">4. Recommendations can focus on immediate and long term interventions</h2>
<p>There are two logical places in a barrier failure analysis where recommendations can be made. On the barrier for more immediate corrective actions, and on the underlying cause for a more long term systematic intervention. Both are useful because we can first get up and running safely in the short term, and then prevent recurrence in the long term. The recommendations in 5-why usually focus more exclusively on root causes, and thus on the long term only.</p>
<h2 id="5itseasiertoconnecttoexistingriskassessments">5. It&apos;s easier to connect to existing risk assessments</h2>
<p>Any risk assessment method you use already has the concept of barriers (control measures, IPLs, safeguards or something similar). It&apos;s weird to have different ways of thinking for incident analysis and risk assessment when they are really two sides of the same coin. With a barrier-based incident analysis method you can more easily connect back to the barriers in an existing risk assessment. This can be useful for trending across incidents and for adding new incident scenarios to the risk assessments.</p>
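<p>As an illustration of that trending: if every investigation records failed barriers under the same IDs used in the risk assessment, a cross-incident tally becomes trivial. A hypothetical sketch; the barrier IDs and incident records below are made up:</p>

```python
from collections import Counter

# Hypothetical incident records: each lists the risk-assessment IDs
# of the barriers that failed in that incident.
incidents = [
    {"id": "INC-001", "failed_barriers": ["gas-detection", "permit-to-work"]},
    {"id": "INC-002", "failed_barriers": ["permit-to-work"]},
    {"id": "INC-003", "failed_barriers": ["gas-detection", "ventilation"]},
]

# Tally failures per barrier across all incidents.
failure_counts = Counter(
    barrier for inc in incidents for barrier in inc["failed_barriers"]
)

# Barriers that fail most often surface first.
for barrier, count in failure_counts.most_common():
    print(barrier, count)
```

<p>Barriers that keep showing up in the tally are good candidates for a closer look in the risk assessment.</p>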
<p><mark>If you want to know more about barrier failure analysis, we have a great course on incident analysis with barriers on 5-6 June in Leidschendam, The Netherlands. Go to <a href="https://slicerisk.com/?ref=blog.resilium.group#Events">slicerisk.com</a> or view <a href="https://drive.google.com/file/d/1Gvky1ynFpGe4qBumdEOd4X5aA_7EQQUx/view?ref=blog.resilium.group">the brochure</a></mark></p>
<hr class="footnotes-sep">
<section class="footnotes">
<ol class="footnotes-list">
<li id="fn1" class="footnote-item"><p><a href="http://158.132.155.107/posh97/private/AccidentPhenonmenon/investigation-workbook.pdf?ref=blog.resilium.group">DOE, Conducting Accident Investigations, DOE Workbook</a> <a href="#fnref1" class="footnote-backref">&#x21A9;&#xFE0E;</a></p>
</li>
</ol>
</section>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Why threats in bowties don't always lead to all consequences]]></title><description><![CDATA[In theory all threats in a bowtie diagram can cause the top event, and the top event can cause all consequences. But sometimes a threat can't lead to all consequences. Why is that and is it a problem?]]></description><link>https://blog.resilium.group/why-threats-in-bowties-dont-always-lead-to-all-consequences/</link><guid isPermaLink="false">6359ab8889af78298e202fc5</guid><category><![CDATA[bowtie]]></category><category><![CDATA[risk assessment]]></category><dc:creator><![CDATA[Alex de Ruijter]]></dc:creator><pubDate>Tue, 13 Mar 2018 09:00:00 GMT</pubDate><media:content url="https://blog.resilium.group/content/images/2022/10/IMG_20180309_154725-01.jpeg" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://blog.resilium.group/content/images/2022/10/IMG_20180309_154725-01.jpeg" alt="Why threats in bowties don&apos;t always lead to all consequences"><p>One of the assumptions in a bowtie diagram is that all threats cause the top event, and the top event can cause all consequences. This is the theory. But sometimes a threat can&apos;t lead to all consequences. Why is that and is it a problem? We&apos;ll see why it&apos;s not necessarily bad, but it can be if you don&apos;t realise that you&apos;re doing it.</p>
<p>Let&apos;s start with a confined space entry example.</p>
<p><img src="https://blog.resilium.group/content/images/2018/03/specific-threats---consequences-1a-1.png" alt="Why threats in bowties don&apos;t always lead to all consequences" loading="lazy"></p>
<p>As you can see, a lack of oxygen leads to an unsafe work environment, which leads to asphyxiation. Excessive heat also leads to an unsafe work environment, which leads to heat stroke. But a lack of oxygen cannot lead to heat stroke and excessive heat cannot lead to asphyxiation. How did we get to this situation?</p>
<p>This picture summarises what happens:</p>
<p><img src="https://blog.resilium.group/content/images/2018/03/specific-threats---consequences-1-1.png" alt="Why threats in bowties don&apos;t always lead to all consequences" loading="lazy"></p>
<p>The trick is in the top event, because there is a difference in abstraction level between the threats/consequences and the top event. We are adding specific threats and consequences to a high level top event. It is very common in bowties to see this pattern.</p>
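<p>The implicit assumption can be sketched in a few lines of Python. This is just an illustration of the cross-product logic; the class and names are made up for this example, not part of any bowtie standard:</p>

```python
from dataclasses import dataclass

@dataclass
class Bowtie:
    top_event: str
    threats: list
    consequences: list

# The confined space example: a high level top event with specific
# threats and consequences on either side.
bowtie = Bowtie(
    top_event="Unsafe work environment in confined space",
    threats=["Lack of oxygen", "Excessive heat"],
    consequences=["Asphyxiation", "Heat stroke"],
)

# A bowtie implicitly claims every threat can reach every consequence
# via the top event: the cross-product of both sides.
implied_paths = [(t, c) for t in bowtie.threats for c in bowtie.consequences]

# Only two of the four implied paths are actually causal here.
causal_paths = {("Lack of oxygen", "Asphyxiation"),
                ("Excessive heat", "Heat stroke")}
spurious = [p for p in implied_paths if p not in causal_paths]
```

<p>Splitting into specific sub-bowties removes the spurious pairs; abstracting the threats and consequences collapses the cross-product into a single, correct pair.</p>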
<p>Let&apos;s first discuss the alternatives to fix it, and then get back to whether it&apos;s a problem for our confined space bowtie. There are two ways to fix it. Either we make our top event specific, or we make our threats and consequences more abstract. If we make our top event more specific, you&apos;ll see that your bowtie breaks apart into smaller sub bowties, like this:</p>
<p><img src="https://blog.resilium.group/content/images/2018/03/specific-threats---consequences-2.png" alt="Why threats in bowties don&apos;t always lead to all consequences" loading="lazy"><br>
<img src="https://blog.resilium.group/content/images/2018/03/specific-threats---consequences-3.png" alt="Why threats in bowties don&apos;t always lead to all consequences" loading="lazy"></p>
<p>Now all threats lead to all consequences in both bowties. The pros of this approach are that we have specific information in our bowtie, and the causality is correct again. The downside is that we now have one extra diagram, and in reality it might break apart into even more bowties.</p>
<p>We can also make the threats and consequences more high level. That would result in something like this:</p>
<p><img src="https://blog.resilium.group/content/images/2018/03/specific-threats---consequences-4.png" alt="Why threats in bowties don&apos;t always lead to all consequences" loading="lazy"></p>
<p>You will probably group multiple specific threats and consequences into higher level subjects. The pros of this approach are that we get to keep our single bowtie diagram and maintain a correct causal diagram. The downside is that we lose some of our specific information.</p>
<p>Now we can go back to our confined space entry bowtie and decide whether we want to split it into two bowties, with top events &apos;Oxygen level below X&apos; and &apos;Heat in confined space above X degrees Celsius&apos;, or whether we want to create a high level bowtie and group our various threats into &apos;Environmental factors&apos; and our consequences into &apos;Adverse effect on person&apos;.</p>
<p>You can probably see that neither is ideal. But what about a third option? Let&apos;s say we keep our original bowtie. This means we get to keep our single bowtie, as well as the specific information in the threats and consequences. The downside is that the causality isn&apos;t technically correct anymore. But after reading this post, the big difference is that you&apos;re now aware of it, and you can make a conscious decision whether the benefits outweigh the disadvantages.</p>
<p>One last note. You can choose whichever of the three strategies works best for you when you&apos;re using the bowtie as a qualitative communication tool. If you&apos;re using it in a quantitative way, you&apos;ll most likely want to make multiple specific bowties, because you need a correct causal relationship for most calculations to make sense, and you need specific information because it&apos;s difficult to quantify abstract concepts.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[The barrier maturity model]]></title><description><![CDATA[Many organisations have adopted the idea of barrier management in safety. Many are also searching for ways to take a next step. With the barrier maturity model you can determine where your organisation currently is and see which steps can be taken to mature to a higher level of barrier management.]]></description><link>https://blog.resilium.group/the-barrier-maturity-model/</link><guid isPermaLink="false">6359ab8889af78298e202fc4</guid><category><![CDATA[barriers]]></category><category><![CDATA[bowtie]]></category><category><![CDATA[risk assessment]]></category><dc:creator><![CDATA[Alex de Ruijter]]></dc:creator><pubDate>Thu, 22 Feb 2018 15:33:48 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1476589143317-647df3b59c32?ixlib=rb-0.3.5&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=1080&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ&amp;s=c4d6704e49d0d511e89418b8c3ec3111" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://images.unsplash.com/photo-1476589143317-647df3b59c32?ixlib=rb-0.3.5&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=1080&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ&amp;s=c4d6704e49d0d511e89418b8c3ec3111" alt="The barrier maturity model"><p>Many organisations have adopted, or want to adopt, the idea of barrier management in safety <sup class="footnote-ref"><a href="#fn1" id="fnref1">[1]</a></sup>. Many are also searching for ways to take a next step. The barrier maturity model is a simple way to first determine where your organisation currently stands and then see which steps can be taken to mature to a higher level of barrier management.</p>
<p>The barrier maturity model has four main levels:</p>
<ol>
<li>Pre barrier level</li>
<li>Static barrier level</li>
<li>Dynamic barrier level</li>
<li>Operational barrier level</li>
</ol>
<p>Each level represents a different type of maturity in an organisation. The levels do not necessarily follow each other in a linear way, but in an effective barrier-based risk management system all levels should be present. The dynamic and operational levels especially tend to work together. It is useful to distinguish these levels because it allows us to discuss how to transition to the next one and further integrate barrier management into the way an organisation thinks about and deals with safety.</p>
<h2 id="prebarrierlevel">Pre barrier level</h2>
<p>The current approach for many organisations is to focus on risk matrices instead of barrier management. This is the case in more traditional approaches that use an Excel sheet with a single column describing barriers (or control measures). The main point of the sheet is to show that the inherent risk level has been brought down to an acceptable residual risk. This is of course useful, but the barriers do not have a central role and are not fleshed out in much detail.</p>
<h3 id="howtoprogress">How to progress</h3>
<p>To get to the next level, a model has to be chosen which puts more emphasis on analysing the barriers in a structured way. We typically use a bowtie diagram for this purpose, but you can choose a different model if you prefer, as long as it allows you to analyse which barriers you need and how you plan to maintain them.</p>
<h2 id="staticbarrierlevel">Static barrier level</h2>
<p>Most organisations that work with barrier management are at this level. Risk scenarios and associated barriers are analysed in a structured way. This helps to identify high risk areas where improvements should be made, and it results in an improvement plan, which is the primary outcome of the analysis. However, the reports themselves are often shelved and nobody looks at them again until the next risk review. The problem is that this is not a continuous process but merely a snapshot at the time of the analysis. A report is made and hopefully communicated to the right stakeholders, but the analysis is not used for anything more.</p>
<h3 id="howtoprogress">How to progress</h3>
<p>The main problem is not the analysis itself; it&apos;s that no data is connected to show the current status of the barriers. Why would someone look at a model if the information it provides is always the same? To move to a more dynamic view, make an overview of the available data related to the barriers. These data sources can then be connected to the model.</p>
<h2 id="dynamicbarrierlevel">Dynamic barrier level</h2>
<p>Some organisations are actively monitoring the performance of their barriers over time. This is done through various data sources. Incidents and audits are commonly used, but the maintenance system, competence management, permit to work or sensors can all provide valuable data that can be mapped onto barriers.</p>
<h3 id="howtoprogress">How to progress</h3>
<p>The challenge in getting to the highest level of barrier maturity is to package the dynamic information in the right way to aid decision-making. This can be in operations as well as at the corporate level.</p>
<p>Get an overview of who needs to make which decisions, and what type of information they need to help make that decision. If you are working in a high risk environment, what type of information would help you make a risk critical decision?</p>
<h2 id="operationalbarrierlevel">Operational barrier level</h2>
<p>Organisations should provide the right barrier information at the right time to the right person to make the right decision. This is the final stage of the barrier maturity model. To achieve this, talk to people in high risk departments. Understand what their daily routine looks like and how it can be enhanced by providing timely barrier (performance) information. The main challenge is to fit this information in so that it adds maximum value with a minimum of extra workload. The result might look very different from the original barrier analysis, but that&apos;s OK. It just means you put in the effort to transform the information into something useful.</p>
<p>Few organisations have reached this level of maturity. We think the main reason is not a technical issue. It&apos;s often difficult for people to understand what is needed by colleagues who have a different role in the organisation. Instead, assumptions are made about what would be useful that don&apos;t match reality. Understanding what is needed requires a high level of trust, cooperation and communication that is often lacking.</p>
<h2 id="conclusion">Conclusion</h2>
<p>The barrier maturity model is a roadmap that can be followed by organisations that want to take the next step in barrier management. It&apos;s of course possible to take a different route, for instance to communicate static barrier information to the operation directly. This can even make it easier to mature barrier management, because you can have results sooner and keep the momentum going. However, to fully implement barrier management it&apos;s necessary to eventually go through all four levels. There is more to say on the details of each level and the path that can be taken to go from one level to the next, but we will write more on this in future articles. Stay tuned!</p>
<p><small>Photo by Gabriel Jimenez on Unsplash</small></p>
<hr class="footnotes-sep">
<section class="footnotes">
<ol class="footnotes-list">
<li id="fn1" class="footnote-item"><p>see the <a href="http://www.ptil.no/barriers/category1269.html?ref=blog.resilium.group">Petroleum Safety Authority</a> for more background <a href="#fnref1" class="footnote-backref">&#x21A9;&#xFE0E;</a></p>
</li>
</ol>
</section>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[An alternative to the hierarchy of control]]></title><description><![CDATA[The hierarchy of control is good to brainstorm controls. However it is not a formal classification because it mixes control functions and systems.]]></description><link>https://blog.resilium.group/an-alternative-to-the-hierarchy-of-control/</link><guid isPermaLink="false">6359ab8889af78298e202fb7</guid><category><![CDATA[barriers]]></category><category><![CDATA[risk assessment]]></category><dc:creator><![CDATA[Alex de Ruijter]]></dc:creator><pubDate>Wed, 02 Nov 2016 22:00:00 GMT</pubDate><media:content url="https://blog.resilium.group/content/images/2022/10/IMG_20160810_181011-01.jpeg" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://blog.resilium.group/content/images/2022/10/IMG_20160810_181011-01.jpeg" alt="An alternative to the hierarchy of control"><p>The <a href="http://www.cdc.gov/niosh/topics/hierarchy/default.html?ref=blog.resilium.group">hierarchy of control</a> is often used as a brainstorming tool to come up with effective controls (aka, barriers). It&apos;s good because it favours proactive interventions like eliminating a source of fuel over reactive interventions like putting out a fire. However sometimes it is misused as a formal classification tool. This is bad because it mixes two different subjects. Control functions and control systems.</p>
<figure>
<img src="https://blog.resilium.group/content/images/2017/09/hierarchyofcontrol.jpg" alt="An alternative to the hierarchy of control">
<figcaption>
Taken from the <a href="http://www.cdc.gov/niosh/topics/hierarchy/default.html?ref=blog.resilium.group">NIOSH</a></figcaption>
</figure>
<p>For example, substituting a dangerous chemical can be done by mixing it with a second chemical. This might require a worker to mix in the second chemical. Is this a substitution control or an administrative control? If it&apos;s used as a brainstorming tool this doesn&apos;t really matter. But for control classification this overlap is ugly and can lead to lengthy discussions.</p>
<p>Another example: Is an airbag in a car an engineering control? After all, it is a fully automated system that functions independently of the worker. It&apos;s installed or purchased early on. In a way it isolates the driver from the energy of a crash. On the other hand, I can also argue it&apos;s PPE because its sole purpose is to protect you personally.</p>
<p>Ok, last one: If you regard working at height as a hazard, monitoring weather conditions might cause cancellation of the work before it begins. Is this an elimination control or an administrative control? Part of the problem in this example is how we define a hazard. Traditionally a hazard is often a substance like oil or a chemical. But if an activity like working at height can also be a hazard, what does it mean to eliminate it? Should elimination be permanent, or can it be temporary by postponing work?</p>
<p>So hopefully you see there might be some discussion when this is used as a classification tool. Now let&apos;s move on to the alternative. Enter the difference between functions and systems. <strong>Functions are what we want to achieve (eliminate, substitute, prevent, mitigate). Systems are the means by which we achieve that function (engineering, administration)</strong>. If you take nothing else from this post, at least consider the difference between these two things. PPE is not in these lists because I consider it a specific type of equipment on a different level than the other categories. We will not discuss PPE specifically further.</p>
<p>As an example, here is an alternative classification system that uses two aspects for each control. A function and a system category.</p>
<h2 id="controlfunction">Control Function</h2>
<p>The control functions are based on the 10 strategies by <a href="https://www.technologyreview.com/s/409370/on-the-escape-of-tigers/?ref=blog.resilium.group">William Haddon</a>. A lot of the existing classifications seem to be based on his work, and so is this one.</p>
<p>The first thing we consider is if we need to accept the existence of a hazard. This is the elimination step in the hierarchy of control. The three functions we want to achieve in relation to the hazard are to:</p>
<p><strong>Eliminate the hazard</strong><br>
<strong>Substitute the hazard</strong><br>
<strong>Reduce the amount of hazard</strong></p>
<p>If we cannot eliminate, substitute or reduce, we have to focus on keeping the hazard under control. The first thing we try is to:</p>
<p><strong>Control deviations</strong>. Deviation events are possible because we let the hazard exist. Some more specific functions that might be relevant for some deviations are to 1. Eliminate the event, 2. Reduce the size of the event, 3. Modify the event rate or 4. Change the event properties.</p>
<p>If we cannot stop the deviation event, we have to keep it away from objects we care about. Objects is an abstract term in this case which usually refers to people, assets, environment or reputation. So now we try to:</p>
<p><strong>Separate</strong> the deviation event in time, space and access from the objects. If we cannot control the deviation or separate it from the objects, we have to:</p>
<p><strong>Protect objects</strong>. The difference here is that the deviation can run its course. We do not necessarily know what the specific deviation will be, but we create controls that will best protect an object from a range of possible deviations. If all else fails, we must also consider how to:</p>
<p><strong>Reduce damage to objects</strong>. Here we try to reduce damage while the deviation is still ongoing. Then finally, after the deviation has passed, we can:</p>
<p><strong>Stabilize, repair, and rehabilitate objects</strong></p>
<h2 id="controlsystem">Control System</h2>
<p>Control systems are the ways in which we achieve our functions. At a high level there are two types: physical systems or behavioural systems (i.e. engineering or administration). The following splits those two up into more subtle categories and is an adaptation of the barrier classification in <a href="http://www.ncbi.nlm.nih.gov/pubmed/16111813?ref=blog.resilium.group">ARAMIS</a>.</p>
<p><strong>Behavioural</strong>. A system reliant on only people and procedures. E.g. a double check, defensive driving.<br>
<strong>Active hardware</strong>. A physical system built from equipment which does not require human interaction for its primary function, although it does need to be maintained by someone. On some input, the system takes an action.<br>
<strong>Socio-technical</strong>. A system that needs both a person and a piece of equipment to perform its primary function. E.g. emergency shutdown system, confined space continuous monitoring.<br>
<strong>Continuous hardware</strong>. A physical system that needs energy to function, but instead of waiting on input to take action, the action it takes is continuous. E.g. ventilation, active corrosion protection.<br>
<strong>Passive hardware</strong>. A physical system that does not need energy to function, and also doesn&apos;t take action. E.g. walls, fences, paint.</p>
<h2 id="conclusion">Conclusion</h2>
<p>These two lists for function and system are just examples of possible categories. Let&apos;s take a look at the three controls we had above and see if we can classify them.</p>
<table>
  <tr>
    <th>Control</th>
    <th>Function</th>
    <th>System</th>
  </tr>
<tr>
  <td>Mixing chemicals</td>
  <td>Substitute hazard</td>
  <td>Behavioural</td>
</tr>
<tr>
  <td>Airbag</td>
  <td>Reduce damage to object</td>
  <td>Active hardware</td>
</tr>
<tr>
  <td>Wind monitoring before working at height</td>
  <td>Eliminate hazard</td>
  <td>Behavioural</td>
</tr>
</table>
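<p>The two-aspect idea can also be captured in a small data model, where every control carries both a function and a system. This is only a sketch; the enum and class names are mine, simply mirroring the categories above:</p>

```python
from dataclasses import dataclass
from enum import Enum

class Function(Enum):
    """What the control achieves."""
    ELIMINATE_HAZARD = "Eliminate the hazard"
    SUBSTITUTE_HAZARD = "Substitute the hazard"
    REDUCE_HAZARD = "Reduce the amount of hazard"
    CONTROL_DEVIATIONS = "Control deviations"
    SEPARATE = "Separate"
    PROTECT_OBJECTS = "Protect objects"
    REDUCE_DAMAGE = "Reduce damage to objects"
    STABILIZE = "Stabilize, repair, and rehabilitate objects"

class System(Enum):
    """The means by which the function is achieved."""
    BEHAVIOURAL = "Behavioural"
    ACTIVE_HARDWARE = "Active hardware"
    SOCIO_TECHNICAL = "Socio-technical"
    CONTINUOUS_HARDWARE = "Continuous hardware"
    PASSIVE_HARDWARE = "Passive hardware"

@dataclass
class Control:
    name: str
    function: Function
    system: System

# The three examples from the table above
controls = [
    Control("Mixing chemicals", Function.SUBSTITUTE_HAZARD, System.BEHAVIOURAL),
    Control("Airbag", Function.REDUCE_DAMAGE, System.ACTIVE_HARDWARE),
    Control("Wind monitoring before working at height",
            Function.ELIMINATE_HAZARD, System.BEHAVIOURAL),
]
```

<p>Because function and system are separate fields, the airbag discussion above disappears: it can be both &apos;reduce damage to objects&apos; and &apos;active hardware&apos; without any contradiction.</p>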
<p>Perhaps brainstorming controls is easier using the hierarchy of control directly because you only have to consider one list. But after the brainstorming, when you want to put a barrier in a category, please consider splitting function and system.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Barrier states]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>A barrier can have different states in an incident, mostly divided into four types: 1) Missing barriers 2) Failed barriers 3) Inadequate barriers and 4) Effective barriers. There are considerable differences in interpretation of these states, and when one or the other should be used. Here I&apos;d like</p>]]></description><link>https://blog.resilium.group/barrier-states/</link><guid isPermaLink="false">6359ab8889af78298e202fae</guid><category><![CDATA[incidents]]></category><category><![CDATA[barriers]]></category><dc:creator><![CDATA[Alex de Ruijter]]></dc:creator><pubDate>Sat, 07 Feb 2015 20:03:56 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>A barrier can have different states in an incident, mostly divided into four types: 1) Missing barriers 2) Failed barriers 3) Inadequate barriers and 4) Effective barriers. There are considerable differences in interpretation of these states, and when one or the other should be used. Here I&apos;d like to give another interpretation, because the existing definitions don&apos;t really work for me.</p>
<h2 id="missingbarrier">Missing barrier</h2>
<p>For me, a barrier is missing if it has never been fully implemented. For instance, a fire extinguisher that was specified in a standard, but never acquired. Implementing a barrier is a long process. It requires identifying the need for it, specifying it, acquiring it, installing it and activating it. If this process breaks down anywhere before the barrier is activated, I call it a missing barrier.</p>
<p>Most people can see the logic in this definition. However, before you accept it, think about the most extreme example. If I install a smoke detector, but forget to turn it on, I would call it a missing barrier because it has never been fully implemented. Even though it is physically present, the implementation failed at the last moment. This can be difficult for people to accept, because if a barrier is physically present, it&apos;s difficult to see it as &apos;missing&apos;. But it is consistent. We can also ask the question: &apos;Could this barrier at one point have performed according to its specification?&apos;. If the answer is no, it&apos;s a missing barrier.</p>
<p>If a barrier was never identified as necessary in the first place (when it should have been because it was an industry best practice for instance), I don&apos;t call it a missing barrier. It&apos;s not useful in an incident analysis to include barriers that were never identified in the organisation at all. Those things should be turned into recommendations.</p>
<h2 id="failedbarrier">Failed barrier</h2>
<p>Most barriers in an incident analysis will be failed barriers. These barriers were correctly implemented at some point and then, for whatever reason, stopped performing their function. This can be because the barrier is removed, is turned off, breaks when challenged or is broken continuously. If the barrier isn&apos;t missing, we can ask: &apos;Has this barrier worked according to its specification?&apos;. If the answer is no, the barrier has failed.</p>
<h2 id="inadequatebarrier">Inadequate barrier</h2>
<p>There are also barriers that work according to their specification, but do not stop the incident from progressing. They might have no effect or a minor effect. These barriers are classified as inadequate. A barrier can be inadequate for various reasons. It can be specified but designed incorrectly (it will perform its intended function, but will be insufficient for the scenario). It can be designed within certain design limits and fail because of extreme circumstances. It can also be designed correctly but still fail because the context within which it operates changes. This can happen if management of change isn&apos;t done correctly.</p>
<p>It is difficult to predict when a barrier will be inadequate, because it depends on the circumstances. A barrier can be adequate for one scenario, and completely inadequate for another. We can ask the question: &apos;Has this barrier worked according to its specification, and stopped the incident?&apos; If the answer is no, the barrier is inadequate.</p>
<h2 id="effectivebarrier">Effective barrier</h2>
<p>Effective barriers are easier. These are the reason an incident doesn&apos;t escalate further. We can ask the same question we asked for inadequate barriers: &apos;Has this barrier worked according to its specification, and stopped the incident?&apos; If the answer is yes, the barrier is effective in these particular circumstances. If multiple barriers are required to stop an incident, I would classify all of them as effective, even if any single barrier wouldn&apos;t have stopped it on its own.</p>
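<p>The questions above form a small decision tree. A minimal sketch in Python, assuming you can answer each question with yes or no for a given barrier (the function and its inputs are illustrative, not a standard):</p>

```python
from enum import Enum

class BarrierState(Enum):
    MISSING = "Missing"
    FAILED = "Failed"
    INADEQUATE = "Inadequate"
    EFFECTIVE = "Effective"

def classify_barrier(could_ever_perform_to_spec: bool,
                     worked_to_spec: bool,
                     stopped_incident: bool) -> BarrierState:
    """Classify a barrier's state in an incident using the questions above."""
    if not could_ever_perform_to_spec:
        return BarrierState.MISSING      # never fully implemented
    if not worked_to_spec:
        return BarrierState.FAILED       # implemented, then stopped performing
    if not stopped_incident:
        return BarrierState.INADEQUATE   # performed to spec, incident progressed
    return BarrierState.EFFECTIVE        # performed to spec and stopped the incident

# The smoke detector that was installed but never switched on:
classify_barrier(False, False, False)    # -> BarrierState.MISSING
```

<p>Note that where multiple barriers jointly stop an incident, each of them answers yes to the last question and is classified as effective.</p>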
<h1 id="whythisisbetter">Why this is better</h1>
<p>So why would this interpretation be better? First, I just find these classification rules easier to work with than others I&apos;ve come across. Second, we actually get some meaning from categorising our barriers like this. It allows us to count how many missing barriers we have, and use that as a quality measure of our implementation process. We can count failed barriers, and use it as an indication of how our barriers perform in operations. We can count inadequate barriers, and use it as a measure of our design process (both initial design and within change management), and finally we can count effective barriers and make sure they keep working, because they&apos;re apparently the only ones that work.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[30 years after Bhopal]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>In the media today you can read various accounts on Bhopal, exactly 30 years after the disaster. This video makes you think about how organisations often fail to deal with long term consequences of accidents. Often they already struggle to effectively implement corrective measures directly after an incident analysis is</p>]]></description><link>https://blog.resilium.group/30-years-after-bhopal/</link><guid isPermaLink="false">6359ab8889af78298e202fb3</guid><category><![CDATA[incidents]]></category><dc:creator><![CDATA[Alex de Ruijter]]></dc:creator><pubDate>Wed, 03 Dec 2014 13:26:29 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>In the media today you can read various accounts on Bhopal, exactly 30 years after the disaster. This video makes you think about how organisations often fail to deal with long term consequences of accidents. Often they already struggle to effectively implement corrective measures directly after an incident analysis is done, let alone years after the fact with indirect (but very real) consequences that might not have been apparent at the time.</p>
<iframe width="560" height="315" src="//www.youtube.com/embed/IwPSDMUtNmk" frameborder="0" allowfullscreen></iframe><!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Human error is like gravity]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>There are still incidents that get summarised to &apos;it was caused by human error.&apos; This isn&apos;t useful. It&apos;ll be difficult to find an incident that doesn&apos;t include human error somehow. It&apos;s problematic because the human condition as a</p>]]></description><link>https://blog.resilium.group/human-error-is-like-gravity/</link><guid isPermaLink="false">6359ab8889af78298e202fa2</guid><category><![CDATA[incidents]]></category><category><![CDATA[human factors]]></category><dc:creator><![CDATA[Alex de Ruijter]]></dc:creator><pubDate>Wed, 26 Nov 2014 20:58:26 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>There are still incidents that get summarised to &apos;it was caused by human error.&apos; This isn&apos;t useful. It&apos;ll be difficult to find an incident that doesn&apos;t include human error somehow. It&apos;s problematic because the human condition as a whole is unlikely to get fixed anytime soon.</p>
<p>Let the late <a href="https://en.wikipedia.org/wiki/Trevor_Kletz?ref=blog.resilium.group">Trevor Kletz</a> explain it to you in a few words:</p>
<iframe width="420" height="315" src="//www.youtube.com/embed/3o80EJ1og7k" frameborder="0"></iframe><!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[The root cause fallacy]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>After an incident has occurred, most investigations will try to find the root cause on a system or management level. To find the underlying reasons for an incident, instead of focusing on the more superficial ones.</p>
<p>An important reason for doing this is the assumption that by fixing a single</p>]]></description><link>https://blog.resilium.group/the-root-cause-fallacy/</link><guid isPermaLink="false">6359ab8889af78298e202fa4</guid><category><![CDATA[rca]]></category><category><![CDATA[incidents]]></category><dc:creator><![CDATA[Alex de Ruijter]]></dc:creator><pubDate>Wed, 22 Oct 2014 19:11:00 GMT</pubDate><media:content url="https://blog.resilium.group/content/images/2022/10/467295936_a753a5b693_h.jpg" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://blog.resilium.group/content/images/2022/10/467295936_a753a5b693_h.jpg" alt="The root cause fallacy"><p>After an incident has occurred, most investigations will try to find the root cause on a system or management level. To find the underlying reasons for an incident, instead of focusing on the more superficial ones.</p>
<p>An important reason for doing this is the assumption that by fixing a single root cause, we&apos;re not only fixing the incident at hand, but also many related incidents. This gets us the maximum bang for our buck. But is this a correct assumption?</p>
<p>If something sounds easy, it&apos;s generally incomplete. Root causes are no exception. What single thing can you fix that will have far reaching effects? Maintenance? Leadership? Culture? Whatever comes to mind will likely fall apart into many small, albeit related, things. For instance, fixing a maintenance management system likely includes fixing a lot of specific maintenance procedures, educating individuals, redesigning work environments etc.</p>
<blockquote>
<p>At best, root causes are a way to group a list of specific issues.</p>
</blockquote>
<p>Why can&apos;t we identify just one specific issue to fix on a system level that will prevent multiple incidents? Because preventing different incidents is done by fixing many specific issues, not just one.</p>
<p>Does this mean we have to abandon our search for root causes? Probably not. It&apos;s still good to look for improvements on a system level to avoid symptomatic fixes, but two things come to mind. First, finding a single root cause to fix is more work than it sounds, as it breaks apart into many specific tasks. Second, in some cases it might be better to spend all that effort at the sharp end, fixing specific things there, instead of on that &apos;one root cause&apos; far away. Because if that one cause doesn&apos;t exist, we may get the biggest bang for our buck not by fixing a lot of specific system issues, but by fixing a lot of specific operational issues instead.</p>
<p><small>Image by <a href="https://www.flickr.com/photos/international-festival/467295936?ref=blog.resilium.group" title="CONFETTI by Tor Lindstrand, on Flickr">Tor Lindstrand</a></small></p>
<!--kg-card-end: markdown-->]]></content:encoded></item></channel></rss>