
Date: 1/21/2015



Predictive Failure Analysis™

Planning for your worst business nightmare by figuring out what it is!

 

By guest author: Jack Hipple, Innovation-TRIZ

“The only emotion I’ve ever felt with a new car is ‘I just hope it’s quick enough.’ I anticipate failure more than I enjoy possible success.”
— Sir Frank Williams, Formula One Auto Racing Team Owner

In the innovation and creativity world, we spend a lot of time solving problems or designing new things to replace or improve what already exists. Usually, the problem is complex or the solution is not obvious (or it wouldn’t need creativity, right?).

In an age when we are increasingly aware of the threat of hurricanes, tsunamis, earthquakes, flu pandemics, and terrorism, we have a new kind of problem: preventing natural and man-made disasters from harming not only individuals but also our infrastructure and businesses. This involves trying to identify, ahead of time, the potential causes of failure in a system, product, or organization. The typical approach to this type of problem is a checklist, developed by an experienced practitioner in a given area on the basis of years of practical experience. From these checklists, and/or using conventional ideation techniques, we develop hypotheses about what might go wrong and plan from them. The problem is that because this is based on what we already know or anticipate, we don’t always get to the root cause, and we consequently have to deal with a new symptom or related problem again at a future date.

There continue to be large numbers of industrial accidents and disasters where, after the fact, it becomes obvious what happened. The same is true of natural hazard preparedness: we not only repeat the same mistakes of the past in equipment and process design, but we also fail to anticipate the problems that arise. In hindsight the cause usually appears obvious, yet it was not on the checklist. So it gets added to the checklist for the next time. And so on and so on. A very inefficient, and potentially dangerous, way to find out EVERYTHING that should be on the list!

The Usual Approach


Those of you familiar with TRIZ problem solving know that it has an overriding algorithm and a toolkit of supporting tools. Many don’t know that within that toolkit is a unique process for solving failure-related problems: we use the simple TRIZ algorithm in reverse. Normally, we use TRIZ as follows:
  1. Define the ideal state for the system, product, etc.
  2. Identify the resources available for achieving this state
  3. Identify and resolve the contradictions that prevent this state from being achieved, using the known problem-solving principles and algorithms (of which there are a limited number).

Of course there is more to it than these three simple headlines, but for illustration, they’ll suffice.
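
For illustration only, here is a minimal sketch, in Python, of how the three pieces of a forward TRIZ analysis might be written down as a simple record. The class name, field names, and example content are all invented for this article; this is not TRIZ software.

    # A sketch of the forward TRIZ framing: an ideal state, the
    # available resources, and the contradictions blocking it.
    # All names and example content here are illustrative only.
    from dataclasses import dataclass, field

    @dataclass
    class TrizAnalysis:
        ideal_state: str                                          # step 1
        resources: list[str] = field(default_factory=list)        # step 2
        contradictions: list[str] = field(default_factory=list)   # step 3

    analysis = TrizAnalysis(
        ideal_state="All business records survive any natural disaster.",
        resources=["off-site storage", "network links", "staff procedures"],
        contradictions=[
            "records must be instantly accessible, yet stored far from the site"
        ],
    )
    print(analysis.ideal_state)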

Thinking backwards can be a good thing!


If we’re trying to prevent a failure, then we need to invert this process, using a method called Predictive Failure Analysis™. We’ll use an example of a potential business interruption of the sort that we are now much more concerned about than in the past:
  1. Define the ideal state: It would be great if we had total recovery of all data records from a business affected by a natural disaster (hurricane, flu pandemic, tornado, terrorist attack).
  2. Invert this statement: We DO NOT want the records to be available after the natural disaster.
  3. Exaggerate the inverted ideal statement: We want all the records of a business to be TOTALLY destroyed and NEVER be recoverable after a natural disaster.
  4. How would I accomplish this? What resources are required?

Now this may sound simplistic, but what we have done is change the basic question from “What could go wrong?” (a checklist type of approach) to “How do I make it go wrong?”, a proactive saboteur’s question. We’ve changed the question from “what?” to “how?” This puts people’s brains in a different quadrant: they are now permitted to be evil geniuses and do things not normally allowed! In our experience with projects such as electronic bank fraud, food contamination, and chemical releases, this process has always produced either 1) answers that were not obvious at first, 2) greatly improved answers, or 3) a best answer that hadn’t even been considered.
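
Because the question-flipping is mechanical, the four steps can even be written out as a small routine. The sketch below is a Python thought experiment; the function name and wording templates are our own illustration, not the wording of any published PFA tool.

    # A sketch of the four PFA steps as prompt generation.
    # The templates are illustrative assumptions, not the
    # actual wording of any PFA software.
    def predictive_failure_prompts(ideal_state: str) -> list[str]:
        return [
            f"1. Ideal state: {ideal_state}",
            f"2. Inverted: We DO NOT want this: {ideal_state}",
            (f"3. Exaggerated: How do we make the opposite of "
             f"'{ideal_state}' happen TOTALLY and PERMANENTLY?"),
            "4. Resources: List everything a saboteur would need to do step 3.",
        ]

    for prompt in predictive_failure_prompts(
            "Total recovery of all data records after a natural disaster"):
        print(prompt)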

An Example


In one project with a chemical plant that released a toxic chemical from a scrubber (despite being in full compliance with all the required checklist processes from the company, OSHA, and the EPA), it turned out that the flow to the scrubber was inadequate to contain the normal process flow. This was something that should not have happened and was already on the checklist, but in the course of this analysis the Predictive Failure Analysis algorithm was applied in the following way:
  1. Ideal State: We want no release of chemical from the system.
  2. Invert Ideal State: We want a release of chemicals to the atmosphere.
  3. Exaggerate: We want the system to leak ALL THE TIME and cause severe environmental damage and human health impact all over the surrounding geographic region.
  4. What resources are required?


The chemical industry’s normal hazard analysis process (HAZOP) is to review every pressure, flow, temperature, and so on, and to ask the consequences of each not being at its specified point (Higher? Lower?), but not to question the fundamental design. Approaching this problem with Predictive Failure Analysis, we ask, “What resources are needed to have a leak?” The group starts out naming things like high pressure and high temperature, but none of these is sufficient without another resource: A HOLE.

We then ask about the hole at the top of the scrubbing system. It turns out it is there because we have to vent an inert gas that is bubbled into a storage tank; the inert gas is used to measure differential pressure and thereby obtain a level reading. Once we realize this, we ask whether there are indirect ways of measuring level that do not require a gas which must then be scrubbed. The answer is yes: an externally mounted level device that uses the magnetic resonance properties of the liquid. Thus the root cause of the problem disappears, and the hole and its associated scrubbing system are no longer required.
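
The logical move in that story, that a release requires high pressure AND a hole together, so that eliminating the hole eliminates the failure mode entirely, can be made explicit. Here is a minimal sketch in Python; the failure-mode data is invented to mirror the scrubber example.

    # Sketch: treat each failure mode as a conjunction of required
    # resources. Any required resource that a design change can
    # eliminate removes the whole failure mode. The data below is
    # a made-up mirror of the scrubber example.
    failure_modes = {
        "toxic release to atmosphere": {"pressure", "toxic gas", "hole"},
    }
    why_it_exists = {
        "pressure": "inherent to the process",
        "toxic gas": "inherent to the process",
        "hole": "vents the inert gas used for bubble-type level measurement",
    }
    eliminable = {"hole"}  # an external magnetic level gauge removes the vent

    for mode, required in failure_modes.items():
        for resource in sorted(required & eliminable):
            print(f"'{mode}' requires '{resource}' "
                  f"({why_it_exists[resource]}); eliminate it and "
                  f"the failure mode disappears.")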

Non-technical applications


Though this was a technical example, consider the use of this thinking process in a more general area of concern:
  1. Ideal State: It would be great if we had organizational communication at the time of a natural disaster (Invert: We want to make sure that NO ONE ever finds out ANYTHING that’s going to happen prior to, during, and after the disaster).
  2. Ideal State: It would be great if we could improve the records recovery process after a natural disaster (Invert: We want to make sure that we are NEVER able to recover any company records, and go out of business within a matter of days).
  3. Ideal State: It would be great if my customers were prepared for what would happen to the supply of my product in the event of a natural disaster (Invert: We want to make sure my customers are NOT aware of my business situation and so are not prepared for a change in supply of my product in the event of a natural disaster).

In practice, there are additional tools and techniques that assist this basic algorithm (software, cause-and-effect diagramming tools, and potential causes derived from past information in other fields, as in “normal” TRIZ problem solving), but just changing your thinking process to that of a saboteur will produce new answers you never thought of, while allowing you to (productively!) play out your secret fantasies of being an evil genius!
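
As one last illustration of the cause-and-effect diagramming mentioned above, a cause graph can be walked backwards from the exaggerated failure to the conditions that enable it, each of which becomes a countermeasure target. The graph below is invented for the records-recovery example.

    # Sketch: a tiny (invented) cause-and-effect graph for the
    # records-recovery example, walked backwards from the
    # exaggerated failure to its root enabling conditions.
    causes = {
        "all records permanently unrecoverable": [
            "records stored at a single site",
            "no off-site backup",
        ],
        "records stored at a single site": ["storage policy never reviewed"],
        "no off-site backup": ["backup procedure never implemented"],
    }

    def root_causes(effect: str) -> list[str]:
        children = causes.get(effect)
        if not children:          # no recorded causes: treat as a root
            return [effect]
        roots: list[str] = []
        for child in children:
            roots.extend(root_causes(child))
        return roots

    print(root_causes("all records permanently unrecoverable"))
    # -> ['storage policy never reviewed', 'backup procedure never implemented']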



 