Scottish Continuity
 
Our Blog
Friday, October 12 2018

An interesting article that was copied straight from Financial Risk Management for Dummies - Gordon Mackie

10 DRAMATIC ONLINE ILLUSTRATIONS OF RISK

FROM

Financial Risk Management For Dummies

By Aaron Brown

One of the fun parts of being a risk manager is that you never have to make dull presentations. Your job is to disrupt thinking and force broader consideration of potential future events, and you never do that by putting people to sleep.

Whether you’re arguing for people to take more risk or to take more care, to go boldly or to dig defensive trenches, to discard stale canards or to adhere to traditional wisdom, you can find dramatic illustrations to make your case on the Internet.

To help with that, here are ten favourites. Even stable sites change links, but if they do, you should have no trouble finding information about any of these ten events using a search engine.

DEALING WITH THE UNEXPECTED – THE BOSTON MOLASSES FLOOD

At first glance, the Boston Molasses Flood story seems like a routine urban industrial disaster. A 50-foot molasses storage tank at the Purity-Distilling Company exploded, probably due to the build-up of carbon dioxide as the molasses fermented. (If the fermentation had progressed longer, the molasses would have converted into rum, and the story would be the Boston Rum Party instead of the Boston Molasses Flood.) The ensuing molasses flood killed 21 people, injured 150, and levelled two square blocks and an elevated train station.

Three lessons illustrate important risk management principles:

  • Heed warning signs: When people complained about molasses leaking out from the seams of the tank, the company took swift action. It repainted the tank brown so the leaks weren’t as noticeable.

Disasters usually have warning signs beforehand. Sometimes the best strategy is to heed the warnings, investigate the problem and deal with it before the bad things happen. Often, the best strategy, and in any event the second-best strategy, is to do nothing until things get worse and the problem is clear or the problem goes away on its own. The most common strategy is the worst – cure the symptom without attempting to diagnose the disease.

  • Anticipate unconventional dangers: Molasses isn’t usually considered to be dangerous. When you read about a munitions factory explosion, or toxic spill from a chemical plant or environmentally damaging leak from an offshore oil rig, you’re not surprised. But molasses seems safe and friendly. No one ever proposed a disaster movie, Deadly Sweet, about killer molasses. Before the disaster, there were no inspection requirements for molasses tanks, although there were all kinds of regulations for things considered dangerous.

The moral? When you’re thinking about risk, don’t focus solely on the dangerous stuff. In fact, the dangerous stuff is usually the least of your worries because everyone focuses on it. Your portfolio may tank due to losses on supposedly safe AAA bonds while the speculative growth stocks bring great returns.

  • Don’t assume normalcy: Molasses is a non-Newtonian fluid, which is just a fancy way of saying that its viscosity (its thickness and stickiness) changes under different conditions. When people say, ‘as slow as molasses’, they’re thinking of unstressed molasses. But if molasses is squeezed or shaken enough, as in an explosion, it can flow almost as easily as water. The Boston Molasses Flood travelled at 23 miles per hour and inundated two city blocks in less than 20 seconds.

That would have been bad enough if the molasses had remained in that thin fluid state. It would have knocked things over and moved them around and maybe some people would have drowned. But the terrifying thing is that when the expansion pressure eased, the molasses reverted to its normal thick, sticky state, and literally ripped apart the people, animals and things (including buildings and sections of rail track) caught up in it.

The message for risk management is that you can’t rely on your intuition about how things work under normal conditions. You may have been prepared for a water flood, or even a molasses flood, but only sophisticated calculations would lead you to prepare for a flood that could move as fast as water but stick as tightly as molasses.

For more information, take a look at Stephen Puleo’s book, Dark Tide: The Great Molasses Flood of 1919 (Beacon Press).

FLYING DIRECTLY INTO DANGER – THE MOUNT EREBUS DISASTER

If you ask people to list the main causes of airplane crashes, controlled flight into terrain (CFIT) is seldom high on the list. Shockingly, however, it accounts for 25 per cent of crashes, which makes it common enough to need a name. It means the aircraft flew into the side of a mountain or into the ground without mechanical malfunction or crew incapacity – just a plane in good condition piloted by a competent person in good condition that flies into the ground.

One of the most famous examples of CFIT is Air New Zealand Flight TE901. This craft was on an Antarctic sightseeing flight under the command of an experienced crew. The airplane and all systems were in good order, and the weather was clear. Yet on 28 November 1979, the craft flew into the side of Mount Erebus on Ross Island, Antarctica, killing all 257 people on board.

Without going into the complicated and still-controversial details of the accident, here is one aspect worth highlighting: The plane’s autopilot was programmed with a route, but the programmed flight path was changed without informing the crew. So when the pilot took manual control, the plane was 30 miles east of where he thought it was. Although the mountain should have been plainly visible to the crew, the brain finds it easier to manufacture the scene it expects than to register what is actually there.

The risk management lesson is that CFIT is common. If an expert pilot, with his life and the life of his passengers and crew at stake, can fly straight into a mountain in good visibility, anyone in your organisation can do incomprehensible things that cause disaster. Never assume that no one would ever do X. However crazy or dangerous X is, someone may do it sometime. Don’t assume everyone can see the looming disaster just because the disaster is plainly visible.

IGNORING THE OBVIOUS SOLUTION – THE BP COFFEE SPILL

This comedy sketch satirising the reaction of British Petroleum executives to the 2010 Deepwater Horizon oil spill in the Gulf of Mexico illustrates some important risk management points. Show the parody skit to people and ask them what mistakes were made. The most common answer, by far, is that the actors ignore the simple, obvious solution to the problem. Of course that’s what drives the humour in the piece. Watchers feel superior, saying to themselves, ‘Of course we would never act like that.’

In fact, people often do act like that and you won’t find it as funny when the damage is real. Three less obvious mistakes demonstrated in the video are applicable to risk management:

  • Panic: This is also part of the joke, because the worst downside to a coffee spill isn’t that bad. But in the face of a real danger, panic can destroy any chance to salvage the situation and can make it far worse. No situation is so bad that you can’t make it worse by panicking.
  • The imperative to do something: Even when the actors elect to do nothing, it’s a decision to observe for three hours, followed by despair at having wasted the time. Doing nothing is always an option and often the best option. A lot of problems go away or get worse, but in getting worse, they make the appropriate solution clear. Doing nothing can also clear the field for someone else with better ideas to try to make things better. At least doing nothing doesn’t make things worse.
  • Tunnel vision: Don’t focus exclusively on solving the problem to the exclusion of making contingency plans in case it cannot be solved. The actors don’t inform others about the issue, consider what the consequences may be or take actions to alleviate the potential harm. All their efforts are devoted to the spilled coffee on the table; they forget that an entire world exists outside. This kind of tunnel vision is very common in crises.

So laugh at the skit, but consider that you may look just as silly in a real crisis unless you remember the following: don’t panic, doing nothing is an option, and make contingency plans in case you can’t solve the problem.

FAILING FAIL-SAFES – THE 1996 CHUNNEL FIRE

The tunnel under the English Channel, known as the Chunnel, which connects Folkestone, Kent, in the United Kingdom, with Coquelles, Pas-de-Calais, in northern France, is one of the great engineering achievements of the 20th century.

When the Chunnel was designed and built, there was a lot of controversy in the engineering profession about several aspects of the design, especially the plans for dealing with fires in the tunnel.

One particularly widely cited piece of information was a calculation that a serious fire in the Chunnel would occur on average only once every 840 years, and a fire with fatalities or an extended closure of the Chunnel less than once in 10,000 years. Depending on how you count, there have been from three to six serious fires in the 20-year operating history of the Chunnel, two of which closed the Chunnel for six months each. Although fatalities have been avoided, there have been numerous injuries, mainly from smoke inhalation.

The ‘1 in 840 years’ statistic came from a calculation listing all the things that would have to go wrong to have a serious fire – the fire would have to start, it would have to escape detection by smoke and fire detectors as well as human observers until it got out of control, and so on. The report estimated probabilities for all these things and multiplied them together to get a very low probability of a serious fire.

Everything listed in the calculation did in fact happen in the 1996 Chunnel fire. The fire broke out in France before the train entered the Chunnel. Both smoke and flame detectors on board the train failed. Guards noticed the smoke and called the operations booth, but no one was manning the booth. Eventually the guards contacted the train engineer directly, but by that time the train was in the Chunnel. The instructions were for the train to continue through to the UK where the fire could be extinguished. However, the engineer instead elected to stop the train and evacuate the passengers. Once the train was stopped, the fire was concentrated in one place, and it destroyed the Chunnel electrical and ventilation systems, which frustrated the remaining detection and mitigation features.

There weren’t 20 independent events that all happened to go wrong at the same time due to an extreme run of bad luck; all these things were connected. The lesson is that you’re only correct to multiply probabilities together if they’re independent.
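
To see how much difference the independence assumption makes, here is a minimal sketch (with illustrative probabilities only, nothing taken from the actual Chunnel report) comparing the multiply-everything-together estimate with a situation where one common cause, say neglected maintenance, can defeat every safeguard at once.

    import random

    # Illustrative sketch: five safeguards, each assumed to fail 10% of the time.
    # Multiplying the probabilities assumes the failures are independent.
    p_fail = 0.10
    n_safeguards = 5
    independent_estimate = p_fail ** n_safeguards      # 1 in 100,000

    # Now let a common cause (neglected maintenance, say, on 10% of days)
    # knock out every safeguard at once; otherwise failures stay independent.
    def all_safeguards_fail(p_common=0.10):
        if random.random() < p_common:                  # common-cause day
            return True
        return all(random.random() < p_fail for _ in range(n_safeguards))

    trials = 200_000
    observed = sum(all_safeguards_fail() for _ in range(trials)) / trials

    print(f"independent estimate: {independent_estimate:.5f}")   # ~0.00001
    print(f"with a common cause:  {observed:.4f}")               # ~0.1, four orders of magnitude larger

Each individual safeguard still looks just as reliable on its own; it is only the joint failure probability that explodes.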

Here are two risk management lessons from this experience:

  • Whenever someone argues a disaster is unlikely because of a long list of things that would have to go wrong first, ignore all but the two least likely items on the list. For one reason, when two unlikely things happen at once, you’re likely in a scenario you failed to anticipate, so you can’t trust any of the other items. For another, the existence of multiple levels of safety precautions nearly always leads to people neglecting some of them because they never matter. Why should a sentry bother to stay awake when an automated proximity alarm is present? Why bother to make sure that the fire extinguishers are charged when you have a sprinkler system? A few high-performance, high-risk organisations can maintain multiple levels of high security, but such an organisation is the exception, not the rule.
  • Figure out how to salvage a situation when your fail-safes fail. The Chunnel was much more prone to fires than its designers thought, but those disasters haven’t killed anyone, thanks to the robust contingency and rescue systems built into the project, as well as the high level of performance of emergency crews. Predicting and monitoring dangers is good and preventing them is better; but however well you do those things, make sure to spend some effort thinking about how to rescue the situation after all else fails.

SHOWING BRAVADO – HARRY TRUMAN VERSUS MOUNT ST HELENS

At the beginning of March 1980, geologists started to see signs that Mount St Helens in Washington state might be preparing to erupt. As the evidence mounted over the next two months, people living on or near the mountain were evacuated. But 84-year-old World War I veteran Harry Randall Truman became a folk hero for refusing to leave his home of 52 years where he lived with 16 cats beside Spirit Lake. Among other memorable quotes he claimed, ‘The mountain ain’t gonna hurt me, boy.’

Harry’s unflappable courage was celebrated in song, poem and story. He was a favourite interview subject. Everyone loved Harry. Except, as it turned out, the mountain. On 18 May 1980, the mountain did hurt Harry, and his 16 cats, and his home, and 56 other people and hundreds of square miles of land and billions of dollars worth of property.

Inspiring-sounding bravado delivered in folksy terms will always be popular. It works in movies but not in real life. Bravado is the opposite of risk management. Even after Harry was killed, people continued to celebrate his stubbornness rather than mourn his foolish denial of reality.

Whenever you hear someone downplaying a risk based on romantic nonsense, remember that Harry Randall Truman died that way.

BEING ABLE TO ANSWER QUESTIONS – JON CORZINE ON CSPAN

Jon Corzine had an illustrious career. He was head of Goldman Sachs, a US senator and the governor of New Jersey before taking over the commodity brokerage firm MF Global. The firm collapsed on 31 October 2011 due to losses from bets on European sovereign debt. Shockingly, $1.5 billion of customer funds appeared to be missing, and six weeks later Corzine appeared before Congress to answer questions about the shortfall.

MF Global’s basic business was simple. It was a futures commission merchant (FCM), meaning that it held accounts for individuals and institutions that wanted to place bets in the futures markets. If the bets win, the profits are placed in the customer’s account with the FCM; if the bets lose, the losses are taken out of the customer’s account with the FCM. The important point is that the money in the customer accounts belongs to the customer, not to the FCM. The FCM is legally required to keep those funds segregated from its own funds. Of course, as a practical matter, the location of the funds can get complicated.
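
The segregation rule itself is easy to picture as a data structure. Here is a deliberately toy sketch (hypothetical names and numbers, nothing to do with MF Global’s real systems): customer balances sit in their own pool, and settling a bet only ever moves money within that pool, never into or out of the firm’s own capital.

    # Toy illustration of segregated customer funds at a futures commission merchant.
    class ToyFCM:
        def __init__(self, firm_capital):
            self.firm_capital = firm_capital      # the FCM's own money
            self.customer_accounts = {}           # segregated customer funds

        def open_account(self, name, deposit):
            self.customer_accounts[name] = deposit

        def settle_bet(self, name, profit_or_loss):
            # Wins are credited to, and losses debited from, the customer's account;
            # the firm's own capital is never touched.
            self.customer_accounts[name] += profit_or_loss

        def total_segregated(self):
            return sum(self.customer_accounts.values())

    fcm = ToyFCM(firm_capital=50)
    fcm.open_account('alice', 100)
    fcm.settle_bet('alice', -30)                  # a losing bet comes out of alice's account
    print(fcm.total_segregated(), fcm.firm_capital)   # 70 50 -- customer money stays separate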

MF Global had tens of thousands of customers trading futures contracts around the world, some of them making hundreds of trades per day. So you can easily understand why there may be a dispute about the amounts in individual accounts. You can also understand someone saying something like, ‘The customer accounts in the UK and Canada were seized by bankruptcy administrators who want to use them to pay off other creditors.’ But what’s difficult to understand is Corzine saying that he had no idea where the money was or who was responsible for it. He couldn’t even remember whether or not he had signed the required documents certifying that the money was segregated properly. Again and again he was asked simple questions that should have had simple answers, and he never knew or couldn’t remember, though he did know that everything was complicated.

Every risk manager should watch this video. You cannot predict or prevent every disaster, and pretending that you can is misleading to others and stressful to yourself. A reasonable goal is to be able to answer simple questions about what happened, and to have asked beforehand the questions that Congress is likely to be asking afterwards.

To be fair, Corzine may have been playing dumb in part based on legal advice. However, forget that and, while you’re watching him, think about the answers you would like to give if it were you on the hot seat. You’d want to know where the money was or to name the person you trusted with it and what that person did. You’d want to describe the controls preventing people from misusing the money, not in technical detail, but in simple terms anyone can understand. If the controls are too complicated to explain to Congress, they’re too complicated to be robust.

The next step is to think about your risk management responsibilities. Imagine that a disaster has occurred, and that you’re being grilled by Congress. If you can think of any question they may ask you that you’d be embarrassed if you didn’t know, go out and ask it now, before the disaster.

MOVING FAR FROM CENTRE – THE 25 STANDARD DEVIATION MOVE

In August 2007, at the start of the great financial meltdown of 2007–2009, David Viniar, the chief financial officer (CFO) of Goldman Sachs, famously said, ‘We were seeing things that were 25 standard deviation moves several days in a row.’ The quotation was ridiculed by people who know only a little statistics, but its true meaning is worth heeding for a risk manager.

A quincunx is a device that demonstrates how random bounces can produce a neat bell-shaped curve, and computer simulations of one are easy to find online. In the simulation the book refers to, one standard deviation is two bins wide, so the bins on the far left and the far right are four standard deviations (eight bins) away from the centre. Only one in 16,384 balls lands in one of these bins, and if you watch for a few minutes, you can get a feel for how unusual a four standard deviation move is.

To see a ball land 25 standard deviations (50 bins) away from the centre, you’d have to watch 7 followed by 187 zeros balls. So on a quincunx, a 25 standard deviation event is essentially impossible. Under a normal distribution, 25 standard deviation events are a little more common, but the odds are still about 1 in 7 followed by 137 zeros.
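
If you want to get the same feel without hunting for the website, here is a minimal quincunx simulation (a sketch that assumes a 16-row board, which may not match the online simulation exactly); it shows how rarely a ball lands four standard deviations from the centre.

    import random
    from collections import Counter

    # Each ball bounces left or right at every row of pins; its final bin is the
    # number of rightward bounces. With 16 rows, one standard deviation is
    # sqrt(16)/2 = 2 bins, so the outermost bins (0 and 16) sit 8 bins, or
    # 4 standard deviations, from the centre bin (8).
    rows = 16
    balls = 200_000

    bins = Counter(sum(random.random() < 0.5 for _ in range(rows)) for _ in range(balls))

    extreme = bins[0] + bins[rows]        # balls landing 4 standard deviations out
    expected = balls * 2 * 0.5 ** rows
    print(f"balls in the two outermost bins: {extreme} (expected about {expected:.0f} of {balls:,})")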

But not all 25 standard deviation events are anywhere near that unlikely. One way to translate a statement about an N standard deviation event into something intuitive: flipping a fair coin N² times in a row and getting all heads is an N standard deviation event. So a one standard deviation event, flipping a coin once and getting heads, is common. A two standard deviation event, flipping four heads in a row, is rarer, but it certainly happens. A three standard deviation event, nine heads in a row, is getting decidedly unusual, and as you go to four, five and six standard deviation events, you really don’t expect to see many, even if you flip coins every day for a living. A 25 standard deviation event is like flipping 625 heads in a row, which just doesn’t happen.

Another kind of N standard deviation event is to have one coin with heads on both sides and N² coins with tails on both sides. Pick one coin at random and flip it repeatedly. Getting even one head, and therefore any number of heads in a row, is an N standard deviation event. Now the chance of a 25 standard deviation event is 1 in 626 (25² + 1), which is rare but entirely plausible.

Other types of 25 standard deviation events also exist, with different probabilities. So whenever someone tells you about an N standard deviation event, remember that the probability may be as high as 1 in N² + 1, or as low as that of flipping N² heads in a row (or even lower).
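
The arithmetic behind the two translations is easy to tabulate. The short sketch below (just the calculations described above, not anything from the book’s own materials) prints the odds under each interpretation for a range of N.

    # For an "N standard deviation event":
    #   fair coin   : N*N heads in a row                      -> probability 0.5 ** (N*N)
    #   trick coins : draw the one double-headed coin out of
    #                 N*N + 1 coins                            -> probability 1 / (N*N + 1)
    for n in (1, 2, 3, 4, 5, 6, 25):
        fair_odds = 2 ** (n * n)     # "1 in fair_odds" chance of N*N heads in a row
        trick_odds = n * n + 1       # "1 in trick_odds" chance of drawing the trick coin
        print(f"N={n:>2}:  fair coin 1 in {fair_odds:.3g},   trick coins 1 in {trick_odds}")

For N = 25, the two interpretations give odds of roughly 1 in 2^625 and 1 in 626 respectively, which is the gap the text describes.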

LEARNING FROM DISASTER – THE TACOMA NARROWS BRIDGE

In 1938, construction began on a bridge to connect the town of Tacoma on the eastern side of Puget Sound in Washington state to the Kitsap Peninsula on the western side. The Tacoma Narrows bridge opened on 1 July 1940 and collapsed spectacularly four months later. The disaster is particularly famous because it was captured on film.

Building the Tacoma Narrows bridge presented unusual challenges, which were met in innovative ways. The bridge was the third-longest suspension bridge in the world during its brief lifespan, but it was only two lanes wide instead of the typical six or eight lanes. During construction of the bridge, workers nicknamed it Galloping Gertie because it had a strong transverse vibration (meaning the east end of the road would rise while the west end would decline, and vice versa, so drivers were always driving uphill or downhill). A lot of effort was devoted to controlling this vibration and making sure that it didn’t endanger the bridge or travellers. In the end, it wasn’t the transverse vibration but the torsional, or twisting, vibration that broke the bridge and brought it down.

A couple of details of the story are worth a risk manager’s consideration:

  • False explanations often stick. In the aftermath of the bridge collapse, engineers knew immediately that torsional, or twisting, vibration was the cause. But the myth somehow arose that the Tacoma Narrows bridge collapsed due to resonance of transverse, or lengthwise, vibrations. This erroneous explanation can be found in physics books and popular accounts today.

Typically, glib but false explanations elucidate a valid principle (in this case, resonance) that just doesn’t happen to be the most relevant principle for explaining the disaster. Nevertheless, the dramatic pictures and simple story become beloved examples for people (physicists in this case) who care about the principle more than the actual disaster. Although the faulty explanation may lead to more entertaining physics texts, it subverts the important risk management lessons.

  • Not all disasters are bad. No one was killed in the Tacoma Narrows collapse, and the economic loss and disruption was small. The bridge engineer Othmar Ammann wrote: ‘The Tacoma Narrows bridge failure has given us invaluable information. It has shown that every new structure that projects into new fields of magnitude involves new problems for the solution of which neither theory nor practical experience furnish an adequate guide. It is then that we must rely largely on judgement and if, as a result, errors or failures occur, we must accept them as a price for human progress.’

The new ideas and techniques used to build the bridge revealed a new type of bridge problem. This problem of torsional vibrations destroyed the Tacoma Narrows bridge but led to improvements in all bridges. Civil engineer Henry Petroski wrote, ‘No one wants to learn from failure, but we don’t learn enough from success to advance the state of the art.’ The alternative to disasters like the Tacoma Narrows bridge is to never try new things.

REACTING TO CRISIS – THE OFFICE FIRE DRILL

Although the fire drill from the television show The Office is intended as comedy, it by no means exaggerates the irrational and uncoordinated behaviour that characterises reactions to crisis. In calm times, you can easily assume that everyone reacts sensibly to unusual or unexpected events. Risk managers know differently.

The way to avoid dysfunction is to think through common situations in advance. You can then come up with useful precautions, decide on the appropriate procedures ahead of time, and train and drill people in them. Don’t just hand out instruction sheets or make people click through an online tutorial; you need to engage people in realistic training.

Everyone pays lip service to the information in the preceding paragraph, but most people don’t really do it. You may worry that you won’t think of every disaster that may befall your firm, but that’s not the important part. Preparing for the things you can foresee gives you the tools and discipline to react to what actually happens. Preparing for a fire, or a stock market crash, or a bank failure or a liquidity shock gives you general capabilities that can help in different types of trouble.

Another excuse for neglecting drills is that you don’t know any good procedures to deal with crises. How do you write instructions for a terrorist bomb, or a high-ranking embezzler or a cyber-attack? Such things can happen in so many ways, so what can you do? That’s when you watch this video and realise that you don’t need a great plan, or even a good plan; all you need to do is find a plan that’s better than what the characters in The Office did. That’s your alternative, and it’s an easy benchmark to beat.

FACING UNFAVOURABLE ODDS – WILFRIED DIETRICH VERSUS CHRIS TAYLOR

At 6 feet 5 inches tall and weighing 412 pounds, American wrestler Chris Taylor was the largest person to compete in the 1972 Olympics. He was not only large, but phenomenally strong and quick and a gifted wrestler to boot. He was a heavy (pun intended) favourite to win the Greco-Roman wrestling gold medal at the 1972 Olympics in Munich. It was even suggested that he be awarded the medal without competing so as to prevent injury to the other wrestlers.

In the match, Taylor faced Wilfried Dietrich, a 6-foot tall, 260-pound German wrestler. Dietrich had won five Olympic wrestling medals, more than anyone else, but at 38 years old he was nearing the end of his 17-year athletic career.

The video of their match can be used to illustrate a lot of lessons: the race is not to the swift, nor the battle to the strong; it’s not the size of the dog in the fight, but the size of the fight in the dog; the bigger they come, the harder they fall. Unexpected results and daring tactics have a big place in risk management.

However, focus on another aspect of this story, unfortunately missing from the video. Before the match, Dietrich went over to Taylor and gave him a hug, something that’s not usual in wrestling. Why? Dietrich had to be sure that he could get his arms around Taylor to execute the belly-to-belly overhead move he used to win the match.

Posted by: Gordon Mackie at 03:13 am