2.2 SCHEDULE RISK ANALYSIS PROCESS

2.2.1 SCHEDULE RISK ANALYSIS PROCESS OVERVIEW

The following diagram represents a process sequence or workflow for conducting effective schedule risk analysis using Oracle’s Primavera Risk Analysis.

Schedule Risk Analysis Process Diagram
The sections that follow discuss in detail each of the steps involved in this process.

2.2.2 SCHEDULE PREPARATION

Preparation of the schedule for use in schedule risk analysis is of the utmost importance as it forms the foundation of all things to come. As with any process, if the foundation isn’t right, a quality product is unlikely. There are many considerations when preparing a schedule for use in schedule risk analysis, typically based on good planning principles. If the schedule has been well constructed to begin with, very little modification is likely to be required to use the plan as a risk model. Major considerations are listed below:

Technical requirements for schedules for use in risk analysis:



Basic Design

• The schedule model must represent all known scope that may influence key dates.

• The schedule model must be representative of the current execution plan.

• The schedule should be appropriately structured to meet the client’s reporting requirements for both schedule and cost outcomes from the analysis. Such requirements should be discussed early in the process to avert the requirement for re-design.

• The schedule should have clearly identified milestones for any RA reporting targets.

• The schedule model should be representative of the size and complexity of the work and especially not be overly-summarised.

• The schedule model must have key stakeholder “buy-in”. The stakeholder requesting the Schedule Risk Analysis should “own” the schedule to be used in the analysis and accept its validity.



Logic

• The schedule should be as fully linked as possible. Typically, an indication that the schedule is adequately connected is a relationship to activity ratio of around 2 to 1. A ratio below 2 to 1 may be acceptable, but the lower the ratio, the more attention is required to validate the dependencies.

• Each activity should have at least one start predecessor and at least one finish successor, and each of these should preferably be driving. Where this logic does not apply, it is possible for the lengthening of an activity to shorten the project duration (driving FF predecessor, driving SS successor)! A simple check of these conditions is sketched after this list.

• The critical and near critical paths through the plan should be logical and make sense to the stakeholders.

• The schedule should make minimal use of FS +lag relationships as these typically represent undeclared tasks that should be subject to uncertainty.

• The validity of any SS or FF long lag relationships should be assessed against the level of detail in the schedule. Detailed schedules should use such relationships sparingly (prefer FS –lag relationships instead), whereas such relationships are a logical necessity in more summarised schedules. Planning “common sense” should be used.
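The basic logic checks above can be automated against an exported activity and relationship list. The following Python sketch illustrates the idea using a made-up, in-memory representation of activities and links; it is not PRA’s API, and the names used are hypothetical.

# Minimal sketch (not PRA's API): basic logic quality checks on a simple
# in-memory schedule representation. Activity and link names are hypothetical.

activities = ["A1000", "A1010", "A1020", "A1030"]

# Each link: (predecessor, successor, type) where type is FS, SS, FF or SF.
links = [
    ("A1000", "A1010", "FS"),
    ("A1000", "A1020", "SS"),
    ("A1010", "A1030", "FS"),
    ("A1020", "A1030", "FF"),
]

# Relationship-to-activity ratio (a rough indicator of how fully linked the schedule is).
ratio = len(links) / len(activities)
print(f"Relationship to activity ratio: {ratio:.1f} to 1")  # aim for around 2 to 1

# Flag activities lacking a start predecessor (incoming FS or SS link)
# or a finish successor (outgoing FS or FF link).
for act in activities:
    has_start_pred = any(s == act and t in ("FS", "SS") for p, s, t in links)
    has_finish_succ = any(p == act and t in ("FS", "FF") for p, s, t in links)
    if not has_start_pred:
        print(f"{act}: no start predecessor (open start)")
    if not has_finish_succ:
        print(f"{act}: no finish successor (open finish)")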


Constraints

• The schedule should use logic in place of constraints where possible. For example, it is preferable to include predecessor logic than an early constraint which may prevent “schedule optimism” from being revealed. Start or Finish No Earlier Than constraints prevent tasks starting any earlier than their constrained dates.

• The schedule must not use “hard” constraints that could produce negative float in a plan or mandatory constraints that prevent successors from moving.

• Avoid Expected Finish Constraints and minimise the use of As Late As Possible (Zero Free Float) Constraints.

• An “Always Critical” constraint should be placed on an intermediate key milestone prior to its analysis so that criticality ratings of its logical predecessors are accentuated in cruciality tornado diagrams and not due to other overlapping pathways. Such constraints should subsequently be removed to analyse other key intermediate milestones and to analyse the true critical path(s) through the entire plan.


Schedule Size
There is no fixed guidance for this, but guidance based on experience suggests the following:

• Preferred limit is up to about 2,000 normal (non-zero duration) activities that are included in critical path calculations. PRA is able to analyse this size of schedule acceptably quickly (hardware speed advances tend to increase this limit).

• Although there is no fixed limit on the size of schedule to be analysed, increasingly large schedules become correspondingly slower to analyse. Furthermore, larger schedules require more complex correlation models to counter the central limit theorem (which asserts that larger models with smaller average durations will produce narrower distributions of results around the mean).

• Use of small summarised schedules is to be avoided as they are likely to produce unrealistically optimistic Monte Carlo analyses, due to the elimination of logic nodes (e.g., intermediate milestones) that bring together multiple strands of schedule logic. The Merge Bias Effect causes the probability of such logic nodes finishing by their planned date to be the product of the probabilities of the individual logic strands being completed by the planned date. This acts as a barrier to earlier completion of a schedule. Summarising schedules tends to reduce the number of such logic strands and nodes and therefore the real barriers to earlier completion. A worked example follows this list.

• So the schedule risk model should be as large as is required to represent the project scope and complexity adequately, within the practical limits of analysis.
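To illustrate the Merge Bias Effect described above: if three independent logic strands each have, say, an 80% chance of finishing by the planned date and all merge at one milestone, the milestone’s chance of finishing by that date is the product of the three probabilities. A short Python calculation with assumed probabilities makes the point.

# Merge Bias Effect illustration: three independent logic strands, each 80%
# likely to finish by the planned date, merging into one milestone.
strand_probabilities = [0.80, 0.80, 0.80]  # assumed values for illustration

milestone_probability = 1.0
for p in strand_probabilities:
    milestone_probability *= p

print(f"Probability the merge milestone finishes on time: {milestone_probability:.1%}")
# -> 51.2%: summarising away merge points removes a real barrier to early completion.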


Resources
Plan resources have the potential to slow Schedule Risk Analysis run times significantly and should be removed if not required. There are also differences in the way that PRA and other planning applications (such as Primavera P6) calculate resource-driven task durations, which can cause unexpected differences between the two versions of the schedule.
Use of larger numbers of resources may greatly increase analysis times as well as narrow the resultant distributions. This is a known problem in Integrated Cost & Schedule Risk Analysis (IRA - see later Knowledge Base discussion of this when added) and may also affect Schedule Risk Analysis.

Planning Units
Unless dealing with a high-level plan with long activity durations and long project duration, it’s almost always preferable to work with a planning unit of “Hours” over a planning unit of “Days”. When planning units are set to days, small percentage variations on shorter tasks are not expressed, as the task duration is always rounded to the nearest integer. A 4.5% variation of a 10 day task would still be expressed as 10 days, whereas the same task in a schedule planned in hours would be expressed as 10.45 days equivalent in hours.
Unlike Primavera P6, PRA is not capable of working in minutes, and instead has a minimum planning unit of ¼ hour blocks. This may result in some minor discrepancies in activity durations in plans exchanged between the two applications. It should be noted however that increasingly smaller planning unit durations result in increased scheduling and analysis times in PRA.
Planning units should always be set when first importing the schedule into PRA and careful schedule checks made to ensure discrepancies are minimised at that time.
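The rounding effect described above can be illustrated with a few lines of Python, assuming an 8-hour working day for the conversion.

# Illustration of the whole-day rounding effect described above. An 8-hour
# working day is assumed here purely for the conversion.
HOURS_PER_DAY = 8
base_days = 10
variation = 1.045  # a 4.5% increase

in_days = round(base_days * variation)            # whole-day rounding hides the change
in_hours = base_days * HOURS_PER_DAY * variation  # 83.6 hours

print(f"Days-based schedule:  {in_days} days")                                          # 10 days
print(f"Hours-based schedule: {in_hours} hours = {in_hours / HOURS_PER_DAY:.2f} days")  # 10.45 days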

Calendars
In multi-phase projects involving, say, design, procurement, construction and commissioning, calendar changes are generally unavoidable when modelling accuracy is important. But changes in working periods per day and/or per week may result in “gaps” in criticality when trying to determine the driving critical paths through a model. Mixed working periods occur when the calendars attached to a predecessor and a successor task are not the same. For example, if the calendar of a predecessor task were set to 24 hours a day, and the calendar of a successor task only allowed for 9am to 5pm, any time the predecessor task finished between the hours of 6pm and 8am, float would be created between the two tasks, resulting in loss of criticality. In general, when a predecessor has more available working periods in a week than its successor, the total float of the predecessor will be greater than that of the successor.
In the special case of the successor having zero float, the critical path must be defined by “longest path” rather than zero float because the predecessor task will have positive float.


Constructing versus adapting a schedule for risk analysis?



• The question of whether to adapt the existing client’s schedule or construct a new schedule specifically designed for use in RA is a crucial one, and one that must always be answered at the beginning of any analysis.


• It is not uncommon to discover, after the process has commenced, that the schedule is of poor quality, and some effort should always be allocated to examining and circumventing any issues that may arise from the schedule’s construction.


• The size of the schedule is a key consideration when preparing a schedule for RA. With too few activities, key dependencies and visibility of the true drivers of the project can be missed and the Merge Bias Effect unrealistically excluded. With too many activities, the model can become unwieldy and unusable.


• RIMPL prefer to use the project schedule, with its built-in logic representing real dependencies between elements of the project, rather than create a summary schedule with the concern that key logic may be omitted.


• Ultimately, however, it is unavoidable that some client schedules are of such poor technical construction (refer to Technical Requirements for Schedules for use in risk analysis), or are of such a size (or both!) that they cannot be used or adapted. In such circumstances, a summary schedule must be constructed. Other circumstances that could require a summary schedule include combining different schedules, or a program of projects.




2.2.3 GATHERING SCHEDULE UNCERTAINTY INFORMATION

For information on available methods of gathering schedule uncertainty data, please refer to 1.2.3 Risk Identification.

2.2.4 ASSIGNING SCHEDULE UNCERTAINTY RANGES

Now that you have a schedule that is technically robust and suitable for risk analysis, it’s time to input the range information you’ve gathered from the project team and its stakeholders.
Duration uncertainty typically refers to a 3-point estimate of how long a task may take:


Optimistic Estimate: If everything went as well as could be expected, how long might the activity take?

Most Likely Estimate: Under normal conditions, how long might the activity take?

Pessimistic Estimate: If everything went poorly, how long might the activity take?


It is important to disregard any consideration of opportunities or threats when considering task durations. For example, it is not appropriate to say that an optimistic duration for a task may be 0, as that is equivalent to saying that it might not have to be done at all. If this is the case, the task should be converted into a risk task and assigned a probability of existence. Similarly, it is not appropriate to provide an overly pessimistic estimate on the basis that something might change that fundamentally alters the nature of the task to be performed. This again is a risk event. All task duration uncertainties must be assessed on the basis of the known scope and the assumed conditions that underpinned the development of the schedule to begin with.

Distribution Types


When it comes to assigning duration uncertainty to a schedule, the distribution type selected can have a substantial impact on how the uncertainty is expressed in the model. The distribution type ultimately defines the way in which the Monte Carlo engine samples data around the limits specified. There are a few commonly used types of distributions in Primavera Risk Analysis and other MCM tools, including:

Triangle


The triangle distribution is perhaps the most commonly used type as it is the default shape and is simple to comprehend. Like most distributions, it can be positively or negatively skewed, with the probability of sampling values at the extremities decreasing linearly from the central point.

Triangle Distribution


Trigen


The trigen distribution is like the triangular, but with the corners cut off: the optimistic and pessimistic limits are set at specified probability boundaries rather than at zero probability. When a trigen limit is set at x%, there is an x% chance that the limit will be exceeded. When trigen distributions are used, PRA automatically calculates the absolute (zero probability) boundaries for the distribution. The value of x is controlled by the user and can be different for the optimistic and pessimistic limits.

Trigen Distribution


Uniform


The uniform distribution is a simple constant-probability distribution for which every value in the range between the minimum and maximum limit is equally likely to occur. Unlike the other distributions available in PRA, uniform distributions require only two values (lower and upper), as all values in between are equally likely.

Uniform Distribution


BetaPert


The BetaPert distribution is best described as a Normal distribution that allows for positive or negative “skewness” (bias). Unlike the Normal distribution, which is symmetrical either side of the most likely value, the BetaPert distribution allows the most likely value to be closer to either the optimistic or the pessimistic value while preserving a smooth, “Normal” distribution shape. The BetaPert distribution has the least proportion of probability distributed to the skewed (further) extremity (a “thin tail”). It may be best suited to activities for which reliable performance data is available, so that values can more confidently be clustered around the most likely value.

BetaPert Distribution

The choice of which distribution type to use is dependent on a multitude of factors including the type of data being collected, and if gathered by polling individuals, the way in which it was requested. For example, if people were asked to identify a “minimum”, “most likely” and “maximum” value for a distribution, a triangle distribution might be best suited. This is because the terms minimum and maximum may be most commonly interpreted as the extreme values. However, if “optimistic”, “most likely” and “pessimistic” values were requested, a Trigen distribution might be better suited, as the terms optimistic and pessimistic don’t imply that extreme values are sought. If utilising reliable data gathered from a large population sample, a BetaPert distribution may be best as described above. However, where very little is known about an uncertainty range and no central tendency can be assumed, it may be appropriate to simply specify minimum and maximum limits and use a Uniform distribution, with every value between equally possible.
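As an illustration of how these shapes differ in practice, the following Python sketch samples the same 8 / 10 / 15 day three-point estimate from triangle, uniform and BetaPert distributions using NumPy. The BetaPert parameterisation shown is the commonly used PERT formulation and is an assumption, not necessarily PRA’s internal implementation; the trigen is omitted because PRA back-calculates its absolute limits internally.

# Sketch of sampling the distribution types discussed above, using NumPy.
# The BetaPert parameterisation below is the common PERT formulation
# (shape factor 4); it is an assumption, not necessarily PRA's implementation.
import numpy as np

rng = np.random.default_rng()
n = 10_000
optimistic, most_likely, pessimistic = 8.0, 10.0, 15.0  # days

# Triangle: probability falls off linearly from the most likely value to the limits.
triangle = rng.triangular(optimistic, most_likely, pessimistic, n)

# Uniform: only two limits; every value between them equally likely.
uniform = rng.uniform(optimistic, pessimistic, n)

# BetaPert: a skewable, smoothly peaked distribution built on the Beta distribution.
def beta_pert(low, mode, high, size, lam=4.0):
    alpha = 1 + lam * (mode - low) / (high - low)
    beta = 1 + lam * (high - mode) / (high - low)
    return low + rng.beta(alpha, beta, size) * (high - low)

betapert = beta_pert(optimistic, most_likely, pessimistic, n)

for name, sample in [("Triangle", triangle), ("Uniform", uniform), ("BetaPert", betapert)]:
    print(f"{name:>8}: mean {sample.mean():5.2f}  P90 {np.percentile(sample, 90):5.2f}")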



2.2.5 DURATION RISK FACTORS

Risk Factors are underlying causes of risk and uncertainty in projects that, if identified, quantified, and applied to project activities, can significantly increase the ability to understand what drives project schedule risk.


Define Causes of Uncertainty


RIMPL’s Integrated Risk Factors tool (IRF) works within Primavera Risk Analysis to allow for allocation of risk factors at the resource assignment level as well as introducing risk factor correlation.

Duration risk factors apply a methodology that defines and assigns uncertainty to plan tasks through identification of common contributors (or “factors”) that affect probabilistic duration outcomes. Unlike normal three-point estimates of uncertainty, risk factors are better described as causal contributors to project uncertainty that affect groups of activities (or resources) within a model equally during a simulation. For example, whereas a collection of construction tasks may all have separate duration uncertainties as they are independent tasks, there are also likely to be common factors such as site productivity rates that influence their durations to a similar extent.


Define Correlation


The risk factors methodology has another significant benefit over traditional 3-point estimates in that it can take the guesswork out of defining an effective correlation model. One of the key features of the Monte Carlo Method is that it inherently assumes all elements in a model are independent. To override this invalid assumption for related tasks in a project schedule, a risk modeller is required to make educated guesses regarding the degree of relatedness between groups of tasks and then enter these values as correlation percentages against each of the applicable model elements in the group. In contrast, the risk factors methodology removes this guesswork from the analysis as it effectively defines the degree of relatedness between elements by the action of multiple risk factors on overlapping groups of activities.
The validity of this is dependent on all significant risk factors being identified and applied.


How do Risk Factors Work?



The following steps briefly outline how a risk factors methodology can be applied to a schedule risk model (a minimal sketch follows the list):

• Stakeholders identify common sources of uncertainty within a model that have the potential to influence task duration outcomes. These are the risk factors. Their impacts may range from <100% of the deterministic value to >100%.


• Characteristics of each risk factor are defined including:


◊ Description,


◊ Impact range distribution (3 point probability distribution and shape),


◊ Probability of occurrence (may be 100% or less), and


◊ Correlation between related risk factors.


• Risk factors are then mapped to tasks and/or resource assignments.


• When a risk analysis is run, the Risk Factors application intercepts each iteration event and modifies each task duration and/or resource assignment value according to the net effect of the individual or multiple risk factors that have been applied.


• The modified task / resource assignment information then provides the inputs for the scheduling engine before the results are calculated and committed as the final iteration data.
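The following Python sketch illustrates the mechanism described in the steps above, using made-up risk factors and task mappings; it is not RIMPL’s IRF tool or PRA’s engine. Each factor that occurs in an iteration samples one multiplier, and the product of the multipliers mapped to a task scales that task’s deterministic duration, so all mapped tasks move together.

# Minimal sketch of the risk factor mechanism described above (not the IRF tool
# or PRA's engine). Factor and task names are hypothetical.
import numpy as np

rng = np.random.default_rng()

# Risk factors: probability of occurrence and (optimistic, most likely, pessimistic) multiplier.
risk_factors = {
    "Site productivity": (1.00, (0.90, 1.00, 1.30)),
    "Late vendor data":  (0.50, (1.00, 1.05, 1.20)),
}

# Mapping of factors to tasks, plus deterministic durations (days).
tasks = {
    "Civil works": {"duration": 40, "factors": ["Site productivity"]},
    "Mechanical":  {"duration": 60, "factors": ["Site productivity", "Late vendor data"]},
}

def iteration_durations():
    # Sample each factor once per iteration so all mapped tasks move together.
    sampled = {}
    for name, (prob, (lo, ml, hi)) in risk_factors.items():
        sampled[name] = rng.triangular(lo, ml, hi) if rng.random() < prob else 1.0
    return {t: round(float(d["duration"] * np.prod([sampled[f] for f in d["factors"]])), 1)
            for t, d in tasks.items()}

print(iteration_durations())  # one iteration's factored durations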


2.2.6 MAPPING RISKS TO THE SCHEDULE

Mapping risks to schedule tasks is a challenging aspect of the schedule risk analysis process, as it requires a detailed understanding of the nature of the risks and schedule tasks involved in the process. The integrity of any schedule risk model is dependent on the validity of the risk/task mappings, such that their impact is neither overstated nor understated.


Things to Consider when Mapping Schedule Risks


When a schedule risk is mapped into a schedule model, it is then referred to as a schedule risk-task. This is a task that has a probability of occurrence less than 100%, a deterministic duration of 0 days, but a probabilistic duration distribution greater than 0 days. By definition, a risk is not a risk if it has no impact on the objectives of the project. Therefore, probabilistic impact ranges including minimums of 0 days are not recommended.

The placement of risk-tasks within the probabilistic model is a significant determinant of their impact on the probabilistic schedule outcomes. Factors which influence the effect of a risk on the schedule model include:

The probability of the risk’s occurrence in any given iteration. If a risk is specified as 50% probable and the project model were to be simulated 1000 times, the risk will exist (occur with an impact from its distribution) in approximately 500 of those iterations. The more iterations in which a schedule risk-task is triggered, the more effect it will have on successor probabilistic completion dates. (A small sampling sketch follows these factors.)


The risk’s probabilistic duration distribution profile. Similarly to normal tasks with duration uncertainty, risk-tasks are usually assigned a duration distribution profile. This is the duration impact that the risk-task will have on its parent task’s successors should it occur. Risk-tasks with larger duration distributions produce larger changes in probabilistic schedule outcomes of successor tasks than those with smaller duration distributions.


The criticality of the task to which the risk is mapped. Risks must be applied in context. A risk with a high probability and high impact may have less probabilistic impact on project objectives than a low probability low impact risk if the task to which the former applies is rarely on the critical path while the latter affects a task frequently on the critical path.


The logic applied to predecessors and successors of the risk-task. The schedule logic into which the risk is mapped (to the parent task) is also important as it determines how the project-level risk behaves when applied at the activity level. If we assume that a threat risk-task is mapped to the end of its parent task, the successor logic of the parent task should be replicated on the risk-task. However, if the parent task has no driving (or near-driving) finish-successor logic, or only successor logic stemming from its start, the risk-task cannot actually impact any successor task. As stated earlier (2.2.2 Schedule Preparation), each task in a Schedule Risk Analysis model should have at least one start predecessor and at least one finish successor.
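The sampling of a risk-task’s existence and impact can be illustrated with a short Python sketch using assumed values (a 30% probable risk with a 10 / 20 / 40 day triangular impact).

# Sketch of how a risk-task's probability of occurrence plays out over many
# iterations. Probability and impact values are assumed for illustration.
import numpy as np

rng = np.random.default_rng()
iterations = 1_000
probability = 0.30
impacts = np.where(rng.random(iterations) < probability,    # does the risk occur?
                   rng.triangular(10, 20, 40, iterations),   # impact when it does
                   0.0)                                      # zero duration otherwise

print(f"Risk occurred in {np.count_nonzero(impacts)} of {iterations} iterations "
      f"(expected about {probability * iterations:.0f})")
print(f"Mean impact contributed per iteration: {impacts.mean():.1f} days")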



Series or Parallel Schedule Risk Assignments


As discussed earlier, a detailed understanding of the schedule and the nature of the risks in a model is important when performing risk-mappings. A risk applied incorrectly within a schedule can have its influence understated or exaggerated depending on its context. How the risk is defined, to a large extent, determines the way it should be mapped. Then the schedule logic determines the overall probabilistic impact on the project.

Consider the following Foundation Excavation Schedule and associated risks:
Risk Assignment Serial or Parallel

Case A
“There is a risk that rock may be encountered during excavation for foundations on the project site, leading to project delays.” Preliminary geotechnical investigations of the site suggest a 30% probability. The presence of rock increases the excavation time by an impact distribution range of 15% / 25% / 50% of planned duration.

Mapping this project level Risk A into the above Foundation Excavation Schedule requires that it be mapped to each of the three tasks, each with a 30% probability, but the impacts occur independently (that is, there is no existence correlation between the three risk-tasks). Each excavation task has a 30% probability of occurrence, with the impact distribution proportional to the duration of the task.

Case B
“There is a risk of encountering rock during excavation for foundations in each of the three areas comprising the project site, leading to project delays.” Geotechnical investigations of each area of the site have produced the following probabilities of rock in each area:

Area S1: 50%; Area S2: 25%; Area S3: 5%



As for Case A, the presence of rock increases the excavation time by an impact distribution range of 15% / 25% / 50% of planned duration. In this case there are three separate risks, applicable independently to the three different areas with their different probabilities. However, each has the same impact delay range distribution of 15% / 25% / 50% of the parent task duration.

Case C
The excavation logic is changed to accelerate the work by doing all three area excavations in parallel:
Risk Assignment Serial or Parallel


Where two or more activities affected by a risk are arranged in parallel, the impact may be applied 100% to each pathway because the delay may occur whichever path is followed in the project.

By paralleling the parent tasks, the overall effect of the risks in Cases A and B applied to Case C will be lessened, with the largest combined task and risk uncertainty driving the model uncertainty.

If, instead of expressing the schedule impact as a percentage of each duration, the project-level risk were expressed in terms of an overall impact range, such as “…causing a delay of 5d / 8d / 15d”, it would be necessary to apportion the impact range between the three excavation tasks, as sketched below.
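One simple (assumed) way to apportion such an overall range is in proportion to the planned durations of the affected tasks, as in the following sketch with hypothetical excavation durations.

# Sketch of apportioning an overall 5 / 8 / 15 day impact range across three
# excavation tasks in proportion to their planned durations (durations are
# hypothetical).
durations = {"Excavate Area S1": 20, "Excavate Area S2": 15, "Excavate Area S3": 10}  # days
overall_impact = {"optimistic": 5, "most_likely": 8, "pessimistic": 15}               # days

total = sum(durations.values())
for task, d in durations.items():
    share = d / total
    apportioned = {k: round(v * share, 1) for k, v in overall_impact.items()}
    print(f"{task}: {apportioned}")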

These examples illustrate the importance of the wording of the risk and the logic in determining how a project risk is mapped at the activity level.

2.2.7 WEATHER MODELLING IN SCHEDULE RISK ANALYSIS


Almost every project that involves outdoor work will be subject to some kind of weather conditions that may dictate working and non-working periods for all or part of the workforce. In normal deterministic plans, this is usually accounted for by making an allowance for downtime in the relevant plan calendars. However, in reality, weather is often more complex and uncertain than this, and requires special probabilistic techniques to be able to model its potential effects appropriately.

Weather modelling can be broadly divided into three main categories:

Weather uncertainty refers to the variations in normal weather conditions. This is the variability / fluctuation of normal weather patterns within specified time periods. For example, in the month of May in a specific region, there may be an average 10 hours of downtime due to inclement weather. However, historical data may show that this could be as little as 5 hours, or as much as 30 hours. An impact probability distribution would be required to express that uncertainty.


Weather events behave somewhat like risk events and are assigned a probability of existence. Weather events typically refer to distinct events such as floods, fires and cyclones or hurricanes. These are events that may or may not occur, but if they do, may have a similar impact on productive downtime across large portions of a project plan. Similarly to risk events, weather events can be assigned optimistic, most-likely and pessimistic duration ranges, and can be applied selectively to tasks within a schedule risk model.


Weather windows refer to periods within a schedule model in which certain operations can or cannot be undertaken. Unlike weather events, weather windows have 100% probability of existence. Their uncertainty stems from when they will start and how long the period will last. Classic examples of weather windows are the opportunity to truck materials over non-permanent ice-roads, the ability to use rivers for barging goods or the period for which roads are impassable during the wet season in tropical areas.



The incorporation of weather modelling in schedule risk analyses adds significant benefits over the options for allowing for weather in normal deterministic schedules because of the wide range of outcomes possible and the probabilistic nature of weather uncertainty. The principles of weather modelling can also be used to model other types of uncertainties causing downtime which have seasonal patterns.
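As a simple illustration of the weather uncertainty category, the following Python sketch samples monthly downtime from a triangular distribution fitted to the May figures quoted above; the triangular shape is an assumption for illustration only.

# Sketch of the "weather uncertainty" category: monthly downtime sampled from a
# distribution fitted to historical data (triangular shape assumed here).
import numpy as np

rng = np.random.default_rng()
iterations = 5_000

may_downtime_hours = rng.triangular(5, 10, 30, iterations)

print(f"May downtime: mean {may_downtime_hours.mean():.1f} h, "
      f"P90 {np.percentile(may_downtime_hours, 90):.1f} h")
# In a full model, the sampled downtime would adjust the working calendar (or
# the affected task durations) for that month in each iteration.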

2.2.8 PROBABILISTIC LINKS & BRANCHING

One of the key advantages of schedule risk analysis is that it is capable of modelling uncertainty not only in terms of duration and risk events, but also logic. Logic uncertainty is an important aspect of schedule risk, but something that is often overlooked. It refers to the ability to set the probability that one or more pathways through a project plan will be followed. Probabilistic logic can be broadly divided into two main categories: probabilistic links and probabilistic branching. These are discussed in the sections that follow.

Probabilistic Links


Deterministic plans force assumptions to be made regarding the workflow through a particular process, even if two tasks may or may not be linked. An example of this could be the link between an environmental approvals process and a construction task. In the early stages of the project, we might suspect that the approval is required before starting construction, but we can’t be entirely sure until we’ve learned more. This is where a probabilistic link could be useful. If the percentage probability that regulatory approval to start construction will be required can be estimated, then during modelling the link from the approval task to the start of construction will only exist for that percentage of the simulation iterations.

Probabilistic Branching


While probabilistic links are useful when it is uncertain whether two tasks are related, this does not help when more complex modelling is required of alternate pathways / execution strategies to the same objective. Probabilistic branching is applicable where there are two or more mutually exclusive ways to accomplish some part of a project and the choice has not yet been made on which to use.

For example, probabilistic branching might be used to model the difference between modularisation and stick building in a site construction process. Or different contracting strategies may be expressed in probabilistic branching where it is unclear which strategy may be used.

Each branch is assigned a probability of existence, and the probabilities of all branches must sum to 100%. When a schedule risk analysis is run, each pathway (or branch) is randomly sampled according to its probability of existence, including all successors in the branch. However, only one pathway is selected in each iteration, and the other branches do not exist in that iteration. A minimal sketch of both probabilistic links and branching follows.
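Both forms of probabilistic logic can be illustrated with a few lines of Python using assumed probabilities; this is a conceptual sketch of a single iteration, not PRA functionality.

# Conceptual sketch of probabilistic links and branching for one iteration,
# with assumed probabilities.
import numpy as np

rng = np.random.default_rng()

# Probabilistic link: the approval-to-construction link exists in 70% of iterations.
link_probability = 0.70
link_exists = rng.random() < link_probability

# Probabilistic branching: mutually exclusive execution strategies whose
# probabilities sum to 100%; exactly one branch exists per iteration.
branches = ["Modular build", "Stick build"]
branch_probabilities = [0.6, 0.4]
chosen_branch = rng.choice(branches, p=branch_probabilities)

print(f"Link exists this iteration: {link_exists}; branch followed: {chosen_branch}")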


2.2.9 DURATION CORRELATION

Correlation is an important component of any schedule risk analysis model. It is the means of advising the Monte Carlo software of the degree to which tasks are to be treated as related. As noted earlier, an important inherent assumption of Monte Carlo analytical software is that every element in the model is completely independent of every other element. Thus, when a high value is selected from one range, there is nothing to stop a low value from being selected from another. This is not a problem for unrelated activities, but where works are related through a common denominator such as the same contractor, or the same physical locality, it is natural that poor performance at one task is likely to be at least partially reflected in another.

For this reason, it is necessary to incorporate correlation into any probabilistic model. This is increasingly important as the number of tasks in the model increases to many hundreds or even thousands of activities. In such situations, a statistical phenomenon known as the “Central Limit Theorem” becomes particularly evident. In simple terms, this is the tendency for the results of increasingly large analyses to approximate a normal distribution and to be more and more tightly grouped around the mean value (lower variance/standard deviation, higher “kurtosis” or “peakedness” of the distribution). This is because of the apparently “random” selection of high values from some distributions and the counter-selection of low values from others, combining together across many iterations to result in very little overall variance either side of the mean.

By correlating related activities, we ensure that commonly themed or related packets of work are constrained to greater or lesser extents (depending on the percentage correlation) to be sampled from the same end of the spectrum of possible outcomes. This results in a greater net variance either side of the mean. A correlation percentage of 0% means no association between activities. Conversely, a correlation percentage of +100% forces complete proportionality in the sampling of distributions between activities in a linear fashion. In rare situations negative correlation may apply, where a higher duration being sampled from one activity trends towards a lower duration being sampled in another.
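The narrowing effect of the Central Limit Theorem, and the widening effect of correlating related tasks, can be demonstrated with the following Python sketch. Correlation is induced here through a simple shared productivity factor, which is just one illustrative approach and not PRA’s correlation method; all values are assumed.

# Demonstration of Central Limit Theorem narrowing and of how correlating
# related tasks widens the spread. The shared-factor approach is illustrative
# only, not PRA's correlation method.
import numpy as np

rng = np.random.default_rng()
iterations, n_tasks = 5_000, 200

# 200 serial tasks, each triangular(8, 10, 15) days, sampled independently.
independent = rng.triangular(8, 10, 15, (iterations, n_tasks)).sum(axis=1)

# Same tasks, but a common factor (e.g. one contractor's productivity) scales
# them all together in each iteration, correlating their sampled durations.
common_factor = rng.triangular(0.9, 1.0, 1.2, (iterations, 1))
correlated = (rng.triangular(8, 10, 15, (iterations, n_tasks)) * common_factor).sum(axis=1)

for name, totals in [("Independent", independent), ("Correlated", correlated)]:
    print(f"{name:>11}: mean {totals.mean():7.0f} d, P10-P90 spread "
          f"{np.percentile(totals, 90) - np.percentile(totals, 10):6.0f} d")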

The challenges with correlation are:

• To determine the groupings in the model to which correlation should be applied, and

• To identify the levels of correlation applicable in the absence of data.


Correlation and the Merge Bias Effect (MBE) are factors to be balanced in development of realistic Schedule Risk Analysis models. The MBE causes larger and more complex schedule models to be more realistic and preferable to smaller more summarised models. But the larger and more complex the Schedule Risk Analysis model becomes, the more important realistic correlation becomes and the more challenging it is to achieve it.


2.2.10 SCHEDULE MONTE CARLO SIMULATION

Monte Carlo Method simulation is a mathematical technique that uses repeated random sampling within specified distributions to calculate the probability of defined outcomes. The principle of the method is that by simulating a process many times using ranged parameters before doing something in actuality, a mathematically based prediction of how the real process may eventuate can be calculated. The method was invented in the Second World War to simulate nuclear events during the Manhattan Project to develop the atomic bomb and has been adapted to an increasingly widespread range of applications since.

As applied to schedule risk analysis, Monte Carlo simulation involves the random sampling of uncertainties within a project schedule. As identified earlier, there are four main types of uncertainty in schedule risk:

• Duration Uncertainty

• Risk Events

• Logic Uncertainty

• Calendar Uncertainty


For each of these elements, and against each item in the model, uncertainties are randomly sampled for duration and/or probability. Normal forward and backward pass critical path method calculations are then made, and the resultant early and late start and finish dates for each of the tasks/milestones in the schedule are recorded, as are the task durations. After this process has been repeated many hundreds or thousands of times, the results are collected, ready for interpretation. These are discussed below in 2.2.11 Interpretation of Schedule Analysis Results.
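A minimal Python sketch of this simulation loop is shown below, modelling only duration uncertainty on a small hypothetical network: sample durations, run a forward-pass critical path calculation, record the result. Risk events, probabilistic logic, calendars and the backward pass are omitted for brevity.

# Minimal sketch of the Monte Carlo simulation loop described above, with a
# hypothetical four-task network and duration uncertainty only.
import numpy as np

rng = np.random.default_rng()

# Task: (optimistic, most likely, pessimistic) duration in days, plus FS predecessors.
tasks = {
    "Design":     {"range": (20, 25, 40), "preds": []},
    "Procure":    {"range": (30, 35, 60), "preds": ["Design"]},
    "Construct":  {"range": (50, 60, 90), "preds": ["Design"]},
    "Commission": {"range": (10, 15, 25), "preds": ["Procure", "Construct"]},
}
order = ["Design", "Procure", "Construct", "Commission"]  # topological order

def forward_pass(durations):
    finish = {}
    for name in order:
        start = max((finish[p] for p in tasks[name]["preds"]), default=0.0)
        finish[name] = start + durations[name]
    return finish["Commission"]  # project finish, in days from the start

iterations = 2_000
results = np.empty(iterations)
for i in range(iterations):
    sampled = {name: rng.triangular(*t["range"]) for name, t in tasks.items()}
    results[i] = forward_pass(sampled)

deterministic = forward_pass({name: t["range"][1] for name, t in tasks.items()})
print(f"Deterministic finish: {deterministic:.0f} days")
print(f"P50: {np.percentile(results, 50):.0f} days, P90: {np.percentile(results, 90):.0f} days")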

How many simulations / iterations are required?


It is a general statistical principle that the more times data is sampled, the more meaningful the results are expected to be. For example, if we were to ask a random person on the street if they liked ice cream, it wouldn’t be appropriate to infer from their answer that the rest of the population felt the same way. However, the more people we asked (especially if they were from a wide range of demographics), the more statistically significant our observations should become.

The same principle applies to schedule risk analysis. Simulating a project only a handful of times will likely produce wildly varying results with little statistical significance. However, as the number of simulations performed is increased, the precision of analysis results will also increase.

This is especially true of schedules that contain many low probability high impact risks. It is statistically unlikely that more than one of these risks will occur at once unless running many simulations, but where this does happen, it is likely to have a substantial effect on the outcome of that iteration and influence the overall results more significantly.

Sensitivity analyses are sometimes made to apply the full impact of more than one high impact low probability risk simultaneously to a project model to assess the resilience of the project.

It is often felt that the major threats to large scale complex projects come from such low probability high impact risks.

There is no clear-cut answer as to how many simulations may be required to obtain statistically significant results from an analysis. But the generalisation can be made that the central properties of a distribution (mean, standard deviation, P50) do not change much between a few hundred iterations and several thousand. What do change are the properties at the “tails” of the distribution and, particularly for large projects, the conservative end of the analysis (P80/P90) is of great interest for sizing schedule (and cost) contingency. The inherent uncertainty in Monte Carlo analyses is such that if low probability risks are being analysed, higher numbers of iterations are desirable, say around 5,000, to reduce the percentage uncertainty of the modelling (an uncertainty often masked by using a fixed “seed” (see below) to start the iterations so that consistent results are obtained). The inherent uncertainty of MCM modelling decreases as the number of iterations increases and may be less than 0.5% at around 5,000 iterations.

The problem with this is the practical limit of time of analysis: large and complex projects have large Schedule Risk Analysis models which take many minutes to analyse. So compromises may have to be made.

Some Monte Carlo schedule simulation tools come with features that allow you to continue simulating until the mean results move by less than a specified threshold. The point at which additional iterations no longer make a significant difference to the results is often referred to as “convergence”. By analysing until convergence is maintained over a number of iterations, we can be relatively confident that continued analysis will add little to the validity of the results we will observe, provided we are not so concerned about the “tails”.
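A simple convergence test of the kind described above might look like the following Python sketch, with an illustrative threshold and block size and a stand-in for the real schedule simulation.

# Sketch of a convergence test: keep simulating until the running mean of the
# project finish moves by less than a chosen threshold over a block of
# iterations. Threshold and block size are illustrative only.
import numpy as np

rng = np.random.default_rng()

def one_iteration():
    # Stand-in for a full schedule simulation: returns one project duration.
    return rng.triangular(100, 120, 200)

results, threshold_days, block = [], 0.25, 100
previous_mean = None
while True:
    results.extend(one_iteration() for _ in range(block))
    mean = float(np.mean(results))
    if previous_mean is not None and abs(mean - previous_mean) < threshold_days:
        break
    previous_mean = mean

print(f"Converged (by this simple test) after {len(results)} iterations; mean {mean:.1f} days")
# Note: the tails (P80/P90) generally need more iterations than the mean to stabilise.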

A useful compromise may be to use fewer iterations until the model is finalised and then do the final results using the recommended full number of iterations.

What is an analysis “seed” & why do we use it?


Running a different number of simulations will produce different results. However, less obvious is why running two different analyses on the same plan, using the same number of iterations in each, might produce two different answers. This occurs because Monte Carlo in its purest form is always completely random in its sampling. Therefore, if two identical analyses are run, it’s unlikely that exactly the same result will be produced. Although perfectly statistically valid, there are two problems with this approach:

• First, it can be confusing to users attempting to interpret the results; and

• Second, the computational processes required to produce and interpret useful results from such truly random sampling are quite resource intensive, and require significant processing time.


However, Monte Carlo schedule risk analysis tools may allow the user to control the use of something referred to as a “seed”. The seed is the starting point for the pseudo-random sampling sequence; it increases analysis performance and ensures that the same plan modelled twice in the same way will always produce the same results. All subsequent values are generated randomly, so although the simulation is now following the same sampling pattern, it is still, in effect, a random process. But the apparent randomness of the results is significantly reduced.
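The reproducibility that a fixed seed provides can be demonstrated in a few lines of Python; NumPy’s generator is used here purely as an analogue for PRA’s sampling engine.

# Tiny illustration of seeding: the same seed reproduces the same "random"
# sample sequence exactly, while a different seed does not.
import numpy as np

run_1 = np.random.default_rng(seed=42).triangular(8, 10, 15, 5)
run_2 = np.random.default_rng(seed=42).triangular(8, 10, 15, 5)
run_3 = np.random.default_rng(seed=7).triangular(8, 10, 15, 5)

print(np.array_equal(run_1, run_2))  # True  - identical results from the same seed
print(np.array_equal(run_1, run_3))  # False - a different seed gives different samples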

Tests run by RIMPL have shown that the variation in results caused by changing the seed depends on the number of iterations:

• For 1,000 iterations of a schedule model with, say, 1,000 activities, the percentage variation between analyses through seed changing may be 1-1.5%.

• If the number of iterations is increased to 5,000, the percentage variation between analyses through seed changing may drop to 0.5%.


It is important to note that any change in a model effectively changes the way that the seed interacts with the model, and therefore is tantamount to changing the seed itself. It is only by ensuring that a sufficient number of iterations have been performed in each analysis that the apparent “noise” in the analysis results associated with this seed change will be minimised. This is important if the effects of risks are being measured by difference. If the probabilistic effect of a risk is small, its effect may be exceeded by the randomness introduced by the effective seed change effect of removing the small risk. This can explain anomalous results for low probabilistic impact risks from Schedule Risk Analysis modelling.

2.2.11 INTERPRETATION OF SCHEDULE ANALYSIS RESULTS

As stated earlier, schedule risk analysis results are derived from date and duration information collected across many hundreds or thousands of simulations of a risk-loaded schedule. When interpreting this data however, it is important that it can be conveyed in a simple and meaningful form. The commonly accepted means of presenting MCM results uses a histogram overlaid with a cumulative curve to display percentile data. An example plan finish date histogram is shown below:
Example Plan

Histogram Data


In a histogram, data is grouped into collectors called “bins” across a particular dimension (e.g., date, duration). The frequency with which the data set falls into these collectors is represented as height on the vertical axis (shown on the left axis in the example above as “Hits” [iteration results]). By structuring the data in this way, a visual representation of the clustering of results along the measured dimension (finish dates in this case) is displayed.

In the example above, the finish date of the entire plan has been used as the bin parameter along the horizontal axis. The dimension is always the same as the metric to be reported.

In simple terms, the above date histogram shows the results for all iterations (“Hits”) of a Monte Carlo Schedule Risk Analysis for the Finish Date of the Entire Plan. The results are plotted from the earliest date of an iteration (29Jan14) to the latest (15Apr14). The height of each bar represents the number of hits that fell within the date range represented by the width of the bar. The highest bar records that 49 hits occurred on 16Feb14.

Bin sizes are a flexible variable, and in the case of schedule risk analysis, information by the hour, day, week, or month etc. may be reported. It is important that the bins are sized large enough to allow for adequate visual representation of trends, but not so large that they hide important information about the model. As an example, if finish date information is collated by month, then a plan might show a marked drop-off in frequency in December which could cause some confusion. However, collating the same information by week might reveal that the drop-off is actually caused by no hits in the last week of the month where the holiday calendar downtime occurs.

Cumulative Curve Data



The cumulative curve adds up the number of hits in each bar progressively so that it represents the number of iterations up to a particular date. In effect, an intercept from the curve to the horizontal axis represents the percentage of iterations up to that date or the probability of the Entire Plan finishing on or before that date. So the highest bar also corresponds to the percentage of iterations up to 16Feb14, which the vertical axis intercept tells us is the 50% point. Schedule Risk Analysis results usually refer to “P-values”. These are the percentile values. For example, in the output diagram shown above, the P90 finish date value of 13-Mar-2014 indicates 90% confidence that the completion date will be on or before 13-Mar-2014.
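The histogram “hits” and the P-values read from the cumulative curve can both be derived directly from the recorded iteration results, as the following Python sketch shows using hypothetical finish dates expressed as day offsets from the data date.

# Sketch of deriving histogram hits and P-values from recorded iteration
# results. Finish dates are handled as day offsets from the data date, and the
# values are hypothetical stand-ins for real simulation output.
import numpy as np

rng = np.random.default_rng()
finish_offsets = rng.triangular(100, 120, 200, 2_000)  # stand-in iteration results (days)

# Histogram: bin the finish dates and count the hits in each bin.
hits, bin_edges = np.histogram(finish_offsets, bins=20)
tallest = hits.argmax()
print(f"Tallest bin: {hits[tallest]} hits between day {bin_edges[tallest]:.0f} "
      f"and day {bin_edges[tallest + 1]:.0f}")

# Cumulative curve / P-values: percentiles of the same data.
p50, p90 = np.percentile(finish_offsets, [50, 90])
deterministic = 110
confidence = (finish_offsets <= deterministic).mean()

print(f"P50: day {p50:.0f}, P90: day {p90:.0f}")
print(f"Probability of meeting the deterministic finish (day {deterministic}): {confidence:.0%}")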

The earliest and latest dates are usually of less interest than the P10, P50 and P90 (or P20, Pmean and P80). The intercepts used by organisations vary according to their risk policies, “risk tolerance” or “risk appetite”. Also showing on the cumulative curve is the Planned or Deterministic Date for the Entire Plan Finish Date. In this case, it is on or just after the earliest date on the chart and has a probability of less than 1%. This makes it clear that the planned date for the project is highly unlikely to be achieved. In summary, the date histogram reveals a lot more about the feasibility of the project than the deterministic plan can. It identifies the range of possible date outcomes, how likely the planned date is to be achieved (very unlikely), what is an aggressive date (say 11Feb14, a 30% probable date or P30), what is a likely date (say 16Feb14, the P50) and what is a conservative date (say 13Mar14, the P90).


Skewness


One of the reasons that it is important to be able to clearly visually represent the distribution of data in the histogram is that it gives us an indication of Skewness.
Skewness

Skewness refers to the asymmetry of data around a central point, and indicates a bias in the trending of results. Data can be positively skewed, negatively skewed, or without skew. A positive skew indicates a longer tail after the central measure, whereas a negative skew indicates a longer tail before the central point.

In schedule risk analysis, Skewness is important as it indicates the bias toward optimism or pessimism relative to the mean. If data is positively skewed, the P90 dates will be further from the mean than the P10 dates will be from the mean and the skew or bias is pessimistic. Conversely, if data is negatively skewed, the P90 dates will be closer to the mean than the P10 dates and the skew is optimistic. Understanding the potential for movement of schedule dates towards either extreme of the analysis is important in understanding overall risk exposure.

Kurtosis


In addition to the Skewness of a distribution, it is important to understand the overall Kurtosis or “peakedness” of results. The kurtosis describes the narrowness of the distribution or the extent to which results are tightly clustered around the mean.

In schedule risk analysis, it is important to understand this distribution as it can help establish or challenge the credibility of the model. Distributions that are very “peaky” around the mean are likely to be caused by narrow ranging of activities on and around the critical path, by deficiencies in the correlation model, or both.
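Skewness and Kurtosis can be computed directly from the iteration results, for example with scipy.stats, as in this short sketch on stand-in data.

# Sketch of measuring Skewness and Kurtosis of analysis results with
# scipy.stats, on hypothetical iteration data.
import numpy as np
from scipy import stats

rng = np.random.default_rng()
finish_offsets = rng.triangular(100, 120, 200, 5_000)  # positively skewed stand-in data

print(f"Skewness: {stats.skew(finish_offsets):.2f}  (> 0 indicates a longer pessimistic tail)")
print(f"Kurtosis: {stats.kurtosis(finish_offsets):.2f}  (higher values indicate a peakier distribution)")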


2.2.12 EXAMINATION OF SCHEDULE DRIVERS


Apart from the ability to assess schedule contingency, one of the key benefits of schedule risk analysis is that it enables the drivers of uncertainty within a model to be assessed and ranked. This is important and valuable because it identifies the tasks and risks for targeted actions to reduce the riskiness of the schedule most effectively.


What are sensitivities?


Assessing risk model drivers involves sensitivities. Sensitivities are measures of correlation or dependence between two random variables or sets of data. Correlation measures the tendency for the behaviour of one variable (the independent variable (IV)) to act as a predictor of the behaviour of another (the dependent variable (DV)) in terms of some quantifiable measure. Sensitivities can range in value from +1 (perfect direct or positive dependency), through 0 (no relationship), to -1 (perfect inverse or negative dependency). Positive values represent a positive relationship between the IV and the DV, and negative values represent a negative relationship. Sensitivities approaching 0 indicate progressively weaker associations between the IV and the DV, with a sensitivity of 0 indicating no statistically measurable relationship between the behaviours of the two variables. It is important to be aware that correlation does not measure causality. It does indicate the likelihood of causality, but a high sensitivity does not guarantee causality, as discussed further below.

What is their use?


Sensitivities are used in quantitative risk analysis to assess the effect of one element in a model on another element, or on a key measure within the model as a whole. Measuring the degree of relatedness between variables helps to identify and rank key elements within a model that may be positively or negatively affecting measured outcomes so that scarce project management resources may be focused on improving project outcomes most efficiently.

Duration sensitivity


Schedule risk analysis driver assessment measures the relatedness between the changes in each task’s duration from iteration to iteration and the change in overall project duration. This is known as duration sensitivity. Duration sensitivity is not limited to measuring impact at the project level; it is also measurable against the duration to any task, summary, or milestone within the plan. Similarly, duration sensitivity can also be measured from a summary task rather than an individual task, enabling, for example, the duration sensitivity of all mechanical construction tasks against the overall finish date of the project to be measured.

The problem with duration sensitivity (Criticality & Cruciality)



Unfortunately, duration sensitivity has one fundamental flaw; it doesn’t measure whether the independent variable is actually driving the dependent variable. It only looks at the correlation between its duration and the start or finish date of the observed target. Therefore, in schedule risk analysis, a hammock task that spanned from the start of the project to the end of the project would measure a perfect (100%) duration sensitivity when measured against the finish date of the project. This is because whenever the finish date extends, so too does the hammock task, thus creating a perfect correlation between the two observed elements. In reality, the hammock isn’t actually a determinant of the completion of the project at all, but rather a product of it. Similarly, a predecessor with very high float can have high duration sensitivity, but never drive the project end date because it is never on the critical path.

To assess whether a task is actually a determinant of the date of the dependent variable, a metric to measure the potential for the task to drive the date must be added.

That metric is called Criticality, which measures the percentage of iterations in which a task was on the critical path in any one simulation. The higher the Criticality, the more frequently the task was involved in the calculation of the completion date for the selected target. Multiplying Criticality and Duration Sensitivity together gives a criticality-moderated duration sensitivity metric known as Cruciality.
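The following Python sketch shows how the three metrics relate, computed from per-iteration records of task durations, project duration and critical path membership. The data are fabricated stand-ins, including a hammock task that illustrates the flaw described above.

# Sketch of Duration Sensitivity, Criticality and Cruciality computed from
# per-iteration records. All data here are fabricated stand-ins for real
# simulation output.
import numpy as np

rng = np.random.default_rng()
iterations = 2_000

project = rng.triangular(100, 120, 200, iterations)            # project duration per iteration
task_durations = {
    "Construct": project * 0.6 + rng.normal(0, 2, iterations),  # usually driving
    "Hammock":   project * 1.0,                                  # spans the whole project
}
on_critical_path = {"Construct": rng.random(iterations) < 0.8,       # critical in ~80% of iterations
                    "Hammock":   np.zeros(iterations, dtype=bool)}   # never actually driving

for task, durations in task_durations.items():
    sensitivity = np.corrcoef(durations, project)[0, 1]  # duration sensitivity
    criticality = on_critical_path[task].mean()          # share of iterations on the critical path
    cruciality = sensitivity * criticality               # criticality-moderated sensitivity
    print(f"{task:>9}: sensitivity {sensitivity:.2f}, criticality {criticality:.0%}, "
          f"cruciality {cruciality:.2f}")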

While Cruciality is a powerful indicator of the relative importance of tasks as drivers of a Target milestone or task within a schedule model, it is important to note that Cruciality only works in this way if the critical path(s) that directly drive the target milestone are able to be isolated. The presence of alternate critical paths that are unrelated to the completion of the dependent variable may confuse the criticality results and the consequent cruciality metric, even if only dependent predecessors of the target milestone have been selected. This is because other critical paths can cross over the paths to the target milestone in leading to non-target tasks.


Correlation and causation



When dealing with issues of correlation, it is easy to infer causation where none exists. This is especially true when dealing with large data sets from quantitative models such as a schedule, where there commonly exist sets of complex interactions between the elements in the model. The issue here is whether the changes in the IV are actually causing the perceived changes in the DV, or if the two may be related through a third variable. In statistics, this third variable is known as an “undeclared independent variable” and it has the ability to alter significantly the calculated values for any type of sensitivities.

In schedule risk analysis, perhaps the most frequent example of an undeclared independent variable is the presence of input duration correlation between tasks. These correlations form an integral part of the model in that they counter the “Central Limit Theorem” and prevent unrealistically narrow or “peaky” distributions. However, when looking at sensitivity calculations, the correlation between the sampling of one task’s duration and another task’s duration represents an undeclared and uncontrolled variable, which may modify and can invalidate the sensitivity result.

As mentioned earlier, sensitivity calculations do not measure the strength of the relationship between two variables, but the similarity in rates of change between them. Thus, if a very small duration distribution and a very large duration distribution are 100% correlated via the duration correlation model, this acts as an undeclared independent variable, and the sensitivity for the smaller duration distribution will be calculated as equal to that of the larger distribution when measured for influence on total start or finish date variability.

Thus, we see that while sensitivity calculations are useful measures for gaining some insight into the drivers of measured outcomes in the model, they are inherently flawed in that we can never fully control for undeclared independent variables.

In summary, sensitivity rankings cannot be adjusted for the effects of applied correlation groupings. Another more reliable means of ranking is needed. Ultimately this can only be done by removing each source of uncertainty from the Monte Carlo model, re-running the simulation and reporting the differences at chosen P-levels.




