The Basic Steps of a Simulation Study
The application of simulation involves specific steps in order for the simulation study to be successful. Regardless of the type of problem and the objective of the study, the process by which the simulation is performed remains constant. The following briefly describes the basic steps in the simulation process [6, 7]:

Problem Definition
The initial step involves defining the goals of the study and determining what needs to be solved. The problem is further defined through objective observations of the process to be studied. Care should be taken to determine if simulation is the appropriate tool for the problem under investigation.
Project Planning
The tasks for completing the project are broken down into work packages with a responsible party assigned to each package. Milestones are indicated for tracking progress. This schedule is necessary to determine if sufficient time and resources are available for completion.
System Definition
This step involves identifying the system components to be modeled and the performance measures to be analyzed. Often the system is very complex; thus, defining the system requires an experienced simulator who can find the appropriate level of detail and flexibility.
Model Formulation
Understanding how the actual system behaves and determining the basic requirements of the model are necessary in developing the right model. Creating a flow chart of how the system operates facilitates the understanding of what variables are involved and how these variables interact.
Input Data Collection & Analysis
After formulating the model, the type of data to collect is determined. New data is collected and/or existing data is gathered. Data is fitted to theoretical distributions. For example, the arrival rate of a specific part to the manufacturing plant may follow a normal distribution curve.
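As a sketch of this step, the snippet below fits a normal distribution to a hypothetical sample of inter-arrival data and applies a rough goodness-of-fit check. The data and parameter values are invented for illustration only; a real study would use dedicated distribution-fitting tools and formal tests such as chi-square or Kolmogorov-Smirnov.

```python
import random
import statistics

# Hypothetical sample of part inter-arrival times in minutes (illustrative data only).
random.seed(1)
data = [random.gauss(5.0, 1.2) for _ in range(500)]

# The maximum-likelihood fit of a normal distribution is simply the
# sample mean and (population) standard deviation.
mu = statistics.mean(data)
sigma = statistics.pstdev(data)

# Rough sanity check on the fit: for a normal distribution, about 68% of
# observations should fall within one standard deviation of the mean.
within_one_sigma = sum(abs(x - mu) <= sigma for x in data) / len(data)
print(f"mu={mu:.2f}, sigma={sigma:.2f}, within 1 sigma: {within_one_sigma:.0%}")
```

If the empirical coverage is far from the theoretical value, the chosen theoretical distribution is a poor candidate and others should be tried.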
Model Translation
The model is translated into a programming language or simulation package. Choices range from general-purpose languages such as Fortran to dedicated simulation programs such as Arena.
Verification & Validation
Verification is the process of ensuring that the model behaves as intended, usually by debugging or through animation. Verification is necessary but not sufficient for validation, that is a model may be verified but not valid. Validation ensures that no significant difference exists between the model and the real system and that the model reflects reality. Validation can be achieved through statistical analysis. Additionally, face validity may be obtained by having the model reviewed and supported by an expert.
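As a minimal illustration of statistical validation, the sketch below compares hypothetical daily throughput measurements from the real system against model output by checking whether approximate 95% confidence intervals for the two means overlap. The numbers are invented, and a real study would typically use a formal hypothesis test rather than this crude overlap check.

```python
import math
import statistics

# Hypothetical daily throughput observations (illustrative numbers only).
real_system = [102, 98, 105, 99, 101, 97, 103, 100]
model_output = [101, 99, 104, 98, 102, 100, 103, 99]

def mean_ci(data, z=1.96):
    """Approximate 95% confidence interval for the mean."""
    m = statistics.mean(data)
    half = z * statistics.stdev(data) / math.sqrt(len(data))
    return m - half, m + half

real_lo, real_hi = mean_ci(real_system)
model_lo, model_hi = mean_ci(model_output)

# Crude validation check: do the two confidence intervals overlap?
valid = real_lo <= model_hi and model_lo <= real_hi
print(f"real: ({real_lo:.1f}, {real_hi:.1f}), "
      f"model: ({model_lo:.1f}, {model_hi:.1f}), overlap: {valid}")
```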
Experimentation & Analysis
Experimentation involves developing the alternative model(s), executing the simulation runs, and statistically comparing the alternative(s) system performance with that of the real system.
Documentation & Reporting
Documentation consists of the written report and/or presentation. The results and implications of the study are discussed. The best course of action is identified, recommended, and justified.
Decisions for Simulating
Completing the required steps of a simulation study establishes the likelihood of the study’s success. Although knowing the basic steps in the simulation study is important, it is equally important to realize that not every problem should be solved using simulation. In the past, simulation required the specialized training of programmers and analysts dedicated to very large and complex projects. Now, due to the wide availability of simulation software, simulation is at times used inappropriately by individuals lacking sufficient training and experience. When simulation is applied inappropriately, the study will not produce meaningful results. The failure to achieve the desired goals of the simulation study may induce blaming the simulation approach itself, when in fact the cause of the failure lies in the inappropriate application of simulation [8].
To recognize if simulation is the correct approach to solving a particular problem, four items should be evaluated before deciding to conduct the study:
 Type of Problem
 Availability of Resources
 Costs
 Availability of Data
Type of Problem: If a problem can be solved by common sense or analytically, the use of simulation is unnecessary. Additionally, using algorithms and mathematical equations may be faster and less expensive than simulating. Also, if the problem can be solved by performing direct experiments on the system to be evaluated, then conducting direct experiments may be more desirable than simulating. To illustrate, recently the UH Transportation Department conducted field studies on expanding the campus shuttle system. The department used their own personnel and vehicles to perform the experiment during the weekend. In contrast, developing the simulation model for the shuttle system took one student several weeks to complete. However, one factor to consider when performing direct experiments is the degree to which the real system will be disturbed. If a high degree of disruption to the real system will occur, then another approach may be necessary. The real system itself plays another factor in deciding to simulate. If the system is too complex, cannot be defined, or cannot be understood, then simulation will not produce meaningful results. This situation often occurs when human behavior is involved.
Availability of Resources: People and time are the determining resources for conducting a simulation study. An experienced analyst is the most important resource since such a person has the ability and experience to determine both the model’s appropriate level of detail and how to verify and validate the model. Without a trained simulator, the wrong model may be developed, which produces unreliable results. Additionally, the allocation of time should not be so limited as to force the simulator to take shortcuts in designing the model. The schedule should allow enough time for the implementation of any necessary changes and for verification and validation to take place if the results are to be meaningful.
Costs: Cost considerations should be given to each step in the simulation process, to purchasing simulation software if not already available, and to computer resources. Obviously, if these costs exceed the potential savings from altering the current system, then simulation should not be pursued.
Availability of Data: The necessary data should be identified and located, and if the data does not exist, then the data should be collectible. If the data does not exist and cannot be collected, then continuing with the simulation study will eventually yield unreliable and useless results. The simulation output cannot be compared to the real system’s performance, which is vital for verifying and validating the model.
The basic steps and decisions for a simulation study are incorporated into a flowchart as shown below:
Steps and Decisions for Conducting a Simulation Study
Once simulation has been identified as the preferred approach to solving a particular problem, the decision to implement the course of action suggested by the simulation study’s results does not necessarily signify the end of the study, as indicated in the flowchart above. The model may be maintained to check the system’s response to variabilities experienced by the real system. However, the extent to which the model may be maintained largely depends on the model’s flexibility and what questions the model was originally designed to address.
STEPS IN THE SIMULATION PROCESS
Although simulations vary in complexity from situation to situation, in general one would have to go through the following steps:
Step 1→  Define the problem or system you intend to simulate.
Step 2→  Formulate the model you intend to use.
Step 3→  Test the model; compare its behaviour with the behaviour of the actual problem.
Step 4→  Identify and collect the data needed to test the model.
Step 5→  Run the simulation.
Step 6→  Analyze the results of the simulation and, if desired, change the solution you are evaluating.
Step 7→  Rerun the simulation to test the new solution.
Step 8→  Validate the simulation; this involves increasing the confidence in the inferences you may draw about the real situation.
Averill M. Law & Associates, Inc., Tucson, AZ
WSC ’03: Proceedings of the 35th conference on Winter simulation: driving innovation
ABSTRACT
In this tutorial we give a definitive and comprehensive seven-step approach for conducting a successful simulation study. Topics to be discussed include problem formulation, collection and analysis of data, developing a valid and credible model, modeling sources of system randomness, design and analysis of simulation experiments, and project management.
References
 Banks, J., J. S. Carson, B. L. Nelson, and D. M. Nicol. 2001. Discrete-Event System Simulation, Third Edition, Prentice-Hall, Upper Saddle River, N.J.
 Law, A. M. and W. D. Kelton. 2000. Simulation Modeling and Analysis, Third Edition, McGraw-Hill, New York.
Published in
 General Chair: David Ferrin
 Program Chair: Douglas J. Morrice, The University of Texas at Austin, Austin, TX
Simulation
Simulation is a flexible methodology we can use to analyze the behavior of a present or proposed business activity, new product, manufacturing line or plant expansion, and so on (analysts call this the ‘system’ under study). By performing simulations and analyzing the results, we can gain an understanding of how a present system operates, and what would happen if we changed it — or we can estimate how a proposed new system would behave. Often — but not always — a simulation deals with uncertainty, in the system itself, or in the world around it.
Simulation Applications
Simulation is one of the most widely used quantitative methods — because it is so flexible and can yield so many useful results. Here’s just a sample of the applications where simulation is used:
 Choosing drilling projects for oil and natural gas
 Evaluating environmental impacts of a new highway or industrial plant
 Setting stock levels to meet fluctuating demand at retail stores
 Forecasting sales and production requirements for a new drug
 Planning aircraft sorties and ship movements in the military
 Planning for retirement, given expenses and investment performance
 Deciding on reservations and overbooking policies for an airline
 Selecting projects with uncertain payoffs in capital budgeting
Simulation Models
In a simulation, we perform experiments on a model of the real system, rather than the real system itself. We do this because it is faster, cheaper, or safer to perform experiments on the model. While simulations can be performed using physical models — such as a scale model of an airplane — our focus here is on simulations carried out on a computer.
Computer simulations use a mathematical model of the real system. In such a model we use variables to represent key numerical measures of the inputs and outputs of the system, and we use formulas, programming statements, or other means to express mathematical relationships between the inputs and outputs. When the simulation deals with uncertainty, the model will include uncertain variables — whose values are not under our control — as well as decision variables or parameters that we can control. The uncertain variables are represented by random number generators that return sample values from a representative distribution of possible values for each uncertain element in each experimental trial or replication of the model. A simulation run includes many hundreds or thousands of trials.
Our simulation model — often called a risk model — will calculate the impact of the uncertain variables and the decisions we make on outcomes that we care about, such as profit and loss, investment returns, environmental consequences, and the like. As part of our model design, we must choose how numerical values for the uncertain variables will be sampled on each trial.
Simulation Methods
Complex manufacturing and logistics systems often call for discrete event simulation, where there are “flows” of materials or parts, people, etc. through the system, and many steps or stages with complex interrelationships. Special simulation modeling languages are often used for these applications.
But a great many situations — including almost all of the examples above — have been successfully handled with simulation models created in a spreadsheet using Microsoft Excel. This minimizes the learning curve, since you can apply your spreadsheet skills to create the model. Simple steps or stages, such as inventory levels in different periods, are easy to represent in columns of a spreadsheet model. You can solve a wide range of problems with Monte Carlo simulation of models created in Excel, or in a programming language such as Visual Basic, C++ or C#.
Running a simulation generates a great deal of statistical data that must be analyzed with appropriate tools. Professional simulation software, such as Frontline Systems’ Risk Solver, allows you to easily create charts and graphs and a wide range of statistics and risk measures, perform sensitivity analysis and parameterized simulations, and use advanced methods for simulation optimization.
Monte Carlo Simulation
Monte Carlo simulation — named after the city in Monaco famed for its casinos and games of chance — is a powerful method for studying the behavior of a system, as expressed in a mathematical model on a computer. As the name implies, Monte Carlo methods rely on random sampling of values for uncertain variables, that are “plugged into” the simulation model and used to calculate outcomes of interest. With the aid of software, we can obtain statistics and view charts and graphs of the results. To learn more, consult our Monte Carlo simulation tutorial.
Monte Carlo simulation is especially helpful when there are several different sources of uncertainty that interact to produce an outcome. For example, if we’re dealing with uncertain market demand, competitors’ pricing, and variable production and raw materials costs at the same time, it can be very difficult to estimate the impacts of these factors — in combination — on Net Profit. Monte Carlo simulation can quickly analyze thousands of ‘what-if’ scenarios, often yielding surprising insights into what can go right, what can go wrong, and what we can do about it.
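A minimal sketch of such a risk model in Python, with invented distributions for demand, price, and cost — all parameter choices here are assumptions for illustration only, not a recommended model of any real market:

```python
import random
import statistics

random.seed(0)

def one_trial():
    """One what-if scenario with hypothetical, illustrative distributions."""
    demand = random.gauss(10000, 2000)            # uncertain market demand (units)
    price = random.uniform(9.0, 12.0)             # uncertain competitive price ($)
    unit_cost = random.triangular(5.0, 8.0, 6.0)  # variable production cost ($)
    fixed_cost = 25000.0                          # known decision parameter
    return demand * (price - unit_cost) - fixed_cost

# A simulation run consists of many thousands of independent trials.
profits = [one_trial() for _ in range(50000)]

mean_profit = statistics.mean(profits)
prob_loss = sum(p < 0 for p in profits) / len(profits)
print(f"expected net profit: {mean_profit:,.0f}, P(loss): {prob_loss:.1%}")
```

Beyond the mean, the full list of trial outcomes supports exactly the risk measures discussed above: percentiles, probability of loss, and charts of the outcome distribution.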
Simulation modeling solves realworld problems safely and efficiently. It provides an important method of analysis which is easily verified, communicated, and understood. Across industries and disciplines, simulation modeling provides valuable solutions by giving clear insights into complex systems.
Bits not atoms. Simulation enables experimentation on a valid digital representation of a system. Unlike physical modeling, such as making a scale copy of a building, simulation modeling is computer-based and uses algorithms and equations. Simulation software provides a dynamic environment for the analysis of computer models while they are running, including the possibility to view them in 2D or 3D.
The uses of simulation in business are varied and it is often utilized when conducting experiments on a real system is impossible or impractical, often because of cost or time.
The ability to analyze the model as it runs sets simulation modeling apart from other methods, such as those using Excel or linear programming. By being able to inspect processes and interact with a simulation model in action, both understanding and trust are quickly built.
To learn how simulation is different from traditional mathematical modeling and check if it is applicable to your challenges, try AnyLogic yourself — download the Personal Learning Edition for free. This version is specially designed for self-learning, so you can freely explore the world of simulation!
Alternatively, start with our white paper based on the presentation by Lyle Wallis, director at PwC. It compares different approaches for modeling and analyzing business strategies and demonstrates the commercial use of simulation with case studies from world-famous companies.
Key benefits of simulation modeling include:
 risk-free environment
 save money and time
 visualization
 insight into dynamics
 increased accuracy
 handle uncertainty
The Use of Simulation with an Example: Simulation Modeling for Efficient Customer Service
This specific example may also be applicable to the more general problem of human and technical resource management, where companies naturally seek to lower the cost of underutilized resources, technical experts, or equipment, for example.
The problem: find the optimum number of staff to deliver a predefined quality of service to customers visiting a bank.
Firstly, for the bank, the level of service was defined as the average queue size. Relevant system measures were then selected to set the parameters of the simulation model – the number and frequency of customer arrivals, the time a teller takes to attend a customer, and the natural variations which can occur in all of these, in particular, lunch hour rushes and complex requests.
A flowchart corresponding to the structure and processes of the department was then created. Simulation models only need to consider those factors which impact the problem being analyzed. For example, the availability of office services for corporate accounts, or the credit department have no effect on those for individuals, because they are physically and functionally separate.
Finally, after feeding the model data, the simulation could be run and its operation seen over time, allowing refinement and analysis of the results. If the average queue size exceeded the specified limit, the number of available staff was increased and a new experiment was done. It is possible for this to happen automatically until an optimal solution is found.
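The staffing experiment described above can be sketched as a toy Python simulation. Everything here is an illustrative assumption rather than the bank’s actual model: a single first-come-first-served queue, exponential arrival and service times, the chosen rates, and the service target. The average queue size is estimated from the average wait via Little’s law.

```python
import heapq
import random

def simulate_bank(num_tellers, arrival_rate, service_rate, n_customers=20000, seed=7):
    """Toy single-queue, multi-teller simulation (illustrative sketch only).

    Returns the average queue size, estimated via Little's law:
    Lq = arrival_rate * average_wait_in_queue.
    """
    random.seed(seed)
    t = 0.0
    free_at = [0.0] * num_tellers      # times at which each teller is next free
    heapq.heapify(free_at)
    total_wait = 0.0
    for _ in range(n_customers):
        t += random.expovariate(arrival_rate)   # next customer arrives
        teller_free = heapq.heappop(free_at)    # earliest-available teller
        start = max(t, teller_free)             # customer waits if all tellers busy
        total_wait += start - t
        heapq.heappush(free_at, start + random.expovariate(service_rate))
    avg_wait = total_wait / n_customers
    return arrival_rate * avg_wait              # Little's law estimate of Lq

# Increase staff until the average queue size drops below the service target,
# mirroring the automated refinement loop described above.
target_queue = 2.0
staff = 1
while simulate_bank(staff, arrival_rate=2.0, service_rate=1.0) > target_queue:
    staff += 1
print(f"tellers needed: {staff}")
```

With two customers arriving per minute and each teller serving one per minute, fewer than three tellers cannot keep up, so the loop settles on three.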
Overall, multiple scenarios may be explored very quickly by varying parameters. They can be inspected and queried while in action and compared against each other. The results of the modeling and simulation, therefore, give confidence and clarity for analysts, engineers, and managers alike.
The RMS simulation tool in PowerFactory can be used to analyse mid-term and long-term transients under both balanced and unbalanced conditions, incorporating a simulation scan feature. DIgSILENT Simulation Language (DSL) is used for model definition, and a large library of IEEE standard models is available. Flexible co-simulation options are also available.
 Multiphase AC networks, DC networks
 Support of balanced and unbalanced grid conditions
 Fast, fixed step size and adaptive step size algorithm
 A-stable numerical integration algorithms supporting long-term stability simulations with integration step sizes ranging from milliseconds to minutes, individually selectable for each model
 High precision event and interrupt handling
 Simulation of any kind of fault or event
 Transient motor starting (synchr./asynchr. machines)
 Support of all protection library relays
 Real-time simulation mode
 Simulation scan feature, e.g. frequency scan, loss of synchronism scan, synchronous machine speed scan, voltage/voltage recovery scan, fault ride through scan or common variable scan
 Frequency Analysis Tool, including Fast Fourier Transform (FFT) and Prony Analysis for single point in time as well as time-range assessment
 Frequency Response Analysis tool for dynamic models with Bode/Nyquist plots
User-Defined Dynamic Models
 DIgSILENT Simulation Language (DSL) for Dynamic RMS Modelling
 Graphical editor for drawing any kind of block diagram (AVR, prime mover, relay, etc.)
 Fully flexible signal wiring schemes having access to any grid object and their parameters via definition of Frames
 Support of vector signals and signal multiplexing
 Nesting of frames and model building blocks
 Fully flexible definition of simulation functions via the DSL syntax
 High precision built-in macros and functions
 Automatic initialisation of complex, nonlinear models
 Configuration scripts for initialisation using DPL
 Large built-in standard model library, including IEEE and CIM ENTSO-E models
 Generic C interface for user-defined controller models
 IEC 61400-27-1 Ed. 1 Appendix F DLL-based interface for external models
 Automatic DSL-to-C interface converter
 Support of model precompilation for improved performance
 OPC interface 1 for real-time applications
 IEEE C37.118 simulation interface 2 for PMU data streaming
 DSL Encryption function 3
 Modelica Simulation Language
 Support of Modelica Simulation Language for discrete-time models
 Integration of external models using FMI (Functional Mock-up Interface)
 Support of Functional Mock-up Unit (FMU) import for Model Exchange
 FMU export of Modelica models 4
Co-Simulation Functionality
 Single domain co-simulation (RMS balanced – RMS balanced, RMS unbalanced – RMS unbalanced)
 Multiple domain co-simulation (RMS balanced – RMS unbalanced – EMT 5 )
 Co-simulation with external solver 6 (e.g. third party power systems simulation program) using FMI 2.0 (Functional Mock-up Interface)
 Computing supported as built-in for increased performance
 Both accurate (implicit) and fast (explicit) co-simulation methods available
 Support of multiport Norton/Thevenin remote network equivalents for explicit method
 Easy to define co-simulation border using boundary objects
 Any number of co-simulation regions can be defined
 Co-simulation of networks split by regions depending on any criteria: localisation, voltage levels, etc.
1 This function has to be requested separately
2 C37 Simulation Interface licence required
3 Licence for DPL/DSL/QDSL encryption (Encryption Function licence) required. DIgSILENT does not give any express warranties or guarantees for cryptographic security of encrypted models. In particular, DIgSILENT does not guarantee that the details and functionalities of an encrypted model are secure against all means of access or attack attempts.
4 Requires FMU Model Export licence
5 EMT licence required
6 Requires separate Co-Simulation Interface licence

What Is a Monte Carlo Simulation?
Monte Carlo simulations are used to model the probability of different outcomes in a process that cannot easily be predicted due to the intervention of random variables. It is a technique used to understand the impact of risk and uncertainty in prediction and forecasting models.
A Monte Carlo simulation can be used to tackle a range of problems in virtually every field such as finance, engineering, supply chain, and science. It is also referred to as a multiple probability simulation.
Key Takeaways
 A Monte Carlo simulation is a model used to predict the probability of different outcomes when the intervention of random variables is present.
 Monte Carlo simulations help to explain the impact of risk and uncertainty in prediction and forecasting models.
 A variety of fields utilize Monte Carlo simulations, including finance, engineering, supply chain, and science.
 The basis of a Monte Carlo simulation involves assigning multiple values to an uncertain variable to achieve multiple results and then averaging the results to obtain an estimate.
 Monte Carlo simulations assume perfectly efficient markets.
Understanding Monte Carlo Simulations
When faced with significant uncertainty in the process of making a forecast or estimation, rather than just replacing the uncertain variable with a single average number, a Monte Carlo simulation may prove a better solution by using multiple values.
Since business and finance are plagued by random variables, Monte Carlo simulations have a vast array of potential applications in these fields. They are used to estimate the probability of cost overruns in large projects and the likelihood that an asset price will move in a certain way.
Telecoms use them to assess network performance in different scenarios, helping them to optimize the network. Analysts use them to assess the risk that an entity will default, and to analyze derivatives such as options.
Insurers and oil well drillers also use them. Monte Carlo simulations have countless applications outside of business and finance, such as in meteorology, astronomy, and particle physics.
Monte Carlo Simulation History
Monte Carlo simulations are named after the popular gambling destination in Monaco, since chance and random outcomes are central to the modeling technique, much as they are to games like roulette, dice, and slot machines.
The technique was first developed by Stanislaw Ulam, a mathematician who worked on the Manhattan Project. After the war, while recovering from brain surgery, Ulam entertained himself by playing countless games of solitaire. He became interested in plotting the outcome of each of these games in order to observe their distribution and determine the probability of winning. After he shared his idea with John Von Neumann, the two collaborated to develop the Monte Carlo simulation.
Monte Carlo Simulation Method
The basis of a Monte Carlo simulation is that the probability of varying outcomes cannot be determined because of random variable interference. Therefore, a Monte Carlo simulation focuses on constantly repeating random samples to achieve certain results.
A Monte Carlo simulation takes the variable that has uncertainty and assigns it a random value. The model is then run and a result is provided. This process is repeated again and again while assigning the variable in question with many different values. Once the simulation is complete, the results are averaged together to provide an estimate.
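A classic minimal example of this repeat-and-average idea is estimating pi by random sampling:

```python
import random

random.seed(123)

# Estimate pi by repeated random sampling: the fraction of uniformly random
# points falling inside the unit quarter-circle approaches pi/4 as the
# number of trials grows.
n = 200000
hits = sum(random.random() ** 2 + random.random() ** 2 <= 1.0 for _ in range(n))
pi_estimate = 4 * hits / n
print(f"pi is approximately {pi_estimate:.3f}")
```

Each trial assigns random values to the uncertain inputs (here, the point’s coordinates), the result of each trial is recorded, and the aggregate over many trials provides the estimate.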
Calculating a Monte Carlo Simulation in Excel
One way to employ a Monte Carlo simulation is to model possible movements of asset prices using Excel or a similar program. There are two components to an asset’s price movement: drift, which is a constant directional movement, and a random input, which represents market volatility.
By analyzing historical price data, you can determine the drift, standard deviation, variance, and average price movement of a security. These are the building blocks of a Monte Carlo simulation.
To project one possible price trajectory, use the historical price data of the asset to generate a series of periodic daily returns using the natural logarithm (note that this equation differs from the usual percentage change formula):

Periodic Daily Return = LN(Day’s Price ÷ Previous Day’s Price)
Next use the AVERAGE, STDEV.P, and VAR.P functions on the entire resulting series to obtain the average daily return, standard deviation, and variance inputs, respectively. The drift is equal to:

Drift = Average Daily Return − (Variance ÷ 2)
Alternatively, drift can be set to 0; this choice reflects a certain theoretical orientation, but the difference will not be huge, at least for shorter time frames.
Next, obtain a random input:

Random Value = Standard Deviation × NORMSINV(RAND())
The equation for the following day’s price is:

Next Day’s Price = Today’s Price × e^(Drift + Random Value)
To take e to a given power x in Excel, use the EXP function: EXP(x). Repeat this calculation the desired number of times (each repetition represents one day) to obtain a simulation of future price movement. By generating an arbitrary number of simulations, you can assess the probability that a security’s price will follow a given trajectory.
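The spreadsheet recipe above can be sketched in Python. The historical prices here are synthetic stand-ins (real usage would load actual closing prices), and `random.gauss(0, 1)` plays the role of NORMSINV(RAND()):

```python
import math
import random
import statistics

random.seed(3)

# Synthetic stand-in for historical closing prices (illustrative only).
prices = [100.0]
for _ in range(250):
    prices.append(prices[-1] * math.exp(random.gauss(0.0005, 0.01)))

# Periodic daily returns via the natural logarithm.
returns = [math.log(b / a) for a, b in zip(prices, prices[1:])]
avg = statistics.mean(returns)        # AVERAGE
var = statistics.pvariance(returns)   # VAR.P
stdev = math.sqrt(var)                # STDEV.P
drift = avg - var / 2

# One simulated 30-day price trajectory, mirroring the spreadsheet recursion.
sim = [prices[-1]]
for _ in range(30):
    random_input = stdev * random.gauss(0.0, 1.0)  # stands in for NORMSINV(RAND())
    sim.append(sim[-1] * math.exp(drift + random_input))
print(f"start: {sim[0]:.2f}, simulated price after 30 days: {sim[-1]:.2f}")
```

Wrapping the trajectory loop in an outer loop over many replications yields the distribution of outcomes discussed in the next section.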
Special Considerations
The frequencies of different outcomes generated by this simulation will form a normal distribution, that is, a bell curve. The most likely return is in the middle of the curve, meaning there is an equal chance that the actual return will be higher or lower than that value.
The probability that the actual return will be within one standard deviation of the most probable (“expected”) rate is 68%, while the probability that it will be within two standard deviations is 95%, and that it will be within three standard deviations is 99.7%. Still, there is no guarantee that the most expected outcome will occur, or that actual movements will not exceed the wildest projections.
Crucially, Monte Carlo simulations ignore everything that is not built into the price movement (macro trends, company leadership, hype, cyclical factors); in other words, they assume perfectly efficient markets.
Book a place
 We don’t have a date for this course yet. Please contact CPD Administrator to register your interest.
Overview
Simulation studies are an important tool for statistical research. They help statisticians and researchers understand the properties of statistical methods and compare different methods.
This twoday course will help you understand how simulation studies work, so you can critique published simulation studies and design one yourself.
You’ll learn how to plan, code, analyse and interpret simulation studies using Stata or R (data analysis and statistical software).
This course is run by the Institute of Clinical Trials and Methodology at UCL.
Course content and structure
Topics covered by the course include the following:
 Planning a simulation study
 Coding a simulation study
 Analysing simulation studies
 Analysing your simulation study and feeding back results
 Reporting simulation studies
The course involves a series of lectures and computer practicals, where you can use Stata or R.
Examples will be taken from the lecturers’ experiences, primarily relating to medical statistics. The principles for simulation studies in other applied areas are the same, but the examples may be less relevant.
Who this course is for
This course is suitable for:
 methodological or applied statisticians who need to evaluate the statistical properties of one or more methods
 PhD students who use simulation studies
 readers of methodology articles that evaluate methods by simulation
Entry requirements
You should already be familiar with Stata or R and know, for example, how to generate data, run a regression command and produce simple graphs.
You’ll need to bring your own laptop with Stata (version 12 or later) or a recent version of R installed.
Learning outcomes
By the end of this course you’ll be able to:
 critically read and evaluate simulation studies in the statistical literature
 conduct a simulation study
 explain the rationale for simulation
 understand the importance of careful planning
 code and debug simple simulation studies in Stata
 analyse simulation studies producing estimates of uncertainty
 present methods and results for publication
Cost and concessions
 standard – £270
 UCL staff and students – £150
 staff and students based at the ICTM – free
Course team
Dr Tim Morris
Tim is a statistician interested in practical methods for improving the design and analysis of randomised trials and observational studies. He’s based at the MRC Clinical Trials Unit at UCL, part of the Institute of Clinical Trials and Methodology.
Professor Ian White
Ian is a medical statistician with an interest in developing new methodology for design and analysis of clinical trials, metaanalysis and observational studies. He joined the MRC Clinical Trials Unit at UCL in 2017 after spending 16 years as a programme leader at the Medical Research Council’s Biostatistics Unit in Cambridge.
Dr Michael J. Crowther
Michael is Associate Professor of Biostatistics in the Biostatistics Research Group at the University of Leicester. He’s a Section Editor for the Journal of Statistical Software and an Associate Editor for the Stata Journal. His main research interests include survival analysis, multilevel and mixed effects models, and statistical software development.
“Despite having run simulation studies in the past I learned so much. Thank you.”
“Tutors were really helpful during practicals. It was good that we had the opportunity to think about and develop simulations from scratch with cheatsheets available if required.”
Book a place
 We don’t have a date for this course yet. Please contact CPD Administrator to register your interest.
Course information last modified: 1 Nov 2021, 15:16
Manufacturing simulation allows organizations in the manufacturing industry to analyze and experiment with their processes in a virtual environment, reducing the time and cost requirements associated with physical testing.
What is manufacturing simulation?
It’s the computer-based modeling of a real production system. Inventory, assembly, transportation and production can all be considered within a simulation model, resulting in decisions that can maintain or improve efficiency at the lowest possible cost.
What is FlexSim?
FlexSim is a powerful yet easy-to-use software package for simulation modeling. A fast and accurate simulation engine is hidden behind drag-and-drop controls, drop-down lists, and many other intuitive features that make it accessible for anyone to build a model. All simulation models are created to scale and are presented using 3D visuals, so it becomes easy to view and recognize bottlenecks in the production line or other deficiencies within the system. FlexSim also gives decision makers the data to confirm their observations, with impressive data reporting and analysis built right into the software.
Why simulate manufacturing?
The need for efficiency in the manufacturing industry has never been greater, with material, transportation and labor costs continuing to rise each year. Successful companies need to ensure that the costs associated with time, equipment and other investments are being considered and optimized. At its core, manufacturing simulation is an inexpensive, risk-free way to test anything from simple revisions to complete redesigns, always with the purpose of meeting production goals at the lowest possible cost. Simulation also provides a way to test and implement principles of Lean manufacturing and Six Sigma. And unlike spreadsheet-based analysis and forecasting, manufacturing simulation offers a quick and efficient method to adjust parameters and get faster results.
Why simulate with FlexSim?
Answer important questions – Manufacturing simulation with FlexSim can answer important questions that decision makers face every day. Just a few examples:
 Would adding equipment in the manufacturing plant increase throughput or create an unforeseen bottleneck?
 Can we add a new product line and still meet production goals?
 Can we identify and minimize the causes of defects?
Full 3D simulation – What do 3D visuals really add to a simulation model? How about an immersive experience that helps you and your colleagues truly understand what’s going on! FlexSim brings a visual experience to simulation modeling, providing rich 3D objects and enhanced realism. 3D simulation modeling brings the model to life, and aids in communication and discourse for staff members at all levels.
Easy to use, yet powerful – FlexSim contains user-friendly tools like drop-down menus and drag-and-drop functionality that allow beginners to build and test models in just minutes, with no background in computer coding. And our newest logic-building feature makes it easier when things start to get complex. Simply map out the process using activities in this powerful flowcharting tool—no need for coding.
In This Topic
 What is a sensitivity analysis?
 Perform a sensitivity analysis
 Examine the results
 Edit the model
What is a sensitivity analysis?
A sensitivity analysis shows how variation in the inputs affects variation in the output. It often follows parameter optimization, which focuses on finding optimal settings for the inputs. For more information, go to Perform a parameter optimization.
Perform a sensitivity analysis
Use the sensitivity analysis to evaluate the effects of the input variation on the output variation.
 Choose Simulation > Sensitivity Analysis.
 If you have more than one output, a dropdown list appears so that you can choose the output that you want to examine.
 Examine the graph.
 Look for inputs that have sloped lines. Consider the relationship between changes to the standard deviation of the input and the % out-of-specification.
 Look for inputs with flat lines. These inputs have little effect on the variability so you may be able to reduce the tolerances. You can bring this information back to your engineering team for consideration.
If the lines are close together, you can isolate an input line by choosing it from the graph legend, which helps you to select and evaluate changes.
If you prefer to evaluate the standard deviation rather than the % out-of-specification, select Standard Deviation on the y-axis label. % Out of Spec is only available when you have specification limits.
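As a rough illustration of what such a sensitivity analysis does under the hood, the following Python sketch (not Minitab's actual engine; the two-input model, means, and spec limits are invented for illustration) varies the standard deviation of one input and estimates the % out-of-specification by simulation:

```python
import random

def pct_out_of_spec(sigma_a, n=20000, seed=1):
    """Estimate the % of outputs outside spec limits when input A has
    standard deviation sigma_a. Hypothetical model: output gap = A - B."""
    rng = random.Random(seed)
    out_of_spec = 0
    for _ in range(n):
        a = rng.gauss(10.0, sigma_a)   # input A: mean 10, varied spread
        b = rng.gauss(9.0, 0.05)       # input B: mean 9, fixed spread
        gap = a - b                    # output of interest
        if not (0.5 <= gap <= 1.5):    # spec limits on the gap
            out_of_spec += 1
    return 100.0 * out_of_spec / n

# Tightening input A's tolerance lowers the % out of specification:
for sigma in (0.30, 0.15, 0.05):
    print(f"sigma_A={sigma:.2f}  ->  {pct_out_of_spec(sigma):.1f}% out of spec")
```

An input whose tolerance barely moves this percentage corresponds to a "flat line" in the graph described above.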
Examine the results
Engage displays the results of the sensitivity analysis, assumptions, and guidance for next steps.
 View the results in the workspace. You can switch between the model view and the results view of the simulation.
Each time you repeat the simulation, the results will vary because the simulation is based on randomly selected values for the inputs.
Edit the model
After you analyze the results, you may want to return to the model and change inputs or outputs, then rerun it. This lets you test several possible scenarios so that you can get insight into the behavior of your system and make better decisions.
Start learning from our course Library of 7,500+ on-demand video tutorials in the top engineering design software and methods.
 SOLIDWORKS
 Inventor
 AutoCAD
 Revit
 Fusion 360
 SketchUp
 Mastercam
 CAMWorks
 MATLAB
 ANSYS
 CATIA
 Solid Edge
 Meshmixer
 Rhino
 Onshape
 DraftSight
 Abaqus
 Civil 3D
 Bluebeam
 Construction Cloud
SolidProfessor for Business
Upskill your design team with our industry-trusted engineering training platform. Get full access to 7,500+ video tutorials, skills assessments, and progress tracking.
SolidProfessor for Schools
Educate the next generation of engineers, architects, and manufacturers. Our expert-guided online lessons help you get students certified and prepped for college and career.
Keep your engineering design, management, and teaching skills sharp with the help of our free resources.
We’re the go-to resource for on-demand, self-paced design training for professionals, design teams, and educators. We know engineering and architectural design software inside and out!
Solutions for brain researchers to conduct sustainable simulation studies and share their results
Forschungszentrum Jülich / SBC Lehmann
Aix-Marseille Université / Viktor Jirsa
Morphologies: Markram, H., et al. (2016). Zenodo. doi:10.5281/zenodo.57082; Mercer, A., Thomson, A. M. Front. Neuroanat. doi:10.3389/fnana.2017.00083; Morphologies and model: Migliore et al. (2018). PLoS Comput Biol. doi:10.1371/journal.pcbi.1006423
In research, simulation acts as the main conduit for interchange between experiment and theory. It is a powerful instrument in our endeavour to understand the human brain, which is a complex dynamic system with a multiscale architecture, further complicated by significant differences between one person’s brain and another’s. The complexity and versatility of the brain, and the variations from one brain to another, are major scientific challenges, driving the development of simulation technology.
Neuroscientists have different views on how to best tackle the intricacy of such complex systems, advocating approaches that range from holistic to minimalistic, from the notion that only realistic models can account for the inner workings of our brains, to the assumption that only systematic model simplification will allow us to uncover fundamental principles. Likewise, brain models differ in size, complexity and level of detail. EBRAINS Simulation services offer technical solutions for brain researchers to conduct sustainable simulation studies and build upon prior work, and the means to share their results. The services provide integrated workflows for model creation, simulation and validation, including data analysis and visualisation. The simulation engines cover the entire spectrum of levels of description ranging from cellular to network to whole brain level.
Risk analysis is part of every decision we make. We are constantly faced with uncertainty, ambiguity, and variability. And even though we have unprecedented access to information, we can’t accurately predict the future. Monte Carlo simulation (also known as the Monte Carlo Method) lets you see all the possible outcomes of your decisions and assess the impact of risk, allowing for better decision making under uncertainty.
What is Monte Carlo Simulation?
Monte Carlo simulation is a computerized mathematical technique that allows people to account for risk in quantitative analysis and decision making. The technique is used by professionals in such widely disparate fields as finance, project management, energy, manufacturing, engineering, research and development, insurance, oil & gas, transportation, and the environment.
Monte Carlo simulation furnishes the decision-maker with a range of possible outcomes and the probabilities they will occur for any choice of action. It shows the extreme possibilities—the outcomes of going for broke and for the most conservative decision—along with all possible consequences for middle-of-the-road decisions.
The technique was first used by scientists working on the atom bomb; it was named for Monte Carlo, the Monaco resort town renowned for its casinos. Since its introduction in World War II, Monte Carlo simulation has been used to model a variety of physical and conceptual systems.
How Monte Carlo Simulation Works
Monte Carlo simulation performs risk analysis by building models of possible results by substituting a range of values—a probability distribution—for any factor that has inherent uncertainty. It then calculates results over and over, each time using a different set of random values from the probability functions. Depending upon the number of uncertainties and the ranges specified for them, a Monte Carlo simulation could involve thousands or tens of thousands of recalculations before it is complete. Monte Carlo simulation produces distributions of possible outcome values.
By using probability distributions, variables can have different probabilities of different outcomes occurring. Probability distributions are a much more realistic way of describing uncertainty in variables of a risk analysis.
Common probability distributions include:
Normal – Or “bell curve.” The user simply defines the mean or expected value and a standard deviation to describe the variation about the mean. Values in the middle near the mean are most likely to occur. It is symmetric and describes many natural phenomena such as people’s heights. Examples of variables described by normal distributions include inflation rates and energy prices.
Lognormal – Values are positively skewed, not symmetric like a normal distribution. It is used to represent values that don’t go below zero but have unlimited positive potential. Examples of variables described by lognormal distributions include real estate property values, stock prices, and oil reserves.
Uniform – All values have an equal chance of occurring, and the user simply defines the minimum and maximum. Examples of variables that could be uniformly distributed include manufacturing costs or future sales revenues for a new product.
Triangular – The user defines the minimum, most likely, and maximum values. Values around the most likely are more likely to occur. Variables that could be described by a triangular distribution include past sales history per unit of time and inventory levels.
PERT – The user defines the minimum, most likely, and maximum values, just like the triangular distribution. However, values between the most likely and the extremes are more likely to occur than in the triangular distribution; that is, the extremes are not as emphasized. An example of the use of a PERT distribution is to describe the duration of a task in a project management model.
Discrete – The user defines specific values that may occur and the likelihood of each. An example might be the results of a lawsuit: 20% chance of positive verdict, 30% chance of negative verdict, 40% chance of settlement, and 10% chance of mistrial.
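Most of these distributions can be sampled directly with Python's standard random module; the parameter values below are arbitrary examples, not from any particular model (the PERT distribution has no stdlib equivalent and is omitted):

```python
import random

rng = random.Random(42)  # seeded so the draws are reproducible

normal = rng.gauss(mu=3.0, sigma=0.5)            # e.g. an inflation rate (%)
lognormal = rng.lognormvariate(mu=0.0, sigma=0.25)  # positive-only, skewed
uniform = rng.uniform(10.0, 20.0)                # equal chance across the range
triangular = rng.triangular(low=5, high=15, mode=8)  # min, max, most likely
# Discrete: named outcomes with explicit probabilities (the lawsuit example)
verdict = rng.choices(
    ["positive", "negative", "settlement", "mistrial"],
    weights=[0.20, 0.30, 0.40, 0.10],
)[0]
```

Each call draws one value; a Monte Carlo simulation repeats such draws for every uncertain input on every iteration.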
During a Monte Carlo simulation, values are sampled at random from the input probability distributions. Each set of samples is called an iteration, and the resulting outcome from that sample is recorded. Monte Carlo simulation does this hundreds or thousands of times, and the result is a probability distribution of possible outcomes. In this way, Monte Carlo simulation provides a much more comprehensive view of what may happen. It tells you not only what could happen, but how likely it is to happen.
Monte Carlo simulation provides a number of advantages over deterministic, or “single-point estimate,” analysis:
 Probabilistic Results. Results show not only what could happen, but how likely each outcome is.
 Graphical Results. Because of the data a Monte Carlo simulation generates, it’s easy to create graphs of different outcomes and their chances of occurrence. This is important for communicating findings to other stakeholders.
 Sensitivity Analysis. With just a few cases, deterministic analysis makes it difficult to see which variables impact the outcome the most. In Monte Carlo simulation, it’s easy to see which inputs had the biggest effect on bottom-line results.
 Scenario Analysis: In deterministic models, it’s very difficult to model different combinations of values for different inputs to see the effects of truly different scenarios. Using Monte Carlo simulation, analysts can see exactly which inputs had which values together when certain outcomes occurred. This is invaluable for pursuing further analysis.
 Correlation of Inputs. In Monte Carlo simulation, it’s possible to model interdependent relationships between input variables. It’s important for accuracy to represent how, in reality, when some factors go up, others go up or down accordingly.
An enhancement to Monte Carlo simulation is the use of Latin Hypercube sampling, which samples more accurately from the entire range of distribution functions.
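A minimal one-dimensional sketch of the Latin Hypercube idea, using only the standard library: divide [0, 1) into n equal strata and draw exactly one point from each, so no region of the distribution is missed by chance (real tools extend this to many dimensions and map the points through each input's distribution):

```python
import random

def latin_hypercube_1d(n, seed=5):
    """Draw n points in [0, 1): one uniform point per equal-width
    stratum, then shuffle so the ordering carries no pattern."""
    rng = random.Random(seed)
    points = [(i + rng.random()) / n for i in range(n)]
    rng.shuffle(points)
    return points

# Every tenth of the range gets exactly one sample; plain random
# sampling can leave whole regions empty by chance.
print(sorted(latin_hypercube_1d(10)))
```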
Monte Carlo Simulation with Palisade
The advent of spreadsheet applications for personal computers provided an opportunity for professionals to use Monte Carlo simulation in everyday analysis work. Microsoft Excel is the dominant spreadsheet analysis tool and Palisade’s @RISK is the leading Monte Carlo simulation add-in for Excel. First introduced for Lotus 1-2-3 for DOS in 1987, @RISK has a long-established reputation for computational accuracy, modeling flexibility, and ease of use.
Welcome to the future of design validation. Easily test performance, optimize durability, and improve efficiency.
One platform, many possibilities
CFD, FEA, and thermal simulation software transformed from a desktop application into an online platform
Works with your standard CAD tool
SimScale supports all standard 3D files so you can continue using the CAD system you are familiar with.
Integrations
Use the SimScale add-ons to make your workflow as simple as possible.
Simple yet extremely powerful
At SimScale, we value your time. That’s why we reinvented the simulation software workflow with the goal of reducing your time-to-result from weeks to minutes.
Easy Setup
Most simulation software tools are complex and difficult to understand. SimScale uses a lean simulation workflow that guides you through the process step-by-step. And with the help of ready-to-use simulation templates, you never have to start from scratch.
Worldclass support
Your success is our priority. Our experienced team of engineers is there to answer any of your questions in real time via email, phone, or chat directly from within the platform. This way you can take advantage of simulation technology independent of your expertise level.
From weeks to minutes
With SimScale, you can test multiple design versions in parallel and quickly identify the best-performing one. Even in the case of large or complex designs, access to up to 96 cores and real-time simulation allows you to get your results faster than ever before.
Zero hardware and software footprint
With SimScale, investing in expensive highperformance computing hardware and caring for simulation software installation and maintenance are a thing of the past.
Zero hardware or maintenance investment
By leveraging the power of the cloud, SimScale helps you save on average $30,000 by cutting the cost of expensive hardware and software maintenance fees.
No installation. No maintenance
All you need is an Internet connection to run demanding simulations on a laptop or PC of your choice. We take care of software updates for you in the background.
Seamless collaboration
With SimScale, globally distributed design and engineering teams can easily share and collaborate on their projects in real time. No more hassle with data exchange or expanding the usage of CAE software across your organization.
Supported by over 250,000 users
Engineers. Designers. Scientists.
Explore the community
“Thanks to the platform’s ease of use, the professional support of SimScale engineers and the perfect communication with them, we were able to efficiently perform simulations and sort out our design problems. It’s hard to imagine how much physical prototyping time and measures we saved.”
CEO at CRYO Science
“The nice thing about SimScale is that you can essentially run an unlimited number of simulations simultaneously with no reduction in computational resources, because SimScale opens up each simulation on independent nodes. This is a huge advantage for us”
CAE Engineer at Johnson Screens – Aqseptence Group
“SimScale moves complex simulations to the next level. We are thrilled about the user-friendly interface, great customer support, and computing power, which enables us to handle our projects more efficiently.”
Civil Engineer at Ingenieurbüro Hausladen GmbH
“I loved the fact that I could do what I need to do in a browser. I started working without installing any software. Using any computer I could upload the model and start the analysis. That was the big “wow” moment.”
President at Custom Machines
Latest from SimScale
Your hub for everything you need to know about our simulation software and the world of CAE
What is Monte Carlo Simulation?
 How does Monte Carlo Simulation work?
 How to use Monte Carlo methods
 Monte Carlo Simulations and IBM
What is Monte Carlo Simulation?
Monte Carlo Simulation, also known as the Monte Carlo Method or a multiple probability simulation, is a mathematical technique used to estimate the possible outcomes of an uncertain event. The Monte Carlo Method was invented by John von Neumann and Stanislaw Ulam during World War II to improve decision making under uncertain conditions. It was named after Monte Carlo, a well-known casino town in Monaco, since the element of chance is core to the modeling approach, similar to a game of roulette.
Since its introduction, Monte Carlo Simulations have assessed the impact of risk in many real-life scenarios, such as in artificial intelligence, stock prices, sales forecasting, project management, and pricing. They also provide a number of advantages over predictive models with fixed inputs, such as the ability to conduct sensitivity analysis or calculate the correlation of inputs. Sensitivity analysis allows decision-makers to see the impact of individual inputs on a given outcome, and correlation allows them to understand relationships between any input variables.
How does Monte Carlo Simulation work?
Unlike a normal forecasting model, Monte Carlo Simulation predicts a set of outcomes based on an estimated range of values rather than a set of fixed input values. In other words, a Monte Carlo Simulation builds a model of possible results by leveraging a probability distribution, such as a uniform or normal distribution, for any variable that has inherent uncertainty. It then recalculates the results over and over, each time using a different set of random numbers between the minimum and maximum values. In a typical Monte Carlo experiment, this exercise can be repeated thousands of times to produce a large number of likely outcomes.
Monte Carlo Simulations are also utilized for long-term predictions due to their accuracy. As the number of inputs increases, the number of forecasts also grows, allowing you to project outcomes farther out in time with more accuracy. When a Monte Carlo Simulation is complete, it yields a range of possible outcomes with the probability of each result occurring.
One simple example of a Monte Carlo Simulation is to consider calculating the probability of rolling two standard dice. There are 36 combinations of dice rolls. Based on this, you can manually compute the probability of a particular outcome. Using a Monte Carlo Simulation, you can simulate rolling the dice 10,000 times (or more) to achieve more accurate predictions.
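A minimal version of that dice experiment can be written in a few lines of Python (the function name and sample size are illustrative choices):

```python
import random

def estimate_prob_sum(target, rolls=100000, seed=0):
    """Estimate the probability that two standard dice sum to `target`
    by simulating many rolls and counting the hits."""
    rng = random.Random(seed)
    hits = sum(
        1 for _ in range(rolls)
        if rng.randint(1, 6) + rng.randint(1, 6) == target
    )
    return hits / rolls

# The exact probability of rolling a 7 is 6/36 ≈ 0.1667;
# the simulated estimate converges to it as `rolls` grows.
print(estimate_prob_sum(7))
```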
How to use Monte Carlo methods
Regardless of what tool you use, Monte Carlo techniques involve three basic steps:
 Set up the predictive model, identifying both the dependent variable to be predicted and the independent variables (also known as the input, risk or predictor variables) that will drive the prediction.
 Specify probability distributions of the independent variables. Use historical data and/or the analyst’s subjective judgment to define a range of likely values and assign probability weights for each.
 Run simulations repeatedly, generating random values of the independent variables. Do this until enough results are gathered to make up a representative sample of the near infinite number of possible combinations.
You can run as many Monte Carlo Simulations as you wish by modifying the underlying parameters you use to simulate the data. However, you’ll also want to compute the range of variation within a sample by calculating the variance and standard deviation, which are commonly used measures of spread. The variance of a given variable is the expected value of the squared difference between the variable and its expected value. Standard deviation is the square root of the variance. Typically, smaller variances are considered better.
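As a small illustration of those measures of spread, the sketch below compares two hypothetical sets of simulated outcomes (the means and spreads are invented) using Python's statistics module:

```python
import random
import statistics

rng = random.Random(7)
# Simulated outcomes from two hypothetical processes with the same mean
process_a = [rng.gauss(100, 5) for _ in range(10000)]   # tight spread
process_b = [rng.gauss(100, 15) for _ in range(10000)]  # wide spread

for name, sample in (("A", process_a), ("B", process_b)):
    var = statistics.variance(sample)  # expected squared deviation from the mean
    sd = statistics.stdev(sample)      # square root of the variance
    print(f"process {name}: variance={var:.1f}, std dev={sd:.1f}")
# Process A's smaller variance means its outcomes are more predictable.
```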
Read more about how to conduct a Monte Carlo simulation here (link resides outside IBM)
Monte Carlo Simulations and IBM
Although you can perform Monte Carlo Simulations with a number of tools, like Microsoft Excel, it’s best to have a sophisticated statistical software program, such as IBM SPSS Statistics, which is optimized for risk analysis and Monte Carlo simulations. IBM SPSS Statistics is a powerful statistical software platform that delivers a robust set of features that lets your organization extract actionable insights from its data.
With SPSS Statistics you can:
 Analyze and better understand your data and solve complex business and research problems through a userfriendly interface.
 More quickly understand large and complex data sets with advanced statistical procedures that help ensure high accuracy and quality decision making.
 Use extensions, Python, and R programming language code to integrate with opensource software.
 More easily select and manage your software with flexible deployment options.
Using the simulation module in SPSS Statistics, you can, for example, simulate various advertising budget amounts and see how that affects total sales. Based on the outcome of the simulation, you might decide to spend more on advertising to meet your total sales goal. Read more about how to use IBM SPSS Statistics for Monte Carlo simulations here (link resides outside IBM).
IBM Cloud Functions can also assist in Monte Carlo Simulations. IBM Cloud Functions is a serverless functions-as-a-service platform that executes code in response to incoming events. Using IBM Cloud Functions, an entire Monte Carlo Simulation was completed in just 90 seconds with 1,000 concurrent invocations. Read more about how to conduct a Monte Carlo Simulation using IBM tooling here.
For more information on Monte Carlo Simulations, sign up for an IBMid and create your IBM Cloud account.
Bringing real and simulated data together
JASmine is a Java platform that aims to provide a unique simulation tool for discrete-event simulations, including agent-based and microsimulation models.
Key Benefits
In developing large-scale, data-driven models, JASmine aims to use standard, open-source tools already available in the software development community whenever possible.
Embedded RDBMS (relational database management systems) tools and automatic CSV table creation
Facilitates the design of experiments (DOE)
Complete separation of regression specifications from the code, permitting uncertainty analysis of the model outcome by bootstrapping the estimated coefficients across different simulation runs
Key Features
JASmine allows the separation of data representation and management, which is automatically taken care of by the simulation engine, from the implementation of processes and behavioural algorithms, which should be the primary concern of the modeler. This results in quicker, more robust and more transparent model building.
 A discrete-event simulation engine, allowing for both discrete-time and continuous-time simulation modelling
 A Model-Collector-Observer structure
 Interactive (GUI-based), batch and multi-run execution modes, the latter allowing for detailed design of experiments (DOE)
 A library implementing a number of different matching methods, to match different lists of agents
 A library implementing a number of different alignment methods (including binary and multiple choice alignment), to force the microsimulation outcomes to meet some exogenous aggregate targets
 A Regression library implementing a number of common econometric models, from continuous response linear regression models to binomial and multinomial logit and probit models, which includes automatic bootstrapping of the coefficients for uncertainty analysis of the model outcomes
 A statistical package based on the cern.jet libraries
 Embedded H2 database
 Export to .CSV files as a faster alternative to database persistence
 MS Excel I/O communication tools
 Automatic GUI creation for parameters by using Java annotation
 Automatic output database creation
 Automatic agents’ sampling and recording
 Powerful probes for real-time statistical analysis and data collection
 A rich graphical library for real-time plotting of simulation outcomes
 An Eclipse plugin, which allows you to create a JASmine project in just a few clicks, with template classes organised in the JASmine standard package and folder structure
 Maven-based build and dependency management
Next steps
Learn how to obtain and install JASmine, work through a sample project, and discover all classes and methods available in the API
Download
Download the latest version of the software from our repository
Learn
Learn more about the documentation, including the cookbook and tutorials
Discover
Work through the step-by-step demo models, starting with the Demo07 sample model.
Create
Create your own model, using classes and methods available in the API.
Sign up
Institute for Social and Economic Research
University of Essex, Wivenhoe Park,
Colchester, Essex, CO4 3SQ UK
+44 (0)1206 872957
 Accessibility
 Information Security
 Cookie Policy
 Privacy Policy
Institutional member of the International Microsimulation Association
TSISCORSIM 2022 Release
TSIS-CORSIM 2022 is now available with a Bing Maps background option for your simulation network and a redesigned interface for an enhanced user experience.
Traffic Software Integrated System – Corridor Simulation
TSIS is an integrated modeling environment that enables users to conduct traffic operations analysis. Built using a component architecture, TSIS is a toolbox that allows the user to define and manage traffic analysis projects, model traffic networks, create inputs for traffic simulation analysis, run traffic simulation, and interpret the results of those models.
CORSIM is a microscopic traffic simulation capable of modeling surface streets, freeways, highways, and integrated networks, including segments, weaves, merges/diverges, and intersections (stop/yield signs and traffic signals). It simulates traffic and traffic control systems using research-backed and established vehicle and driver behavior models.
According to the Federal Highway Administration (FHWA), TSIS-CORSIM has been used by FHWA for conducting research and applied by thousands of practitioners and researchers worldwide over the past 30 years, embodying a wealth of experience and maturity. Volume 4 of the Traffic Analysis Toolbox (CORSIM Application Guidelines) is available on the FHWA traffic analysis tools home page.
Examples of TSIS-CORSIM Applications
TSIS-CORSIM can model facilities and networks beyond the scope of the Highway Capacity Manual procedures, including:
 Urban street grid
 Spatial and temporal effects of congestion
 Interactions between freeways and urban streets
 Alternative intersections
 Routing (dynamic traffic assignment – DTA)
 Interruptions to traffic flow (e.g., rail crossing)
 Effects of incidents on traffic flow
 Work zones and lane closures on freeways and arterials
 Active Traffic Management analyses for arterials and freeways (e.g., ramp metering)
 Managed lanes including High Occupancy Vehicles (HOV) and High Occupancy Toll (HOT) lanes
 Signal Timing Optimization assessment and testing in conjunction with TRANSYT-7F (included in TSIS) and HCS
Microscopic Traffic Simulation
CORSIM is a microscopic traffic simulation tool, which replicates driver behavior and the interaction between vehicles individually in small time steps within a model network. In a microsimulation tool, many parameters are stochastic and realistic, and the tool can simulate the interaction between different network elements, such as urban arterials and freeways.
Monte Carlo Analysis is a risk management technique used to conduct a quantitative analysis of risks. The mathematical technique was developed in the 1940s by Stanislaw Ulam, a scientist working on nuclear weapons research, and is used to analyze the impact of risks on your project — in other words, if this risk occurs, how will it affect the schedule or the cost of the project? Monte Carlo gives you a range of possible outcomes and probabilities to allow you to consider the likelihood of different scenarios.
For example, let’s say you don’t know how long your project will take. You have a rough estimate of the duration of each project task. Using this, you develop a bestcase scenario (optimistic) and worstcase scenario (pessimistic) duration for each task.
You can then use Monte Carlo to analyze all the potential combinations and give you probabilities of when the project will complete.
The results would look something like this:
 2% chance of completing the project in 12 months (if every task finished by the optimistic timeline)
 15% chance of completion within 13 months
 55% chance of completion within 14 months
 95% chance of completion within 15 months
 100% chance of completion within 16 months (if everything takes as long as the pessimistic estimates)
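A schedule analysis along these lines can be sketched in a few lines of Python; the three-task plan and its triangular duration estimates below are invented for illustration, not taken from a real project:

```python
import random

# Hypothetical per-task estimates in months: (optimistic, most likely, pessimistic)
tasks = [(2, 3, 4), (3, 4, 6), (4, 5, 6)]

def simulate_completion(tasks, runs=50000, seed=3):
    """Sample each task's duration from a triangular distribution and
    sum them, once per run, to build a distribution of project lengths."""
    rng = random.Random(seed)
    return [
        sum(rng.triangular(lo, hi, mode) for lo, mode, hi in tasks)
        for _ in range(runs)
    ]

totals = simulate_completion(tasks)
for months in (12, 13, 14, 15, 16):
    pct = 100 * sum(t <= months for t in totals) / len(totals)
    print(f"{pct:5.1f}% chance of finishing within {months} months")
```

Reading the cumulative percentages off the simulated totals gives exactly the kind of table shown above.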
Using this information, you can now better estimate your timeline and plan your project.
Benefits of Monte Carlo analysis in project management
The primary benefits of using Monte Carlo analysis on your projects are:
 Provides an early indication of how likely you are to meet project milestones and deadlines
 Can be used to create a more realistic budget and schedule
 Predicts the likelihood of schedule and cost overruns
 Quantifies risks to assess impacts
 Provides objective data for decision making
Limitations of Monte Carlo analysis in project management
There are some challenges to using the Monte Carlo analysis. These include:
 You must provide three estimates for every activity or factor being analyzed
 The analysis is only as good as the estimates provided
 The Monte Carlo simulation shows the overall probability for the entire project or a large subset (such as a phase). It can’t be used to analyze individual activities or risks.
Definition: Monte Carlo Simulation is a mathematical technique that generates random variables for modelling risk or uncertainty of a certain system.
The random variables or inputs are modelled on the basis of probability distributions such as normal, log normal, etc. Different iterations or simulations are run for generating paths and the outcome is arrived at by using suitable numerical computations.
Monte Carlo Simulation is a well-suited method when a model has uncertain parameters or when a dynamic, complex system needs to be analysed. It is a probabilistic method for modelling risk in a system.
The method is used extensively in a wide variety of fields such as physical science, computational biology, statistics, artificial intelligence, and quantitative finance. It is pertinent to note that Monte Carlo Simulation provides a probabilistic estimate of the uncertainty in a model. It is never deterministic. However, given the uncertainty or risk ingrained in a system, it is a useful tool for approximation of reality.
Description: The Monte Carlo Simulation technique was introduced during World War II. Today, it is used extensively for modelling uncertain situations.
Although we have a profusion of information at our disposal, it is difficult to predict the future with absolute precision and accuracy. This can be attributed to the dynamic factors that can impact the outcome of a course of action. Monte Carlo Simulation enables us to see the possible outcomes of a decision, which can thereby help us make better decisions under uncertainty. Along with the outcomes, it can also enable the decision maker to see the probabilities of those outcomes.
Monte Carlo Simulation uses probability distributions to model stochastic, or random, variables. Different probability distributions, such as the normal, lognormal, uniform, and triangular, are used to model input variables. From the probability distributions of the input variables, different outcome paths are generated.
Compared to deterministic analysis, the Monte Carlo method provides a superior simulation of risk. It gives an idea of not only what outcome to expect but also the probability of occurrence of that outcome. It is also possible to model correlated input variables.
For instance, Monte Carlo Simulation can be used to compute the value at risk of a portfolio. This method tries to predict the worst return expected from a portfolio, given a certain confidence interval for a specified time period.
Normally, stock prices are believed to follow a Geometric Brownian Motion (GBM), which is a Markov process, meaning the price follows a random walk and its future value depends only on its current value.
The generalised form of the Geometric Brownian motion is:
dS = μS dt + σS dW
where S is the stock price, μ is the expected return, σ is the standard deviation of returns, and dW is a Wiener process increment.
The first term in the equation is called drift and the second is shock. This means the stock price is going to drift by the expected return. Shock is a product of standard deviation and random shock. Based on the model, we run a Monte Carlo Simulation to generate paths of simulated stock prices. Based on the outcome, we can compute the Value at Risk (VAR) of the stock. For a portfolio of many assets, we can generate correlated asset prices using Monte Carlo Simulation.
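A minimal sketch of that procedure in Python, using only the standard library: all parameter values below are illustrative, a single-step discretisation of GBM is assumed, and the VaR is read off as an empirical quantile of the simulated returns:

```python
import math
import random

def simulate_gbm_var(s0=100.0, mu=0.07, sigma=0.2, horizon=1/252,
                     n_paths=50000, confidence=0.95, seed=1):
    """One-step GBM Monte Carlo and the resulting Value at Risk (VaR)."""
    rng = random.Random(seed)
    returns = []
    for _ in range(n_paths):
        z = rng.gauss(0.0, 1.0)  # the random shock
        # discretised GBM: drift term plus shock term
        s1 = s0 * math.exp((mu - 0.5 * sigma**2) * horizon
                           + sigma * math.sqrt(horizon) * z)
        returns.append(s1 / s0 - 1.0)
    returns.sort()
    # VaR at 95%: the loss exceeded in only 5% of simulated scenarios
    return -returns[int((1 - confidence) * n_paths)]

print(f"1-day 95% VaR: {simulate_gbm_var():.2%} of position value")
```

For a multi-asset portfolio the same loop would draw correlated shocks (e.g., via a Cholesky factor of the correlation matrix) instead of independent ones.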
In further detail, these steps include:
Formulate the Flow Problem
The first step of the analysis process is to formulate the flow problem by seeking answers to the following questions:
 what is the objective of the analysis?
 what is the easiest way to obtain that objective?
 what geometry should be included?
 what are the freestream and/or operating conditions?
 what dimensionality of the spatial model is required? (1D, quasi-1D, 2D, axisymmetric, 3D)
 what should the flow domain look like?
 what temporal modeling is appropriate? (steady or unsteady)
 what is the nature of the viscous flow? (inviscid, laminar, turbulent)
 how should the gas be modeled?
Model the Geometry and Flow Domain
The body about which flow is to be analyzed requires modeling. This generally involves modeling the geometry with a CAD software package. Approximations and simplifications of the geometry may be required to allow an analysis with reasonable effort. Concurrently, decisions are made as to the extent of the finite flow domain in which the flow is to be simulated. Portions of the boundary of the flow domain coincide with the surfaces of the body geometry. Other surfaces are free boundaries over which flow enters or leaves. The geometry and flow domain are modeled in such a manner as to provide input for the grid generation. Thus, the modeling often takes into account the structure and topology of the grid generation.
Establish the Boundary and Initial Conditions
Since a finite flow domain is specified, physical conditions are required on the boundaries of the flow domain. The simulation generally starts from an initial solution and uses an iterative method to reach a final flow field solution.
Generate the Grid
The flow domain is discretized into a grid. Grid generation involves defining the structure and topology and then generating a grid on that topology. Currently all cases involve multiblock, structured grids; however, the grid blocks may be abutting, contiguous, non-contiguous, or overlapping. The grid should exhibit some minimal grid quality as defined by measures of orthogonality (especially at the boundaries), relative grid spacing (15% to 20% stretching is considered a maximum value), grid skewness, etc. Further, the maximum spacings should be consistent with the desired resolution of important features. The resolution of boundary layers requires the grid to be clustered in the direction normal to the surface, with the spacing of the first grid point off the wall well within the laminar sublayer of the boundary layer. For turbulent flows, the first point off the wall should exhibit a y+ value of less than 1.0.
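As an illustration of the wall-spacing requirement, the height of the first grid point for a target y+ can be estimated before generating the grid. The sketch below assumes a common flat-plate skin-friction correlation (Cf = 0.026 Re^(-1/7)) and air-like fluid properties; it is a starting estimate only, to be refined once the computed wall shear is known:

```python
import math

def first_cell_height(u_inf, length, y_plus=1.0, rho=1.225, mu=1.81e-5):
    """Estimate the first-cell wall spacing for a target y+ value.

    Uses an approximate flat-plate skin-friction correlation, so the
    result is a pre-grid-generation estimate, not an exact requirement.
    """
    re = rho * u_inf * length / mu        # Reynolds number based on length
    cf = 0.026 * re ** (-1.0 / 7.0)       # flat-plate skin-friction estimate
    tau_w = 0.5 * cf * rho * u_inf ** 2   # wall shear stress
    u_tau = math.sqrt(tau_w / rho)        # friction velocity
    return y_plus * mu / (rho * u_tau)    # wall spacing giving the target y+

# e.g., air at 30 m/s over a 1 m reference length, targeting y+ = 1
h = first_cell_height(u_inf=30.0, length=1.0)
print(f"first cell height = {h:.2e} m")
```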
Establish the Simulation Strategy
The strategy for performing the simulation involves determining such things as the use of space-marching or time-marching, the choice of turbulence or chemistry model, and the choice of algorithms.
Establish the Input Parameters and Files
A CFD code generally requires that an input data file be created listing the values of the input parameters consistent with the desired strategy. Further, a grid file containing the grid and boundary condition information is generally required. The files for the grid and initial flow solution need to be generated.
Perform the Simulation
The simulation is performed, with options for interactive or batch processing and for distributed processing.
Monitor the Simulation for Completion
As the simulation proceeds, the solution is monitored to determine whether a “converged” solution has been obtained; this is iterative convergence. Further discussion can be found on the page entitled Examining Iterative Convergence.
Post-Process the Simulation to Get the Results
Post-processing involves extracting the desired flow properties (thrust, lift, drag, etc.) from the computed flow field.
Make Comparisons of the Results
The computed flow properties are then compared to results from analytic, computational, or experimental studies to establish the validity of the computed results.
Repeat the Process to Examine Sensitivities
The sensitivity of the computed results should be examined to understand the possible differences in the accuracy of results and/or performance of the computation with respect to such things as:
 dimensionality
 flow conditions
 initial conditions
 marching strategy
 algorithms
 grid topology and density
 turbulence model
 chemistry model
 flux model
 artificial viscosity
 boundary conditions
 computer system
Further information can be found on the pages entitled Verification Assessment and Validation Assessment.
The word “simulation” implies an imitation of a real-life process in order to provide a lifelike experience in a controlled environment. It can be thought of as somewhere to learn from mistakes without doing any damage.
What is simulation-based training?
Simulation training is used as a tool to teach trainees the skills needed in the real world. It provides a lifelike, point-of-care learning experience and has been widely applied in fields such as aviation, the military, and healthcare.
Simulation-based training is an important part of the development and learning process of knowledge and skills.
The added value of simulation-based training
A well-constructed simulation allows trainees to answer the question, “If I do this, what happens?” It provides learners with an opportunity to test out different scenarios to see what works and to understand how they arrived at the right and wrong answers. This trial-and-error approach gives trainees the knowledge and confidence they need to apply their new skills in the real world.
The value of simulation training is further enhanced by following up with a debriefing and coaching session. With the help of video recordings, the training sessions can be analyzed, errors identified, successes marked, and emotions or feelings that influenced the trainees can be discussed. This is when the real learning takes place.
Five reasons why simulation training in healthcare works
The highest goal for every healthcare worker is to improve the quality of a patient’s life and to ensure patient safety. Simulation-based training can achieve that for the following reasons:
1. Practicing in a safe environment
In a simulation facility, preferably in a real clinical area where the staff normally works, you can test new tools and methods, focus on crisis resource management, and develop knowledge, skills, and attitudes in a safe and secure environment. It is important for the trainees to be assured that the simulations are confidential, and that video recordings are private.
2. Understanding human behavior
Simulating events shows us how we react in real-life situations and, in some sense, how unconscious processes work. Since it is not occurring in real life, it enables us to learn from our mistakes. As such, it can help prevent errors and optimize responses in (critical) situations. For example, noise, a bad smell, or other disturbances can be simulated, giving a good feel for how distraction works.
In an experimental study conducted by Jessica Jones and her colleagues, the researchers examined the impact of clinical interruptions on simulated trainee performance during central venous catheterization (CVC). They found that interruptions during the experimental condition resulted in a number of serious procedural errors. Performance, time taken, and the number of attempts were all significantly worse when the interruption occurred during a more complex part of the procedure.
3. Improving teamwork
Teamwork includes behaviors such as effective communication, collaboration, team leading, team building, and crisis resource management. Teamwork is not an automatic consequence of placing people together in the same room; it depends on a willingness to cooperate toward shared goals.
In healthcare, shared goals might include maintaining a patient’s health status and avoiding errors. In a simulated environment, teamwork can be studied and significantly improved by training specific teamwork skills.
4. Providing confidence
Simulation training provides an opportunity to apply theory and gain experience in skills or procedures, which gives trainees the confidence to manage similar real-life scenarios. Confidence is directly linked to competence. For example, the robust communication skills needed when interacting with patients require being able to handle the situation with confidence.
As trainees gain confidence, they are more comfortable in making their own decisions and exerting their autonomy. As well as confidence being essential for an individual, demonstrating confidence is important for the patients who have put a lot of trust in healthcare professionals.
5. Giving insight into trainees’ own behavior
In a simulation facility, you can record the training sessions on video. By expanding the video recording with an eye tracker or data acquisition system to measure eye gaze behavior or physiological responses of the trainees, even more insight into behavior can be gained. Immediately after the session, the video recordings can be shown with all other data during debriefing, giving a complete picture.
Using realistic scenarios
For years, simulation-based training has used mannequins, i.e., full-body patient simulators, for the safe training of technical medical skills. Adding an immersive room to simulation-based training enhances the lifelike experience even further. By projecting a scene on three surrounding walls, it actually feels like you are in that scene. The experience is completed by adding elements such as smell and temperature.
Transform a team of experts into an expert team
Clearly, the major role of simulation is to educate, train, and provide rehearsal for those actually preparing for or working in the delivery of healthcare. Simulation-based training can transform a team of experts into an expert team.
The user-friendly software suite Viso is an ideal recording solution for simulation training. With Viso, recordings are easily captured, and immediate viewing facilitates debriefing after each scenario.
ENVI-met software allows you to create sustainable living conditions in a constantly changing environment. With the support of its interactive modules, it is possible to specify surface types and building materials, as well as vegetation on walls and roofs, to scientifically analyze the impacts of design measures on the local environment and help mitigate factors such as urban heat stress.
The software model is used worldwide for environmental analysis and urban planning – from the tropics to polar regions. The software’s potential has been validated and its calculations verified in over 3,000 scientific publications and independent studies.
ENVI-met World Tour
Every day millions of people are exposed to changing climatic conditions in big cities. While sustainability is often perceived as protecting the environment at the cost of meeting people’s needs, this does not have to be the case – and we can prove it with scientifically verifiable data. On our World Tour we take you monthly around the globe to visit cities located on different continents with locally relevant challenges, conduct analysis and demonstrate possible solutions.
“ENVI-met provides us with an incredibly powerful tool for looking at a series of Environmental Quality and Outdoor Thermal Comfort criteria while we are in the design process. This was unthinkable not so long ago and will revolutionize our workflows. The integration via plugins with other tools like SketchUp puts these tools at our fingertips and gives us an agile and decentralized methodology perfectly suited to the realities of work life today.”
Steven Velegrinis, Director, Head of Masterplanning at AECOM in the Middle East

“We have been using the ENVI-met software for more than 10 years to study the impact of vegetation in urban environments. We chose ENVI-met because it is one of the few software packages that realistically simulates the most important climate processes – such as the interactions between soil, plants and atmosphere – in urban environments and thus analyzes thermal comfort in cities. ENVI-met has already contributed massively to simulation scenario modelling for São Paulo and is continuing to help us model complex scenarios, including topography, faster.”
Prof. Dr. Denise Helena Silva Duarte, Full Professor at the Faculty of Architecture and Urbanism, University of São Paulo, Brazil

“Werner Sobek Holding has been working with the ENVI-met simulation software since 2007. We use it for numerous projects at home and abroad, especially for construction projects with challenging climatic conditions. ENVI-met takes the interactions of wind, green spaces, solar radiation, buildings and many other factors into account – and helps us to design climate-friendly and sustainable cities.”
Prof. Dr.-Ing. Dr.-Ing. E. h. Dr. h. c. Werner Sobek, Werner Sobek Holding, Germany

ENVI-met UNI LAB Award 2022
Besides our mission to help build more liveable cities for people, we are committed to raising awareness about climate change amongst students worldwide. We have therefore launched an international competition to reward student projects which demonstrate innovative solutions against this threat using the microclimatic analysis tools of ENVI-met.
20.1 Generating Random Numbers
Simulation is an important (and big) topic for both statistics and for a variety of other areas where there is a need to introduce randomness. Sometimes you want to implement a statistical procedure that requires random number generation or sampling (i.e. Markov chain Monte Carlo, the bootstrap, random forests, bagging) and sometimes you want to simulate a system and random number generators can be used to model random inputs.
R comes with a set of pseudorandom number generators that allow you to simulate from well-known probability distributions like the Normal, Poisson, and binomial. Some example functions for probability distributions in R include:
 rnorm : generate random Normal variates with a given mean and standard deviation
 dnorm : evaluate the Normal probability density (with a given mean/SD) at a point (or vector of points)
 pnorm : evaluate the cumulative distribution function for a Normal distribution
 rpois : generate random Poisson variates with a given rate
For each probability distribution there are typically four functions available, whose names start with “r”, “d”, “p”, or “q”. The “r” function is the one that actually simulates random numbers from that distribution. The functions are prefixed with:
 d for density
 r for random number generation
 p for cumulative distribution
 q for quantile function (inverse cumulative distribution)
If you’re only interested in simulating random numbers, then you will likely only need the “r” functions and not the others. However, if you intend to simulate from arbitrary probability distributions using something like rejection sampling, then you will need the other functions too.
Probably the most common probability distribution to work with is the Normal distribution (also known as the Gaussian). Working with the Normal distribution requires using these four functions: rnorm, dnorm, pnorm, and qnorm.
Here we simulate standard Normal random numbers with mean 0 and standard deviation 1.
We can modify the default parameters to simulate numbers with mean 20 and standard deviation 2.
If you wanted to know what was the probability of a random Normal variable of being less than, say, 2, you could use the pnorm() function to do that calculation.
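A minimal sketch of these three calls (the simulated values will, of course, vary from run to run):

```r
## five standard Normal random numbers: mean 0, sd 1 (the defaults)
rnorm(5)

## five Normal random numbers with mean 20 and sd 2
rnorm(5, mean = 20, sd = 2)

## probability that a Normal(0, 1) variable is less than 2
pnorm(2)
## [1] 0.9772499
```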
You never know when that calculation will come in handy.
20.2 Setting the random number seed
When simulating any random numbers it is essential to set the random number seed. Setting the random number seed with set.seed() ensures reproducibility of the sequence of random numbers.
For example, I can generate 5 Normal random numbers with rnorm() .
Note that if I call rnorm() again I will of course get a different set of 5 random numbers.
If I want to reproduce the original set of random numbers, I can just reset the seed with set.seed() .
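For example (the seed value 1 is arbitrary; any fixed integer works):

```r
set.seed(1)
rnorm(5)      # five Normal random numbers

rnorm(5)      # calling again gives a different five numbers

set.seed(1)
rnorm(5)      # resetting the seed reproduces the first five exactly
```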
In general, you should always set the random number seed when conducting a simulation! Otherwise, you will not be able to reconstruct the exact numbers that you produced in an analysis.
It is possible to generate random numbers from other probability distributions like the Poisson. The Poisson distribution is commonly used to model data that come in the form of counts.
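A short sketch with rpois() (the rates are chosen arbitrarily for illustration):

```r
rpois(10, lambda = 1)    # ten counts with mean (rate) 1
rpois(10, lambda = 20)   # ten counts with mean 20

## the cumulative distribution works the same way as with the Normal
ppois(2, lambda = 2)     # P(a Poisson(2) count is <= 2)
## [1] 0.6766764
```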
20.3 Simulating a Linear Model
Simulating random numbers is useful but sometimes we want to simulate values that come from a specific model. For that we need to specify the model and then simulate from it using the functions described above.
Suppose we want to simulate from the following linear model
\[ y = \beta_0 + \beta_1 x + \varepsilon \]
where \(\varepsilon\sim\mathcal{N}(0,2^2)\). Assume \(x\sim\mathcal{N}(0,1^2)\), \(\beta_0=0.5\) and \(\beta_1=2\). The variable x might represent an important predictor of the outcome y. Here’s how we could do that in R. We can plot the results of the model simulation.
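A minimal sketch of this simulation (the seed and sample size are chosen arbitrarily):

```r
set.seed(20)
x <- rnorm(100)          # predictor: Normal(0, 1)
e <- rnorm(100, 0, 2)    # noise: Normal(0, 2^2)
y <- 0.5 + 2 * x + e     # the linear model
summary(y)
plot(x, y)               # visualise the simulated relationship
```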
What if we wanted to simulate a predictor variable x that is binary instead of having a Normal distribution? We can use the rbinom() function to simulate binary random variables.
Then we can proceed with the rest of the model as before.
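A sketch with a binary predictor (a success probability of 0.5 is assumed for illustration):

```r
set.seed(10)
x <- rbinom(100, size = 1, prob = 0.5)  # binary predictor: 0 or 1
e <- rnorm(100, 0, 2)                   # same noise model as before
y <- 0.5 + 2 * x + e
summary(y)
```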
We can also simulate from a generalized linear model where the errors are no longer from a Normal distribution but come from some other distribution. For example, suppose we want to simulate from a Poisson log-linear model where
\[ Y \sim Poisson(\mu) \]
\[ \log \mu = \beta_0 + \beta_1 x \]
and \(\beta_0=0.5\) and \(\beta_1=0.3\). We need to use the rpois() function for this.
Now we need to compute the log mean of the model and then exponentiate it to get the mean to pass to rpois() .
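Putting those two steps together (seed and sample size again arbitrary):

```r
set.seed(1)
x <- rnorm(100)
log.mu <- 0.5 + 0.3 * x       # the log of the mean
y <- rpois(100, exp(log.mu))  # exponentiate to get the Poisson mean
summary(y)
```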
You can build arbitrarily complex models like this by simulating more predictors or making transformations of those predictors (e.g. squaring, log transformations, etc.).
20.4 Random Sampling
The sample() function draws randomly from a specified set of (scalar) objects allowing you to sample from arbitrary distributions of numbers.
To sample more complicated things, such as rows from a data frame or a list, you can sample the indices into an object rather than the elements of the object itself.
Here’s how you can sample rows from a data frame.
Now we just need to create the index vector indexing the rows of the data frame and sample directly from that index vector.
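A sketch of both uses of sample(), using the built-in airquality data frame for the row-sampling example:

```r
set.seed(1)
sample(1:10, 4)                  # four values, without replacement
sample(letters, 5)               # works with any vector
sample(1:10, replace = TRUE)     # sampling with replacement

## sample rows of a data frame by sampling an index vector
idx <- seq_len(nrow(airquality))
airquality[sample(idx, 6), ]     # six randomly chosen rows
```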
Other more complex objects can be sampled in this way, as long as there’s a way to index the subelements of the object.
Norbert Neumann finds out about the metaverse and its potential for defence applications such as military training.
The idea of the metaverse excites some, while making others cringe. Most of us have seen films or video games where the idea of a synthetic world comes to life and participants can engage with each other free from the limits of reality. Like it or not, the metaverse has not eluded the attention of the defence industry.
What is the metaverse?
There is no consensus around the definition of the metaverse, and explaining it is a bit like trying to define the internet. It is not a singular technology or even a singular concept. But for argument’s sake, this article will use a workable definition provided by War on the Rocks writers Jennifer McCardle and Caitlin Dohrman: “A metaverse is a series of interconnected and immersive virtual worlds that afford their users a sense of presence via agency and influence.”
It is similar to virtual or augmented reality in that it provides a spectrum where the physical and digital can meet. In military settings, these platforms are mostly used for training. Commercial metaverse technology will not satisfy the military due to its lack of dexterity. But the price of the best available equipment on the market is eye-watering, and defence departments may adopt technologies in the early stages with the aim of improving them to fit the requirements of the armed forces.
“If there is a greater push for cheaper but higher quality haptics and cheaper things like motion capture, and investments are going into those capabilities, then it could end up yielding something important for the military in the future,” says James Crowley, business development director at immersive urban training expert 4GD. “This will both improve the technology for the civilian market, and will also make it cheaper and much more accessible.”
Defence in the metaverse
Militaries have been using different forms of rudimentary metaverses for training for years. The development of the first simulator networking (SIMNET), where different virtual worlds were stitched together, was started in the 1980s by the US military. SIMNET was a wide-area network with vehicle simulators and displays for real-time distributed combat simulations that included tanks, helicopters and planes on a virtual battlefield. In the past two decades, the fidelity and effectiveness of simulated training and synthetic environments have grown momentously.
Nick Brown, defence product marketing director at distributed computing company Hadean, says: “Perhaps we can categorise the defence metaverse as an ‘industrial metaverse’. While the more familiar metaverse is focused on entertainment and social interaction for their own sake, industrial metaverses use the same technology in order to enhance activity in the physical world.”
Brown believes there are three key aspects to the metaverse that make it so appealing to defence. “Firstly, the virtual worlds of the metaverse are getting increasingly better at connecting more people from disparate locations,” he says. “Secondly, they can be used to simulate physical events at a high fidelity such that greater knowledge about the ‘real world’ can be derived.”
The third point, Brown says, is the metaverse’s ability to offer an immersive experience that would be too expensive, logistically or economically unfeasible, or simply impossible to conduct physically. When these three aspects are combined and placed in the context of modern military acquisition, force development and training requirements, it is clear why defence is interested in the metaverse.
Crowley expects continued military interest in the metaverse in the future. “There’s an area for influence and an area where we need to be conscious of disparate groups coming together, and perhaps forming either very short-term collectives or longer-term political groupings, and we need to be fairly politically live to that as organisations,” he says.
He says if the metaverse changes the way individuals come together and the way information is exchanged, militaries and governments will need to be aligned to know how those interactions are taking place.
The challenges of the metaverse
It is also very important to consider the operational challenges of the metaverse. Building the computer power and infrastructure capable of running highfidelity virtual worlds across a range of devices and handling enormous amounts of data will be equally imperative for the metaverse to exist.
“Being able to create these kinds of simulations may be cutting edge, but they’re no good if you need a PC the size of a car seat next to you while you run it,” Brown explains. “Every participant, whose device may vary, needs to be able to view and act within the simulation.
“This is where distributed cloud and edge computing are set to change the game. If you can dynamically scale computation across cloud and edge environments, then this vastly lowers the requirements needed by devices to run the simulation.”
Crowley echoes the importance of computer power: “This is probably the most impactful part of it. Unless you can reduce your latency to a point where it doesn’t make people ill and feels realistic, and unless you can store, move and communicate data across various different people in different simulators, you’re not going to provide a practical training tool.”
Another major challenge in developing an open metaverse where militaries of different countries could engage with each other is the aspect of security.
“Ultimately, allied operations, for example among Nato countries, will be greatly enhanced by completing training simulations together, incentivising the creation of an open defence metaverse for them,” says Brown. He could envisage an open metaverse for defence where allied forces could all plug into the same virtual version of a given strategic context.
Crowley warns, however, that defence will have to be ruthless and extremely pragmatic when adopting certain technologies. Militaries need to ensure they do not take on new technologies just because they are the most recent on the market, but ensure they can provide the required results.
The development and employment of various metaverses and augmented realities are nothing new to defence, but the industry still has a long way to go before entering the metaverse for military purposes.
computer simulation, the use of a computer to represent the dynamic responses of one system by the behaviour of another system modeled after it. A simulation uses a mathematical description, or model, of a real system in the form of a computer program. This model is composed of equations that duplicate the functional relationships within the real system. When the program is run, the resulting mathematical dynamics form an analog of the behaviour of the real system, with the results presented in the form of data. A simulation can also take the form of a computergraphics image that represents dynamic processes in an animated sequence.
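As a minimal illustration of a model "composed of equations," a few lines of code can step through a simple physical law, here Newton's law of cooling, dT/dt = -k(T - T_env); all constants below are hypothetical:

```python
def simulate_cooling(t_initial=90.0, t_env=20.0, k=0.1, dt=0.5, steps=120):
    """Step the model equation dT/dt = -k (T - T_env) forward with Euler's method."""
    temps = [t_initial]
    t = t_initial
    for _ in range(steps):
        t += dt * (-k * (t - t_env))  # one Euler step of the model equation
        temps.append(t)
    return temps

temps = simulate_cooling()
print(f"after {0.5 * 120:.0f} minutes: {temps[-1]:.1f} degrees")
```

Running the program produces the "mathematical dynamics" the article describes: a sequence of temperatures mimicking how the real object would cool toward its surroundings.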
Computer simulations are used to study the dynamic behaviour of objects or systems in response to conditions that cannot be easily or safely applied in real life. For example, a nuclear blast can be described by a mathematical model that incorporates such variables as heat, velocity, and radioactive emissions. Additional mathematical equations can then be used to adjust the model to changes in certain variables, such as the amount of fissionable material that produced the blast. Simulations are especially useful in enabling observers to measure and predict how the functioning of an entire system may be affected by altering individual components within that system.
The simpler simulations performed by personal computers consist mainly of business models and geometric models. The former includes spreadsheet, financial, and statistical software programs that are used in business analysis and planning. Geometric models are used for numerous applications that require simple mathematical modeling of objects, such as buildings, industrial parts, and the molecular structures of chemicals. More advanced simulations, such as those that emulate weather patterns or the behaviour of macroeconomic systems, are usually performed on powerful workstations or supercomputers. In engineering, computer models of newly designed structures undergo simulated tests to determine their responses to stress and other physical variables. Simulations of river systems can be manipulated to determine the potential effects of dams and irrigation networks before any actual construction has taken place. Other examples of computer simulations include estimating the competitive responses of companies in a particular market and reproducing the movement and flight of space vehicles.
The Editors of Encyclopaedia Britannica. This article was most recently revised and updated by Erik Gregersen.
Norbert Neumann finds out about the metaverse and its potential for defence applications such as military training.
The idea of the metaverse compels excitement in some, while making others cringe. Most of us have seen films or video games where the idea of a synthetic world comes to life, where participants can engage with each other free from the limits of reality. Like it or not, the metaverse has not eluded the attention of the defence industry.
What is the metaverse?
There is no consensus around the definition of the metaverse, and explaining it is a bit like trying to define the internet. It is not a singular technology or even a singular concept. But for argument’s sake, this article will use a workable definition provided by War on the Rocks writers Jennifer McCardle and Caitlin Dohrman: “A metaverse is a series of interconnected and immersive virtual worlds that afford their users a sense of presence via agency and influence.”
It is similar to virtual or augmented reality in that it provides a spectrum where the physical and digital can meet. In military settings, these platforms are mostly used for training. Commercial metaverse technology will not satisfy the military due to its lack of dexterity. But the price of the best available equipment on the market is eye-watering, and defence departments may adopt technologies in the early stages with the aim of improving them to fit the requirements of the armed forces.
“If there is a greater push for cheaper but higher quality haptics and cheaper things like motion capture, and investments are going into those capabilities, then it could end up yielding something important for the military in the future,” says James Crowley, business development director at immersive urban training expert 4GD. “This will both improve the technology for the civilian market, and will also make it cheaper and much more accessible.”
Defence in the metaverse
Militaries have been using different forms of rudimentary metaverses for training for years. The US military began developing the first simulator networking (SIMNET) system, which stitched different virtual worlds together, in the 1980s. SIMNET was a wide area network of vehicle simulators and displays for real-time distributed combat simulations that included tanks, helicopters and planes on a virtual battlefield. In the past two decades, the fidelity and effectiveness of simulated training and synthetic environments have grown enormously.
Nick Brown, defence product marketing director at distributed computing company Hadean, says: “Perhaps we can categorise the defence metaverse as an ‘industrial metaverse’. While the more familiar metaverse is focused on entertainment and social interaction in itself, industrial metaverses use the same technology in order to enhance activity in the physical world.”
Brown believes there are three key aspects to the metaverse that make it so appealing to defence. “Firstly, the virtual worlds of the metaverse are getting increasingly better at connecting more people from disparate locations,” he says. “Secondly, they can be used to simulate physical events at a high fidelity such that greater knowledge about the ‘real world’ can be derived.”
The third point, Brown says, is the metaverse’s ability to offer an immersive experience that would be logistically unfeasible, prohibitively expensive or simply impossible to conduct physically. When these three aspects are combined and placed in the context of modern military acquisition, force development and training requirements, it is clear why defence is interested in the metaverse.
Crowley expects continued military interest in the metaverse in the future. “There’s an area for influence and an area where we need to be conscious of disparate groups coming together, and perhaps forming either very short-term collectives or longer-term political groupings, and we need to be fairly politically live to that as organisations,” he says.
He says if the metaverse changes the way individuals come together and the way information is exchanged, militaries and governments will need to be aligned to know how those interactions are taking place.
The challenges of the metaverse
It is also important to consider the operational challenges of the metaverse. Building the computing power and infrastructure capable of running high-fidelity virtual worlds across a range of devices, and of handling enormous amounts of data, will be imperative for the metaverse to exist.
“Being able to create these kinds of simulations may be cutting edge, but they’re no good if you need a PC the size of a car seat next to you while you run it,” Brown explains. “Every participant, whose device may vary, needs to be able to view and act within the simulation.
“This is where distributed cloud and edge computing are set to change the game. If you can dynamically scale computation across cloud and edge environments, then this vastly lowers the requirements needed by devices to run the simulation.”
Crowley echoes the importance of computing power: “This is probably the most impactful part of it. Unless you can reduce your latency to a point that it doesn’t make people ill and feels realistic, unless you can store, move and communicate data across various different people in different simulators, you’re not going to provide a practical training tool.”
Another major challenge in developing an open metaverse where militaries of different countries could engage with each other is the aspect of security.
“Ultimately, allied operations, for example among Nato countries, will be greatly enhanced by completing training simulations together, incentivising the creation of an open defence metaverse for them,” says Brown. He could envisage an open metaverse for defence where allied forces could all plug into the same virtual version of a given strategic context.
Crowley warns, however, that defence will have to be ruthless and extremely pragmatic when adopting certain technologies. Militaries need to ensure they do not take on new technologies just because they are the most recent on the market, but that they can provide the required results.
The development and employment of various metaverses and augmented realities are not new to defence, but the industry still has a long way to go before entering the metaverse for military purposes.
Statistical analysis is the collection and interpretation of data in order to uncover patterns and trends. It is a component of data analytics. Statistical analysis can be used in situations like gathering research interpretations, statistical modeling or designing surveys and studies. It can also be useful for business intelligence organizations that have to work with large data volumes.
In the context of business intelligence (BI), statistical analysis involves collecting and scrutinizing every data sample in a set of items from which samples can be drawn. A sample, in statistics, is a representative selection drawn from a total population.
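The sample/population distinction can be shown in a short sketch: a statistic computed on a representative random sample approximates the corresponding population value. The population below is synthetic, generated only for illustration.

```python
# A sketch of sampling from a population: the sample mean of a
# representative random sample approximates the population mean.
# The population here is synthetic and purely illustrative.
import random

random.seed(42)
population = [random.gauss(100, 15) for _ in range(100_000)]
sample = random.sample(population, 1_000)  # representative selection

population_mean = sum(population) / len(population)
sample_mean = sum(sample) / len(sample)
print(f"population mean: {population_mean:.2f}")
print(f"sample mean:     {sample_mean:.2f}")
```

The larger and more representative the sample, the closer its statistics track those of the total population, which is what makes analysis of samples useful when scrutinizing every item is impractical.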
The goal of statistical analysis is to identify trends. A retail business, for example, might use statistical analysis to find patterns in unstructured and semistructured customer data that can be used to create a more positive customer experience and increase sales.
Statistical analysis can be broken down into five discrete steps, as follows:
1. Describe the nature of the data to be analyzed.
2. Explore the relation of the data to the underlying population.
3. Create a model to summarize an understanding of how the data relates to the underlying population.
4. Prove (or disprove) the validity of the model.
5. Employ predictive analytics to run scenarios that will help guide future actions.
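The five steps can be sketched end to end on synthetic data. The simple linear model below is only a stand-in for whatever model an actual analysis calls for, and all values are illustrative.

```python
# A sketch of the five steps on synthetic data: describe, explore,
# model, validate, predict. A least-squares linear fit stands in for
# the general case.
import random

random.seed(0)
# Synthetic data: y depends roughly linearly on x, plus noise.
xs = [float(i) for i in range(50)]
ys = [2.0 * x + 5.0 + random.gauss(0, 3) for x in xs]

# Steps 1-2: describe the data and explore its relation to the
# underlying population via summary statistics.
n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
print(f"n={n}, mean(x)={mean_x:.1f}, mean(y)={mean_y:.1f}")

# Step 3: create a model (least-squares fit of y = slope*x + intercept).
sxx = sum((x - mean_x) ** 2 for x in xs)
sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
slope = sxy / sxx
intercept = mean_y - slope * mean_x

# Step 4: validate the model against the data (R-squared).
ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
ss_tot = sum((y - mean_y) ** 2 for y in ys)
r_squared = 1 - ss_res / ss_tot
print(f"fit: y = {slope:.2f}x + {intercept:.2f}, R^2 = {r_squared:.3f}")

# Step 5: use the model predictively for an unseen scenario.
print(f"predicted y at x=60: {slope * 60 + intercept:.2f}")
```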
Software for statistical analysis will typically allow users to do more complex analyses by including additional tools for the organization and interpretation of data sets, as well as for the presentation of that data. IBM SPSS Statistics, JMP and Stata are some examples of statistical analysis software. For example, IBM SPSS Statistics covers much of the analytical process, from data preparation and data management to analysis and reporting. The software includes a customizable interface, and even though it may be hard for a newcomer to use, it is relatively easy for those experienced in how it works.