Proceedings of the 1998 Winter Simulation Conference
D.J. Medeiros, E.F. Watson, J.S. Carson and M.S. Manivannan, eds.

INTRODUCTION TO THE ART AND SCIENCE OF SIMULATION

Robert E. Shannon
Industrial Engineering
Texas A&M University
College Station, Texas 77843-3131, U.S.A.

ABSTRACT

This introductory tutorial presents an overview of the process of conducting a simulation study of any discrete system. The basic viewpoint is that conducting such a study requires both art and science. Some of the issues addressed are how to get started, the steps to be followed, the issues to be faced at each step, the potential pitfalls occurring at each step, and the most common causes of failure.

1 INTRODUCTION

Simulation is one of the most powerful tools available to decision-makers responsible for the design and operation of complex processes and systems. It makes possible the study, analysis and evaluation of situations that would not otherwise be possible. In an increasingly competitive world, simulation has become an indispensable problem-solving methodology for engineers, designers and managers.

We will define simulation as the process of designing a model of a real system and conducting experiments with this model for the purpose of understanding the behavior of the system and/or evaluating various strategies for the operation of the system. Thus it is critical that the model be designed in such a way that the model behavior mimics the response behavior of the real system to events that take place over time.

The terms model and system are key components of our definition of simulation. By a model we mean a representation of a group of objects or ideas in some form other than that of the entity itself. By a system we mean a group or collection of interrelated elements that cooperate to accomplish some stated objective.
One of the real strengths of simulation is the fact that we can simulate systems that already exist as well as those that are capable of being brought into existence, i.e. those in the preliminary or planning stage of development. In this paper we will be discussing the art and science of moving a modeled system through time. Simulation is the next best thing to observing a real system in operation, since it allows us to study the situation even though we are unable to experiment directly with the real system, either because the system does not yet exist or because it is too difficult or expensive to directly manipulate it.

We consider simulation to include both the construction of the model and the experimental use of the model for studying a problem. Thus, we can think of simulation modeling as an experimental and applied methodology which seeks to:

♦ Describe the behavior of a system.
♦ Use the model to predict future behavior, i.e. the effects that will be produced by changes in the system or in its method of operation.

2 ADVANTAGES AND DISADVANTAGES

Simulation has a number of advantages over analytical or mathematical models for analyzing systems. First of all, the basic concept of simulation is easy to comprehend and hence often easier to justify to management or customers than some of the analytical models. In addition, a simulation model may be more credible because its behavior has been compared to that of the real system or because it requires fewer simplifying assumptions and hence captures more of the true characteristics of the system under study. Additional advantages include:

♦ We can test new designs, layouts, etc. without committing resources to their implementation.
♦ It can be used to explore new staffing policies, operating procedures, decision rules, organizational structures, information flows, etc. without disrupting ongoing operations.
♦ Simulation allows us to identify bottlenecks in information, material and product flows and test options for increasing the flow rates.
♦ It allows us to test hypotheses about how or why certain phenomena occur in the system.
♦ Simulation allows us to control time. Thus we can operate the system for several months or years of experience in a matter of seconds, allowing us to quickly look at long time horizons, or we can slow down phenomena for study.
♦ It allows us to gain insights into how a modeled system actually works and an understanding of which variables are most important to performance.
♦ Simulation's great strength is its ability to let us experiment with new and unfamiliar situations and to answer "what if" questions.

Even though simulation has many strengths and advantages, it is not without drawbacks. Among these are:

♦ Simulation modeling is an art that requires specialized training, and therefore skill levels of practitioners vary widely. The utility of the study depends upon the quality of the model and the skill of the modeler.
♦ Gathering highly reliable input data can be time consuming and the resulting data is sometimes highly questionable. Simulation cannot compensate for inadequate data or poor management decisions.
♦ Simulation models are input-output models, i.e. they yield the probable output of a system for a given input. They are therefore "run" rather than solved. They do not yield an optimal solution; rather they serve as a tool for analysis of the behavior of a system under conditions specified by the experimenter.

3 THE SIMULATION TEAM

Although some small simulation studies are conducted by an individual analyst, most are conducted by a team. This is due to the variety of skills required for the study of complex systems. First of all we need people who know and understand the system being studied. These are usually the designers, or the systems, manufacturing or process engineers.
But it may also be the managers, project leaders and/or operational personnel who will use the results. Secondly, we will have to have people who know how to formulate and model the system as well as program the model (simulation specialists). These members will also need data collection and statistical skills.

The first category of personnel must of necessity be internal, i.e. members of the organization for whom the study is being conducted. If we do not have people in the second category, we have several choices. We can: (a) hire people with the necessary skills, (b) contract the modeling to outside consultants, (c) train some of our own people, or (d) some combination of the above. If we choose to train some of our own people, it is important to note that data collection and statistical skills are probably more important than programming skills. The new simulation packages have made the computer skills required less important than they once were. It is important to realize that knowledge of a simulation software package does not make someone a simulationist any more than knowing FORTRAN makes one a mathematician.

As stated earlier, simulation is both an art and a science. The programming and statistical components are the science part, but the analysis and modeling components are the art. For example, questions such as how much detail to include in the model, how to represent a certain phenomenon, or what alternatives to evaluate are all part of the art.

How does one learn an art? Suppose you want to learn how to do oil portraiture. We could teach you the science of oil painting, such as perspective, shading and color mixture (computer programming, statistics and software packages). But you would still not be able to do creditable oil portraits. We could take you to the museums and show you the paintings of the Masters and point out the techniques used (studying other people's models). But you would still have only minimal ability to do acceptable portraits.
If you want to become proficient in an art you must take up the tools (palette, canvas, paint and brushes) and begin to paint. As you do so, you will begin to see what works and what doesn't. The same thing is true in simulation. You learn the art of simulation by simulating. Having a mentor to help show you the way can shorten the time and effort. This is why many companies choose to get started in simulation by using a combination of consultants and internal trainees.

4 A SIMULATION CONCEPT

Although there are several different types of simulation methodologies, we will limit our concerns to a stochastic, discrete, process-oriented approach. In such an approach, we model a particular system by studying the flow of entities that move through that system. Entities can be customers, job orders, particular parts, information packets, etc. An entity can be any object that enters the system, moves through a series of processes, and then leaves the system.

These entities can have individual characteristics, which we will call attributes. An attribute is associated with the specific, individual entity. Attributes might be such things as name, priority, due date, required CPU time, ailment, account number, etc. As the entity flows through the system, it will be processed by a series of resources. Resources are anything that the entity needs in order to be processed. For example, resources might be workers, material handling equipment, special tools, a hospital bed, access to the CPU, a machine, waiting or storage space, etc. Resources may be fixed in one location (e.g. a heavy machine, bank teller, hospital [...]

[...]

Depending upon the situation, there are several potential sources of data. These include:

♦ Historical records
♦ Observational data
♦ Similar systems
♦ Operator estimates
♦ Vendor's claims
♦ Designer estimates
♦ Theoretical considerations

Each of these sources has potential problems (Bratley et al. 1987, Pegden et al. 1995).
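The process-oriented view described in Section 4 — entities arriving, seizing a resource, and departing — can be sketched with nothing more than the Python standard library. This is our own illustrative toy, not code from the paper: a single-server FIFO queue in which each entity carries one attribute (its service demand), and interarrival and service times are assumed exponential with made-up means.

```python
import random

random.seed(42)  # fixed seed so the illustrative run is reproducible

ARRIVAL_MEAN = 1.0   # assumed mean interarrival time (exponential)
SERVICE_MEAN = 0.8   # assumed mean service time (exponential)
N_ENTITIES = 1000

clock = 0.0           # simulated time of the current arrival
server_free_at = 0.0  # the single resource: instant the server next goes idle
times_in_system = []  # one observation per entity

for job in range(N_ENTITIES):
    clock += random.expovariate(1.0 / ARRIVAL_MEAN)   # arrival event
    service = random.expovariate(1.0 / SERVICE_MEAN)  # entity attribute: service demand
    start = max(clock, server_free_at)                # queue if the resource is busy
    server_free_at = start + service                  # seize, hold, release the resource
    times_in_system.append(server_free_at - clock)    # departure time minus arrival time

print(f"mean time in system for {N_ENTITIES} entities: "
      f"{sum(times_in_system) / N_ENTITIES:.3f}")
```

In a commercial package the entity, attribute and resource constructs are provided for you; the point of the sketch is only that "moving a modeled system through time" reduces to generating events and advancing a clock.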
Even when we have copious data, it may not be relevant. For example, we may have sales data when we need demand data (sales do not show unmet demand). In other cases we may have only summary statistics (monthly when we need daily). When historical data does not exist (either because the system has not been built or it is not possible to gather it), the problem is even more difficult. In such cases we must estimate both the probability distribution and its parameters based upon theoretical considerations. For guidance see Law and Kelton (1991) and/or Pegden et al. (1995).

9 MODEL TRANSLATION AND SIMULATION LANGUAGES

We are finally ready to describe or program the model in a language acceptable to the computer to be used. Well over a hundred different simulation languages are commercially available. In addition there are literally hundreds of other locally developed languages in use in companies and universities. We have three generic choices, namely:

♦ Build the model in a general-purpose language.
♦ Build the model in a general-purpose simulation language.
♦ Use a special-purpose simulation package.

Although general-purpose programming languages such as FORTRAN, C++, Visual Basic, or Pascal can be used, they very seldom are anymore. Using one of the general or special purpose simulation packages has distinct advantages in terms of ease, efficiency and effectiveness of use. Some of the advantages of using a simulation package are:

♦ Reduction of the programming task.
♦ Provision of conceptual guidance.
♦ Increased flexibility when changing the model.
♦ Fewer programming errors.
♦ Automated gathering of statistics.

The goal of any simulation package is to close the gap between the user's conceptualization of the model and an executable form. Simulation packages divide themselves more or less into two categories, namely (a) general-purpose simulation languages and (b) special-purpose simulators.
In the first category are those which can solve almost any discrete simulation problem. Among these are such systems as ARENA®, AweSim®, GPSS/H™, Simscript II.5®, and Extend™. Some systems are used for the simulation of manufacturing and material handling problems; packages such as SimFactory, ProModel®, AutoMod™, Taylor II®, and Witness® fall into this category. Others are designed for conducting Business Process Reengineering studies; these include BPSimulator™, ProcessModel™, SIMPROCESS®, and Extend+BPR. Still others are for healthcare delivery (MedModel®) or communications networks (COMNET II.5). Since there are numerous software tutorials as well as demonstrations being given at the conference, we will not pursue further discussion of this subject.

10 VERIFICATION AND VALIDATION

The fact that a model compiles, executes, and produces numbers does not guarantee that it is correct or that the numbers being generated are representative of the system being modeled. After the development of the model is functionally complete, we should ask, "Does it work correctly?" There are two aspects to this question. First, does it operate the way the analyst intended? Second, does it behave the way the real-world system does or will? We find the answers to these questions through model verification and model validation. Verification seeks to show that the computer program performs as expected and intended. Validation, on the other hand, questions whether the model behavior validly represents that of the real-world system being simulated.

Verification is a rigorous debugging aimed at showing that the parts of the model work independently and together, using the right data at the right time. Even though the analyst thinks he or she knows what the model does and how it does it, anyone who has done any programming knows how easy it is to make errors. Throughout the verification process, we try to find and remove unintentional errors in the logic of the model.
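One common verification tactic — our own illustration, not a procedure prescribed by the paper — is to run the model under simplified assumptions for which an analytic answer is known and compare. For a single-server queue with exponential arrivals (rate λ) and exponential service (rate μ), queueing theory gives the mean time in system W = 1/(μ − λ); a correctly coded model should reproduce that value closely over a long run.

```python
import random

def simulate_queue(lam, mu, n_jobs, seed):
    """Single-server FIFO queue; returns the observed mean time in system."""
    rng = random.Random(seed)
    clock = free_at = total = 0.0
    for _ in range(n_jobs):
        clock += rng.expovariate(lam)          # exponential interarrival, rate lam
        start = max(clock, free_at)            # wait for the server if busy
        free_at = start + rng.expovariate(mu)  # exponential service, rate mu
        total += free_at - clock               # this job's time in system
    return total / n_jobs

lam, mu = 1.0, 2.0               # assumed arrival and service rates
analytic_W = 1.0 / (mu - lam)    # known M/M/1 result: mean time in system
simulated_W = simulate_queue(lam, mu, n_jobs=200_000, seed=1)
print(f"analytic W = {analytic_W:.3f}, simulated W = {simulated_W:.3f}")
```

A large discrepancy here would point to a logic error in the event handling, which is exactly the kind of unintentional error verification is meant to flush out.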
Validation, on the other hand, is the process of reaching an acceptable level of confidence that the inferences drawn are correct and applicable to the real-world system being represented. We are basically trying to answer the questions:

♦ Does the model adequately represent the real-world system?
♦ Is the model-generated behavioral data characteristic of the real-world system's behavioral data?
♦ Does the simulation model user have confidence in the model's results?

Through validation, we try to determine whether the simplifications and omissions of detail that we have knowingly and deliberately made in our model have introduced unacceptably large errors in the results. Validation is the process of determining that we have built the right model, whereas verification is designed to see if we have built the model right.

Model verification and validation are often difficult and time consuming but are extremely important to success. If the model and results are not accepted by the decision maker(s), then the effort has been wasted. It is absolutely mandatory that we be able to transfer our confidence in the model to the people who must use the results. Several excellent references are Pegden et al. (1995), Shannon (1981), Balci (1995) and Sargent (1996).

11 FINAL EXPERIMENTAL DESIGN

Now that we have developed the model, verified its correctness, and validated its adequacy, we again need to consider the final strategic and tactical plans for the execution of the experiment(s). We must update project constraints on time (schedule) and costs to reflect current conditions. Even though we have exercised careful planning and budget control from the beginning of the project, we must now take a hard, realistic look at what resources remain and how best to use them. We will also have learned more about the system in the process of designing, building, verifying and validating the model, which we will want to incorporate into the final plans.
The design of a computer simulation experiment is essentially a plan for purchasing a quantity of information that costs more or less depending upon how it is acquired. Design profoundly affects the effective use of experimental resources because:

♦ The design of the experiments largely determines the form of statistical analysis that can be applied to the data.
♦ The success of the experiments in answering the desired questions is largely a function of choosing the right design.

Simulation experiments are expensive both in terms of the analyst's time and labor and, in some cases, in terms of computer time. We must therefore carefully plan and design not only the model but also its use (Cook 1992, Hood and Welch 1992, Nelson 1992, Swain and Farrington 1994, Kelton 1995, Montgomery 1997).

12 EXPERIMENTATION AND ANALYSIS

Next we come to the actual running of the experiments and the analysis of the results. We now have to deal with issues such as how long to run the model (i.e. sample size), what to do about starting conditions, whether the output data are correlated, and what statistical tests are valid on the data. Before addressing these concerns, we must first ascertain whether the real system is terminating or non-terminating, because this characteristic determines the running and analysis methods to be used.

In a terminating system, the simulation ends when a critical event occurs. For example, a bank opens in the morning empty and idle; at the end of the day it is once again empty and idle. Another example would be a duel where one or both participants are killed or the weapons are empty. In other words, a system is considered to be terminating if the events driving the system naturally cease at some point in time. In a non-terminating system, no such critical event occurs and the system continues indefinitely (e.g. a telephone exchange or a hospital). A second system characteristic of interest is whether the system is stationary or non-stationary.
A system is stationary if the distribution of its response variable (and hence its mean and variance) does not change over time. With such systems we are generally concerned with finding the steady-state conditions, i.e. the value which is the limit of the response variable if the length of the simulation went to infinity without termination.

Whether the system is terminating or non-terminating, we must decide how long to run the simulation model, i.e. we must determine the sample size (Shannon 1975). But first we must precisely define what constitutes a single sample. There are several possibilities:

1. Each transaction considered a separate sample, e.g. the turn-around time for each job or the total time in the system for each customer.
2. A complete run of the model. This may entail considering the mean or average value of the response variable for the entire run as being a datum point. Multiple runs are referred to as replications.
3. A fixed time period in terms of simulated time. Thus a simulation may be run for n time periods, where a time period is an hour, a day or a month.
4. Transactions aggregated into groups of fixed size. For example, we might take the time in the system for each 25 jobs flowing through and then use the mean time of the group as a single datum point. This is usually referred to as batching.

If the system is a non-terminating, steady-state system we must be concerned with starting conditions, i.e. the status of the system when we begin to gather statistics or data. If we have an empty and idle system, i.e. no customers present, we may not have typical steady-state conditions. Therefore, we must either wait until the system reaches steady state before we begin to gather data (warm-up period), or we must start with more realistic starting conditions. Both of these approaches require that we be able to identify when the system has reached steady state (a difficult problem).
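The warm-up-plus-batching recipe above can be sketched as follows. This is a hedged illustration with arbitrary numbers of our own choosing (run length, warm-up cutoff, batch size): generate an autocorrelated time-in-system series from a toy single-server queue that starts empty and idle, discard the warm-up observations, and aggregate the remainder into batch means before computing an interval estimate.

```python
import random

random.seed(7)  # reproducible illustrative run

# Toy non-terminating queue model (assumed rates, illustration only):
# its time-in-system series is autocorrelated and, because the run starts
# from an empty-and-idle state, it carries start-up bias.
lam, mu = 1.0, 1.25
clock = free_at = 0.0
series = []
for _ in range(10_000):
    clock += random.expovariate(lam)
    start = max(clock, free_at)
    free_at = start + random.expovariate(mu)
    series.append(free_at - clock)

WARMUP = 2_000   # observations discarded as the warm-up period
BATCH = 500      # batch size chosen so batch means are roughly independent

steady = series[WARMUP:]
batches = [sum(steady[i:i + BATCH]) / BATCH
           for i in range(0, len(steady) - BATCH + 1, BATCH)]

mean = sum(batches) / len(batches)
var = sum((b - mean) ** 2 for b in batches) / (len(batches) - 1)
half_width = 1.96 * (var / len(batches)) ** 0.5  # rough normal-based 95% half-width
print(f"{len(batches)} batch means: estimate {mean:.2f} +/- {half_width:.2f}")
```

Choosing the warm-up cutoff and batch size well is itself a hard problem; the texts cited in this section discuss it at length.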
Finally, most statistical tests require that the data points in the sample be independent, i.e. not correlated. Since many of the systems we model are queueing networks, they do not meet this condition because their output is auto-correlated. Therefore, very often we must do something to ensure that the data points are independent before we can proceed with the analysis (Law and Kelton 1991, Banks et al. 1995, Kelton 1996).

13 IMPLEMENTATION & DOCUMENTATION

At this point we have completed all the steps for the design, programming and running of the model as well as the analysis of the results. The final two elements that must be included in any simulation study are implementation and documentation. No simulation study can be considered successfully completed until its results have been understood, accepted and used.

It is amazing how often modelers will spend a great deal of time trying to find the most elegant and efficient ways to model a system and then throw together a report to the sponsor or user at the last minute. If the results are not used, the project was a failure. If the results are not clearly, concisely and convincingly presented, they will not be used. The presentation of the results of the study is a critical part of the study and must be as carefully planned as any other part of the project (Sadowski 1993). Among the issues to be addressed in the documentation of the model and study are:

♦ Choosing an appropriate vocabulary (no technical jargon).
♦ Length and format of both written and verbal reports (short and concise).
♦ Timeliness.
♦ Addressing the issues that the sponsor or user consider important.

14 PATHS TO FAILURE

Not all simulation studies are unqualified successes. In fact, unfortunately, too many fail to deliver as promised. When we look at the reasons that projects fail, we find that failure is usually traceable to the same causes over and over.
Most failures occur on early projects, i.e. the first or second project undertaken by an organization. Many inexperienced modelers bite off more than they can chew. This is not surprising, since in most cases they have learned the science but not the art of simulation. This is why it is advisable to begin with small projects that are not of critical significance to the parent organization. Almost all other failures can be traced to one of the following:

♦ Failure to define a clear and achievable goal.
♦ Inadequate planning and underestimating the resources needed.
♦ Inadequate user participation.
♦ Writing code too soon, before the system is really understood.
♦ Inappropriate level of included detail (usually too much).
♦ Wrong mix of team skills (see Section 3 above).
♦ Lack of trust, confidence and backing by management.

15 PATHS TO SUCCESS

Just as we can learn from studying projects that fail, we can also learn from those that succeed (Musselman 1994, Robinson and Bhatia 1995). Obviously the first thing we want to do is avoid the errors of those who fail. Thus we want to:

♦ Have clearly defined and achievable goals.
♦ Be sure we have adequate resources available to successfully complete the project on time.
♦ Have management's support, and have it known to those who must cooperate with us in supplying information and data.
♦ Assure that we have all the necessary skills available for the duration of the project.
♦ Be sure that there are adequate communication channels to the sponsor and end users.
♦ Have a clear understanding with the sponsor and end users as to the scope and goals of the project as well as schedules.
♦ Have good documentation of all planning and modeling efforts.

16 SUMMING UP

Simulation provides cheap insurance and a cost-effective decision-making tool for managers. It allows us to minimize risks by letting us discover the right decisions before we make the wrong ones.

REFERENCES

Banks, J., J.S. Carson II, and B.L. Nelson. 1995. Discrete-Event Systems Simulation. 2nd ed. Prentice-Hall.
Banks, J. 1996. Software for Simulation. In Proceedings of the 1996 Winter Simulation Conference, ed. J.M. Charnes, D.J. Morrice, D.T. Brunner, and J.J. Swain.
Balci, O. 1995. In Proceedings of the 1995 Winter Simulation Conference, ed. C. Alexopoulos, K. Kang, W.R. Lilegdon, and D. Goldsman.
Bratley, P., B.L. Fox, and L.E. Schrage. 1987. A Guide to Simulation. 2nd ed. Springer-Verlag.
Cook, L.S. 1992. Factor Screening of Multiple Responses. In Proceedings of the 1992 Winter Simulation [...]