(Shubik, 1975, Chap. 1). In many instances, gaming and simulation are used interchangeably; however, there is a difference between the two. Games, by definition, include a human player. Simulations do not necessarily involve direct human interaction after an initial setup.
The uses of each type of game and simulation transcend the slight difference in definition.
1.Teaching games help instructors convey principles. In a business simulation, for example, players are placed in business situations with start-up capital and then earn profits or incur losses based on the decisions they make over time.
2.Training applications are among the most common. Students can experiment with the simulator to create situations that would be too dangerous or expensive to perform live. Examples include medical applications, where a medical student operates on a virtual patient, and test pilots flying combat situations that might prove deadly in the field. If the patient dies or the plane crashes, the simulation can easily be reset.
3.Operational games and simulators provide a double check of a given scenario. Participants can work through situations and look for flaws in their checklists. In a real emergency, there might be no opportunity to correct a deficient checklist.
(Shubik, 1975, Chap. 1).
4.Research fields use simulators to test reactions to hypothetical situations. The findings from the model are used to validate the hypothesis or to refine it. Research areas such as communications and organizational functioning, whether in business or in the military, fall under this topic.
5.Joseph Deasy (2002) and his team used Monte Carlo statistical methods to pinpoint the amount of radiation that was both necessary and safe in the treatment of certain cancers. The model is certainly far safer than exposure to actual radiation.
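The Monte Carlo idea behind such dosimetry work can be illustrated with a toy example. This is not Deasy's model; the surface dose, depth range, and attenuation coefficient below are invented, and the sketch shows only the core sample-and-average loop:

```python
import math
import random

def estimate_mean_dose(n_samples, seed=0):
    # Average a simple exponentially attenuated dose over random tissue
    # depths.  The 100-unit surface dose, 5 cm depth range, and 0.5/cm
    # attenuation coefficient are illustrative assumptions.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        depth_cm = rng.uniform(0.0, 5.0)          # sample a random depth
        total += 100.0 * math.exp(-0.5 * depth_cm)  # attenuated dose there
    return total / n_samples

mean_dose = estimate_mean_dose(20000)
```

With enough samples the estimate converges on the analytic mean of the toy dose model, which is the whole appeal of the method: the same loop works even when no closed form exists.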
6.Entertainment – no survey of gaming and simulation would be complete without entertainment games. There is hardly a young American today (of any age) who has not played on a Nintendo GameCube, Xbox, PlayStation, or Game Boy Advance, or visited a video arcade. There are almost no users of Microsoft Windows who haven't found the Solitaire game. Of course, the lost productivity and the loss of creative play by youths can be debated. That's for another paper, at another time!
Metaphors or fantasies might give your simulation a nice feel, but they must be realistic or you will lose your audience. Forcing quick decisions keeps trainees interested. A simulated negotiation, for example, teaches students when to wait and when to make offers. The more options available, the deeper the learning. If the trainee hits a wall too early, the game is over, and so is the learning. The simulation should therefore offer a rich number of scenarios and make it easy to return to the beginning when the trainee does hit a wall; you must be able to reset the game without pressing Ctrl+Alt+Delete to get out. Simulators cost a lot of money, but they give users the ability to customize the learning environment for the way they learn and the situations they need to practice.
(Eikenberry, 1999). An easy way to create metaphors that ease learning is to follow the 3C model.
Create – this is always a tough step in any endeavor, but you can help your creativity by first determining the part of a course you want to focus on. Once you've chosen it, write down the general parts of the topic you wish to cover. If you have small children or grandchildren, the next step may be easier: ask yourself how you would teach the topic to an 8- or 10-year-old. Write down all the ideas that come into your head, regardless of how silly they may seem, then go through the list and make associations.
Connect – this is a quick step. Compare each potential metaphor with the content of the topic and ask yourself: are the elements connected? How are they alike? How are they different?
Combine – determine how to weave the metaphor into your instructional flow. Can it become a theme for the class? Think about how to introduce it. Can you use more than one of the senses to reinforce it? Good luck!
According to Coughlan (1995), a special purpose simulator is distinguished from a simulation language or system by the three properties above. Most simulation systems are used only by trained experts; the special purpose simulator is designed to insulate the user from the complexities of the underlying system.
The main part of a special purpose simulator is the parameterized model of the system being studied.  The front end interface guides the user in the set-up of the parameters in a similar manner to many familiar software wizards. 
Once the model is run, the results are presented in report format. 
These models have been created for many industries and for many purposes.  They are targeted at users with almost no simulation experience.  These include manufacturing engineers and managers. 
One MS Windows based simulation package is SIMAN, a commercial spreadsheet, which uses BASIC as the underlying programming language.
The languages listed above (Araten, 1992) were developed for the purpose of solving complex mathematical problems. I personally wrote air-cooled heat exchanger simulations in the IBM MVS/TSO environment. Card deck images were created and read via the SYSIN statement of the batch environment using JCL. The output was sometimes hundreds of pages long. As far as a human-computer interface goes, if a card deck counts as an interface, then I guess we had one. Current students would not even know what the cards looked like. APL was another programming language with a strange interface: it used only mathematical symbols and required a special keyboard.
Microsaint was developed in 1985 to fill in the gaps in simulations for human based tasks.  According to its developers, Microsaint fills in the gap in commercial simulators by targeting industries that have underutilized simulation.  These industries include manufacturing, healthcare, process reengineering and human factors  (Drury, 1996).
Microsaint uses a graphical development system to ease development by users who are not computer savvy.  Its development methodology is called Task Network Modeling (TNM).  TNM allows for the design of the simulation in similar manner to creating a work flow diagram.  The entities on the diagram represent such things as people, patients, documents, or spare parts.  Arrows  are used to connect the entities to show their flow.  Mean times to complete a task are added to provide constraints.  Other conditions are then added to the diagram to determine what sequence tasks are performed and to produce output effects.  The simulator can be used on a stand-alone basis or be connected to a process data source for real-time updates (Barnes, 1996).
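A Task Network Modeling diagram of this kind boils down to tasks, mean times, and flow arrows. The Python sketch below shows that core idea; the task names and times are invented, not taken from any real Microsaint model:

```python
import random

# Nodes are tasks with mean completion times; "next" pointers play the role
# of the arrows in a TNM diagram.  All names and numbers are illustrative.
TASKS = {
    "register": {"mean_minutes": 5.0,  "next": "triage"},
    "triage":   {"mean_minutes": 10.0, "next": "treat"},
    "treat":    {"mean_minutes": 30.0, "next": None},
}

def run_entity(rng):
    """Walk one entity (e.g. a patient) through the network; return total minutes."""
    total, task = 0.0, "register"
    while task is not None:
        # Sample the task's duration from an exponential with the given mean.
        total += rng.expovariate(1.0 / TASKS[task]["mean_minutes"])
        task = TASKS[task]["next"]
    return total

rng = random.Random(42)
mean_time = sum(run_entity(rng) for _ in range(1000)) / 1000
```

Running many entities through the network and averaging their times is the essence of how such a model predicts throughput; real tools add branching, resources, and conditions on top of this skeleton.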
Simulation had its roots in equipment development. The use of MicroSaint expanded this realm into areas that had not previously benefited from the technology. Simulations can aid in deciding the optimal number of people to use in a given situation, whether an emergency or normal steady state. Answers to questions such as "How many people should be on a fire truck?" or "Can a pilot handle all situations by himself?" are now possible.
MicroSaint helped the military with the development of the Comanche helicopter. The Army wished to develop an attack/reconnaissance helicopter operated solely by the pilot. The simulation was developed with one key question to answer: "Can one person do it all?"
Four alternative cockpit designs were entered into the simulator.  Variables considered included auditory, visual, cognitive, and psychomotor loadings.  With the variables defined, experiments were performed to capture data on each cockpit design.  The data was fed into the simulator to predict workload over a wide number of conditions. 
The final recommendation based on the simulation was that a one person attack/reconnaissance helicopter was not feasible. 
Klein (1998) breaks the development of simulation into two basic types. In the monolithic design technique, all relevant information about the model is kept in the model itself. These systems are usually built from scratch and do not share the components that comprise them. The decline of this model occurred in the mid-1990s, when the Administration lowered Defense Department budgets and a need arose for greater efficiency in creating simulation software.
The distributed model breaks the total model into a set of sub-models. These sub-models can be interchanged between systems to lower total development costs. The High Level Architecture (HLA) was developed by the US Department of Defense and is an emerging IEEE simulation interoperability standard.
Under this standard, HLA defines the behavior of the overall distributed model, called a Federation, and its component pieces, called Federates. The infrastructure specification details how the Federates communicate through the Run-Time Infrastructure (RTI). Finally, the Object Model Template (OMT) defines the documentation process for Federations and Federates.
(Klein,1998).   Although HLA was developed for military use, Klein provides an example for commercial use.   The example is of a traffic control system.
Two methods are examined.  A static development model fixes its elements at the beginning of execution.  The parameters are either hard coded into the model or read in from a data source on start-up.  In contrast, a dynamic model can have its elements change during the actual simulation.   Past development of traffic flow models used Monolithic, static techniques.  Future design will utilize the flexibility of HLA for distributed dynamic simulation.
Primary information is used by the core of the model at runtime. It is the variables and the objects that are the basis of the model.  Secondary information is support information for such items as animation that are outside the model’s core. Under HLA, formerly static secondary information can be made dynamic.  
In the example above, Federate simulations for cars, pedestrians, and traffic lights are tied together through the HLA Run-Time Infrastructure.
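The federate idea can be caricatured in a few lines: instead of calling each other directly, federates exchange attribute updates through a shared run-time infrastructure. The class and attribute names below are invented for illustration; the real HLA RTI API is far richer:

```python
class ToyRTI:
    """Toy stand-in for the HLA Run-Time Infrastructure: a publish/subscribe bus."""

    def __init__(self):
        self.subscribers = {}                    # attribute name -> callbacks

    def subscribe(self, attribute, callback):
        self.subscribers.setdefault(attribute, []).append(callback)

    def publish(self, attribute, value):
        for callback in self.subscribers.get(attribute, []):
            callback(value)

class CarFederate:
    """One federate: a car that reacts to traffic-light updates via the bus."""

    def __init__(self, rti):
        self.speed = 30
        rti.subscribe("light_state", self.on_light)

    def on_light(self, state):
        self.speed = 0 if state == "red" else 30

rti = ToyRTI()
car = CarFederate(rti)
history = []
rti.publish("light_state", "red")     # a traffic-light federate announces red
history.append(car.speed)
rti.publish("light_state", "green")
history.append(car.speed)
```

Because the car federate only knows about the bus, the traffic-light model could be swapped for an entirely different implementation without touching the car — which is exactly the interoperability the distributed approach is after.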
The University of California San Diego, together with community clinics, used simulation to optimize operations at a local health clinic serving the poor. The clientele are of little means, and the doctors who serve the clinic are more concerned with the welfare of the patients than with enriching themselves. The clinics are usually run out of donated space in churches or motels (Alexopoulos, 2001).
To make each dollar count, the PIP used industrial engineering concepts to gather data on the operation. The data was analyzed and broken into functional relationships, which were then modeled in the Arena simulation language. There are five pieces to the model.
Critical input variables include the number of patients in the system, the number of providers, and the average time for each immunization.
Design of the interface is made as simple as possible as the staff is not highly computer literate.
The five stages of the model perform the following tasks:
1) Check-in – probability distributions are used to predict arrival times, deviations from appointments, etc. Health records are gathered and entered.
2) Waiting room/pre-exam – height, weight, and body temperature are taken before entry to the exam room.
3) Exam – the actual examination is performed. A referral may be given if needed.
4) Checkout and charting – patients leave the exam area. The exam activity is placed on the chart, health data is recorded, and documentation of future needs is created.
5) Post-checkout – the collected data is entered into the computer and processed. Preparations are made for the patient's next visit.
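The original study built this five-stage model in Arena; as a rough illustration only, the same flow can be sketched in a few lines of Python. The stage times below are invented exponential means, not the clinic's measured distributions:

```python
import random

# The five stages of the clinic model, each with an assumed mean duration
# in minutes (illustrative values, not the study's data).
STAGES = [
    ("check-in",      3.0),
    ("pre-exam",      5.0),
    ("exam",         15.0),
    ("checkout",      4.0),
    ("post-checkout", 2.0),
]

def simulate_visit(rng):
    """Return total minutes one patient spends across all five stages."""
    return sum(rng.expovariate(1.0 / mean) for _, mean in STAGES)

rng = random.Random(7)
visits = [simulate_visit(rng) for _ in range(2000)]
avg_visit = sum(visits) / len(visits)
```

Even this toy version makes the payoff visible: once the stages are parameterized, staff can ask "what if the exam takes five minutes longer?" without disrupting a single real patient.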
The goal of the design was to simulate the control of the messages between a satellite, a terminal and a terminal controller.  The purpose of the terminal controller (TC) is to control the satellite antennae, manage the communications network and monitor the networks and terminals.
The need for the simulator was precipitated by limited satellite time for testing of the messaging and control.  The project was undertaken with several goals.
1) Make it work.
2) Explore Ada 95.
3) Use third-party components to make it as cheap as possible.
Round 1 had no graphical user interface; one was added later.
   (Brooke, 2002)
Real-time computing systems control many aspects of our lives. Although many people are skeptical of computers, we depend on computer-controlled systems every day; transportation is one such area.
NJ Transit, PATH trains, and airport monorails are all part of the railroad system that relies on computer control. 
TRAINSET was developed by Cornell University using the C language.
The design utilizes one sub window for each running train. 
The design of the ACI controls five areas:
1)Establish a network connection and receive reports on the state of the railroad.
2)Process Commands to change switch block configurations, speed of trains, set new goal speeds, change direction.
3)Query state information about any train.
4)Provide fault tolerant control via a voting mechanism.
5)Provide utility programs for changing layouts, and general purpose timing information.
(Bukowski, 1997). Scenarios can be recreated in a virtual world that would be too unsafe and too costly to create in a real environment.
Engineers and architects can use the simulator to determine the speed, direction, and size of fires for any given design.  Evacuation plans can be updated or the building redesigned to provide safer escape routes.
The two main components of the simulator are the National Institute of Standards and Technology’s Consolidated Model of Fire and Smoke Transport and Berkeley’s 3-D Architectural Walkthrough software. 
CFAST is considered the world's most accurate fire simulation. The addition of the Walkthrough software provides an in-building view from the perspective of a person escaping a blaze.
As the model becomes more sophisticated, the designers hope to turn it into a training aid for fire departments and building occupants. 
The design of CFAST uses a series of differential equations to move gases and physical properties, such as temperature and pressure, through the portals into adjoining volumes.
The simulator uses the following parameters for its physical properties: 
Gas concentration for each type of gas such as oxygen, carbon dioxide, carbon monoxide
Raw fuel density which can burn
Combustion byproducts
Atmospheric pressure and temperature
Wall, ceiling, and floor temperatures
Volumes are represented by floor-to-ceiling height and the distance between the walls; a precise position in space is irrelevant for the calculations. This is different from the architectural model, which assumes an X-Y-Z coordinate for each room.
Similarly, the shape of the vent is entered, but the only other relevant factor that influences the flow of the gases and fire is whether the position of the vent is horizontal or vertical.  
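The zone-model idea — differential equations moving gas properties between volumes through vents — can be caricatured with a single pressure-driven flow between two rooms. The flow law and constants below are illustrative assumptions, not CFAST's actual equations:

```python
def simulate_two_rooms(p1, p2, vent_area, steps=1000, dt=0.01, k=1.0):
    """Euler-integrate a toy pressure-driven flow between two volumes."""
    for _ in range(steps):
        flow = k * vent_area * (p1 - p2)   # flow from room 1 into room 2
        p1 -= flow * dt                    # what leaves room 1...
        p2 += flow * dt                    # ...enters room 2 (conserved)
    return p1, p2

# Two rooms starting at slightly different pressures (Pa), joined by a vent.
p1, p2 = simulate_two_rooms(101500.0, 101300.0, vent_area=0.5)
```

The two pressures relax toward equilibrium while their sum stays constant — the same conservation-plus-exchange pattern CFAST applies to each gas species, temperature, and smoke between its zones.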
The Walkthru model contains detailed material information about the building area and is a good feeder system to the CFAST model.  Where CFAST contains the chemistry, burn properties, and ignition times, Walkthru provides the detailed geometric and material specifications that architects would need to evaluate strength of building supports and the like.
The most important piece of information that Walkthru provides to CFAST is the geometry of objects contained in a room. This information is fed into CFAST to provide obstacles to the flow of the fire.
The role of Walkthru is as the visualizer for the fire in progress.  It is designed to show the progress of the fire and smoke and give a visual representation of what can and cannot be seen.  Pictures of how Walkthru views appear can be found at:
http://www.cs.berkeley.edu/~bukowski/wkfire/
The simulation model is designed in six parts.  The crew participates in all six areas with the commander having final action power.  (Carlino, 1986)
1.Mission requirements – the requirements entered into the system and the environment perceived via sensors form the basis for all decision making. The underlying design concept is to give crew members the ability to recover from subsystem failures in real time and to assess outside threats.
2.Information processing – the main processor uses an algorithm to check the availability of the subsystems and to evaluate their importance in case of failure. A matrix of N subsystem columns by M user-operator rows is evaluated, with a weight given to each failure's importance.
3. Once a failure is evaluated, a message is sent to the operator with a recommended action.  The command chain then evaluates the recommendation and the final action is selected.
4. The action system is fed the corrective action and the matrix is reevaluated.  The actions are entered until the alarm state is removed.
5. Sensors are used to monitor ship performance and environmental factors such as outside threats or found targets.
6. Threat/ Targets – Once identified through the sensing devices, decisions on proper actions are fed into the action system.  Threats and targets can then be dealt with appropriately.
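The N-by-M weighting matrix of step 2 can be sketched as follows; the subsystems, operators, and weights are invented examples, not values from the actual system:

```python
# Rows are operators, columns are subsystems; each entry weights how
# critical that subsystem's failure is to that operator.  All names and
# numbers below are illustrative assumptions.
SUBSYSTEMS = ["radar", "engine", "comms"]      # N columns
OPERATORS = ["pilot", "copilot"]               # M rows
WEIGHTS = [
    [0.9, 1.0, 0.4],   # pilot
    [0.5, 0.8, 0.9],   # copilot
]

def rank_failure(failed_subsystem):
    """Return operators ordered by how critical this failure is to them."""
    j = SUBSYSTEMS.index(failed_subsystem)
    scored = [(WEIGHTS[i][j], op) for i, op in enumerate(OPERATORS)]
    return [op for _, op in sorted(scored, reverse=True)]

alert_order = rank_failure("comms")
```

Evaluating the matrix this way tells the information-processing step which operator's recommended action should lead for a given failure, before the command chain makes the final selection.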
S3E2S stands for Specification, Simulation and Synthesis of Embedded Electronic Systems. (Carro, 2000)
S3E2S is a CAD design environment to explore the best mix of processor architectures for a desired application. S3E2S minimizes costs by exploring existing processors for an application before the development of new ones. 
S3E2S is unique in its ability to model multiple domains with multiprocessor selection.  This allows compatibility over a wider range of hardware.  Time to find a base set of predefined chips is minimized.  The simulator is available to small and medium companies that do not have the resources to develop new chips from scratch for each new product.
Besides embedded code requirements, S3E2S also explores the design trade offs of cost, architecture, and power consumption.  It also finds a balance between chip and software based solutions.
(Corsi, 1998) STAR stands for the Simulator for Turbine-Alternator Real-Time Systems.  The power grid simulation provides the user the ability to see system responses under a wide variety of situations such as overload, equipment failure or power dips. 
The system provides a visual display of the system being simulated.  Set points and operating variables are fed in by the operator.  The model then processes these inputs and the output responses are then displayed.  These outputs are used as training aids for power plant operators and as aids to engineers.  The engineers use the output response to design for situations not available through live testing.
The main components of the system include:
 The real-time simulation unit – a signal computer to provide signal variation
 Data visualization unit – displays the current state of the grid.
 Signal conditioning unit – Provides interpretation of I/O board signals
 Dynamic Model unit – provides the software to simulate the turbines, grid, etc.
Communications include telephones, public address systems, pocket pagers, and visual pagers. Personnel include physicians, nurses, and technicians. Equipment includes emergency carts or cases with a variety of drugs. Mobility refers to elevators, doors, corridors, etc.
Detect phase – someone discovers that an emergency exists.
Dispatch phase – has two parts. First, the dispatch system must be alerted (usually by the person who discovers the emergency). Second, the system must alert the team members and direct them to the site of the emergency.
Deploy phase – specific medical personnel must travel to the patient treatment location, special supplies must be obtained and transported, and certain facilities must be prepared.
Deliver phase – the time period during which the members of the emergency team are treating the patient.
Disperse phase – occurs when the emergency is over; represents the time required to restore the initial settings (return material to its standby location).
The program has been done in a modular manner in order to permit the evaluation of a wide variety of hospital configurations and emergency mobilization approaches
The system was tested over a period of two years (1966-1967) and 229 cardiopulmonary arrests. Significant improvements have been reported. The system ran on an IBM 360/75 machine.
(Levine, 1969)
The operation panel is realized by a portable PC and operated by the crew according to the captain's orders. It handles data installation, working-state selection, voyage control, etc. The display panel is realized by another portable PC; the two panels mutually back each other up. The interface cabinet consists of an intelligent I/O board, high-speed RAM, and power modules; it handles real-time communication with the rest of the systems.
Motion modeling – underwater model, near-surface model, submergence and emergence maneuvering model, emergency maneuvering model
Navigation devices modeling – inertial navigation system, GPS receiver
Software modules – submarine motion module, navigation devices simulation module, sea environment module, information communication module, image display module, module of PID controller or rudder
The simulation can not only emulate the motion characteristics of the specific submarine but also train crews for emergencies. Video and audio effects provide VR surroundings for the crew. The simulator can be used as a drill device for training and as a means of studying the maneuvering performance of a submarine.
(Liu et al., 1998)
Harpoon missiles are used by the S-3B aircraft – a twin-jet, carrier-based antisubmarine warfare aircraft carrying a crew of four: pilot, copilot, tactical coordinator, and sensor operator. It was built by the Lockheed California Company.
HACLCS provides the means for preparing and launching the missiles. It includes the Harpoon Aircraft Command and Launch Control (HACLC), two Control Distribution Boxes (CDB), two Fire Detection Control Units (FDCU), and two missile umbilical cables.
Computer mode is highly automated using the capabilities of both the aircraft computer and the HACLCS. LOS provides a backup launch capability in the event of degraded equipment performance
(Leonard et al., 1983)
Abnormal mode indicates a hung missile, a digital fault, or a missile fire. Normal indicates a missile ready to fire.
The system simulates the various capabilities to replicate exactly the real-life behavior of the Harpoon missile. The operator has control of such parameters as mission target, time to launch, arming, and launch.
The mission can also be rerun if necessary.
(Leonard et al., 1983)
ASIA – simple agent-based simulation software, implemented in Java. Applications for economic and environmental studies including the international greenhouse gas (GHG) emission trading.
Social layer – describes the basic role of agents in the society. Central, Participant and Watcher agents were implemented. Central agents create, register and initiate Participant and Watcher agents.
A prototype for the greenhouse gas emission trading under the Kyoto protocol was developed.
Gaming simulations with human players in an environment similar to the agents’ environment are expected to help in constructing plausible behavior models and extracting essential dynamics. The gaming simulations are planned to be executed at several universities.
(Mizuta et al., 2002)
MICS was designed to replace manual dispatch procedures and to improve the effectiveness of the Department's operations. MICS is built around dual PDP-11/45s, with fallback support from dual Intel 8080 microprocessors.
It processes alarms received by telephone, new electronic street boxes and older mechanical street boxes.
FDNY required an emergency simulator to be included in the system. The simulator consists of an offline scenario generator, an online load generator module, and a performance monitor module. FORTRAN was used to develop the simulator.
Alarm receipt – the system receives the alarm, from one of the possible sources.
Decision dispatch – an automatic determination is made regarding the appropriate response
Display/Fallback – visual aid to the dispatching process or automated fallback system. Provides a geographic map of the coverage area displaying the availability of the resources
Notification – notifies the resources about the emergency
Status monitoring – monitors the status of both the units and incidents
Management – a variety of functions that are performed as support to the dispatching process – information retrieval, data recording, message switching, load generation, performance monitoring etc.
The system was tested using various simulations. Some hardware deficiencies were found in memory management and software overhead. The environmental simulator served as the vehicle to ascertain the system acceptance criteria for MICS; it was used to drive the system under peak alarm conditions (3 alarms per minute). Results were satisfactory.
(Mohan et al., 1976)
ACT is a network of powered horizontal and vertical monorail conveyors which permits the movement of materials and supplies from the centralized supply processing and distribution center to the various decentralized areas throughout the hospital and vice versa
Factors that need to be studied include: the mix and numbers of carts (nine types of carts are to be employed, each with a different purpose); the number of transporters; and manpower, since a significant number of personnel will handle the system. There will also be interactions between the system and people from other departments (such as food service staff accessing the system to transport food).
An optimal schedule would have to satisfy all delivery demands, minimize capital investment, provide staffing patterns for every unit, minimize travel time, etc.
(Ross et al., 1978)
Concept 1: Events can be defined as triggering or triggered.
Concept 2: Use this list in order to monitor the flow of carts in the simulation. This requires a priori knowledge of the precedence relationships for each event within the system.
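Concepts 1 and 2 amount to a precedence table that lets each triggering event schedule its triggered successors. A minimal sketch, with invented cart events:

```python
# A triggering event's successors are looked up in a precedence table known
# a priori; processing an event schedules its triggered events.  The cart
# event names below are invented examples.
PRECEDENCE = {
    "cart_loaded":     ["cart_dispatched"],
    "cart_dispatched": ["cart_arrived"],
    "cart_arrived":    [],
}

def run_events(initial):
    """Process events in arrival order, expanding each via the table."""
    pending, log = [initial], []
    while pending:
        event = pending.pop(0)
        log.append(event)
        pending.extend(PRECEDENCE[event])   # triggered events join the queue
    return log

trace = run_events("cart_loaded")
```

The resulting trace is exactly the ordered flow of carts that Concept 2 says the simulation must monitor, which is why the precedence relationships have to be known up front.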
WIDES is a FORTRAN- and GASP-based, event-oriented simulation language. It comprises more than fifty subroutines used as model building blocks in modeling the ACT system.
Database – contains the system input: user demands for materials (scheduled or unscheduled)
Scheduler – a program written to assist in scheduling deliveries
Simulation program – two modules: the initiation module (initializes system variables and parameters) and the execution module. Output – messages are continuously written to the CRT, plus three types of hard-copy output: WIDES trace, ACT movement table, and ACT transportation table.
Through what the simulation demonstrated, an erroneous operational plan was averted from being implemented in the new facility. The simulation proved an invaluable educational tool for all those involved with the ACT system.
(Ross et al., 1978)
Some common application areas: modeling manufacturing processes, such as production lines, to examine resource utilization; modeling transportation systems to examine scheduling and resource requirements; modeling service systems; modeling training systems; and modeling human operator performance and interaction under changing conditions.
Task mean time – the average time required to complete a task, once it has begun executing
Conditions – sometimes there are situations when a task cannot start until certain conditions are met
Current state of the system might change when a task begins or ends
Three decision types are provided for task sequencing: probabilistic, multiple, and tactical.
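The three decision types can be sketched as three small routing functions; the branch names and rules below are invented examples, not Microsaint syntax:

```python
import random

def probabilistic_branch(rng):
    """Probabilistic: pick one successor at random with fixed probabilities."""
    return "inspect" if rng.random() < 0.2 else "ship"

def multiple_branch():
    """Multiple: start every successor task (parallel follow-ups)."""
    return ["log", "notify"]

def tactical_branch(queue_length):
    """Tactical: choose the successor by a rule on the current system state."""
    return "open_second_line" if queue_length > 10 else "continue"

rng = random.Random(3)
choices = [probabilistic_branch(rng) for _ in range(1000)]
```

Probabilistic branches reproduce observed frequencies, multiple branches fan work out in parallel, and tactical branches let the model react to its own state — which is what makes the task-network approach expressive enough for the application areas above.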
One player in the manufacturing industry used the optimization feature provided by Microsaint to increase efficiency. One player in the health care field liked the simulation of patient flow from ambulatory surgery, to surgery, to recovery, and back.
(Schunk, 2002)
Communication is key to the successful operation of business, military, and health care systems. In general, two types of communication are sought: dynamic data exchange directly between simulations during runs, and data sharing between simulation runs through a central repository.
COM Services was developed using the Component Object Model.
CATT – a technical approach for providing mission training for helicopter pilots on the dynamic conditions and alternative courses of action. The user would be able to access a GUI component that would communicate through COM Services with the Micro Saint simulations.
CART – developed under the Air Force Research Laboratory. The addition of an adaptive simulation interoperability environment allowed CART models to communicate with other simulations.
(Schunk et al., 2001)
GPSS greatly eases the task of building computer models for certain types of discrete-event simulations. It lends itself particularly to modeling systems in which discrete units of traffic compete for scarce resources. GPSS has been applied to the modeling of manufacturing systems, communication systems, computer systems, transportation systems, and health care systems. It has also been used in chemical engineering, mining engineering, and cancer research.
Only seven statements are required to model a one-line, one-server queuing system in GPSS. Beginners can learn quickly how to use it.
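For comparison, here is a hedged Python equivalent of that classic one-line, one-server GPSS model (GENERATE, QUEUE, SEIZE, DEPART, ADVANCE, RELEASE, TERMINATE); the arrival and service means below are illustrative, not from any particular GPSS example:

```python
import random

def mm1_wait_times(n_customers, arrival_mean, service_mean, seed=0):
    """Simulate a FIFO single-server queue; return each customer's wait."""
    rng = random.Random(seed)
    t_arrive, server_free, waits = 0.0, 0.0, []
    for _ in range(n_customers):
        t_arrive += rng.expovariate(1.0 / arrival_mean)        # GENERATE
        start = max(t_arrive, server_free)                     # QUEUE / SEIZE
        waits.append(start - t_arrive)                         # DEPART
        server_free = start + rng.expovariate(1.0 / service_mean)  # ADVANCE / RELEASE
    return waits                                               # TERMINATE

waits = mm1_wait_times(5000, arrival_mean=10.0, service_mean=6.0)
avg_wait = sum(waits) / len(waits)
```

The GPSS version expresses the same logic declaratively, one block per line, which is why beginners can model a working queue almost immediately.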
Can be used on a variety of systems: MicroVAX I, MicroVAX 2, MicroVAX 3, Apollo, Optimum 5/10, Optimum V, IRIS, IRIS Turbo, Sun 3, IBM PC etc.
GPSS-FORTRAN, APL-GPSS, and PL/I-GPSS all have GPSS embedded.
(Schriber, 1988)
Recent implementations are much faster than earlier ones.
Recent implementations have GPSS embedded, so there is no need to call HELP routines.
GPSS is trivial to learn only to a superficial depth. Real mastery of GPSS requires considerable study.
Misconceptions about the lack of power of GPSS come from people with an insufficient grasp of the language.
Early versions were batch oriented. Current versions are designed for both interactive and batch use.
(Schriber 1988)
There are two broad classes of simulation practitioners: simulation software developers (research model) and problem solvers who use simulation (practice model).
Both medical education and simulation education require the student to perform certain tasks, after acquiring an extensive knowledge (the patient in the case of the doctor, or the system to be modeled and simulated in the case of the engineer).
Basic coursework – foundations of decision modeling. These courses should focus on applied problem solving using the models rather than derivations. Additional coursework – all students should have enough coverage to feel comfortable dealing with non-technical management in businesses and other organizations
(Seila, 2000)
Service processing has always been a difficult problem to solve, and it is of overwhelming importance in certain service sectors such as hospitals or the military. Queuing theory has emerged as a useful decision-making technique; often, problem analysis has required simulation modeling.
Order requests arrive randomly in batches, averaging 1.4584 per day. The total service process comprises several service activities that must be performed in sequence. Average processing time is 45.4 minutes for pre-USPFO orders and 80 minutes for post-USPFO orders. A single line forms for all orders waiting for processing, served first-come, first-served.
Five models were developed. The most successful was model 5, which introduced additional clerks when the number of orders in the queue reached a particular level. Manipulating the queue size and the number of clerks was effective in reducing the overall processing time of the orders.
(Shimshak et al., 1983)
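The threshold policy behind model 5 can be sketched as a small event-driven queue simulation. All rates, service times, and threshold values below are illustrative stand-ins, not the figures from the Shimshak study:

```python
import heapq
import random

def simulate(arrival_rate, service_time, base_clerks, extra_threshold,
             extra_clerks, horizon):
    # Event-driven FCFS queue. Extra clerks become active whenever the
    # queue length exceeds `extra_threshold` -- the idea behind model 5.
    rng = random.Random(42)
    events = [(rng.expovariate(arrival_rate), "arrive")]
    queue, busy, waits = [], 0, []
    while events:
        t, kind = heapq.heappop(events)
        if t > horizon:
            break
        if kind == "arrive":
            queue.append(t)  # remember arrival time for wait statistics
            heapq.heappush(events, (t + rng.expovariate(arrival_rate), "arrive"))
        else:
            busy -= 1
        clerks = base_clerks + (extra_clerks if len(queue) > extra_threshold else 0)
        while busy < clerks and queue:
            waits.append(t - queue.pop(0))  # first-come, first-served
            busy += 1
            heapq.heappush(events, (t + rng.expovariate(1.0 / service_time), "depart"))
    return sum(waits) / len(waits) if waits else 0.0
```

Running the model once with the threshold effectively disabled and once with a low threshold shows the effect the study reports: activating extra clerks when the queue builds up cuts the average wait.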
Automatic Support Detachment (ASD) provides the development and support of all telecommunication systems for the HQ Department of the Army. The current system was designed to handle 1500 messages per hour with an average message size of 2000 characters, operating on a round the clock basis. The proposed improved system would handle 6000 messages per hour.
Three models were developed, representing the Front End Control, Message Processing and Staff Service Center Subsystems. Applications would fall in eight categories: control programs, I/O, video display, message processing, table subsystem, data management subsystem, offline programs, COM subsystem.
The language could model multi-CPU systems, is easily modifiable, facilitates the linking of several modules to form one conglomerate system, and allows tracing of all of the simulation activities. The objectives of the modeling effort were to determine the effects of hardware conversion, system design and software modification.
A flexible interface between the data collection effort and the PCTCS model was designed. User-written simulation reports provide a clear picture of the “system life” of the simulated messages. The PCTCS model will be used to evaluate design alternatives and proposed system enhancements.
(Sprung et al., 1976)
Regular users generally don’t like change. When simulation is employed to solve problems, the users are generally managers or industrial engineers, not the providers to whom the solutions are routinely applied. Without a comprehensive introduction to the tool and its potential, non-users view simulation-based recommendations as “black-box” answers to complex problems.
To gain acceptance of any solution, regardless of its source, it is imperative that every member of the affected group be involved in the decision-making process. The more complex the problem, the more critical management support and commitment become.
There is considerable resistance to the dehumanizing nature of time and motion analysis. Most workers in general, and health care providers in particular, regard such things as treatment-time evaluation and standardization as unreasonable and unrealistic.
The healthcare environment is far more complex than any industrial environment. The idea is to be as flexible as possible and to avoid being imprisoned by your own methods. Take advantage of the opportunities offered by opposing views by evaluating the model’s sensitivity to its inputs.
(Lowery et al., 1994)
Many studies that were not accepted, or whose implementations failed, ended up in this position as a direct result of poor procedures. Many of these studies held great promise and could have produced significant contributions had they been structured correctly.
Simulation occupies a position of prominence with respect to the potential for analytical disaster. Firm objectives must be established, pertinent questions must be identified and answered, specific measures of performance must be singled out, and the focus should remain on the problem at hand.
Many other barriers defy neat categorization; what they have in common is that they fall into two distinct groups: barriers that are inherent and barriers that are generated by the analyst.
The only rule that can be applied to these barriers is to avoid building them in the first place. Instead, the analyst should focus on bringing everyone to a common denominator, making everyone speak the same language.
(Lowery et al., 1994)
Simulation models emphasize the direct representation of the structure and logic of a system as opposed to abstracting the system into a mathematical form. The availability of system descriptions and data influences the choice of simulation model parameters, as well as which objects and which of their attributes can be included in the model.
Alternatives can be assessed without the fear that negative consequences will damage day-to-day operations as would be the case if experiments were conducted directly on existing, operating systems.
Variation has to do with the reality that no system does the same activity in exactly the same way or in the same amount of time always. Variation may be represented by the second central moment of a statistical distribution, the variance. Variation may also arise from decision rules that change processing procedures based on what a system is currently doing, or because of the characteristics of the patient receiving care.
(Standridge, 1999)
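The second central moment mentioned above can be shown with a minimal variance function; the two sets of treatment times are made up for illustration and share the same mean:

```python
def variance(xs):
    # Second central moment: mean squared deviation from the mean.
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# Two sets of treatment times (minutes) with the same mean of 10,
# but very different variation:
steady  = [10, 10, 11, 9, 10]
erratic = [2, 18, 10, 1, 19]
```

A system driven by the `erratic` times behaves very differently from one driven by the `steady` times, even though both have the same average, which is exactly why variation matters in simulation models.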
Public policy applications have to do with evaluating strategies for delivering health care that are implemented in state or national policies. Example – a simulation model for projecting the number of physicians, nurse practitioners and physician’s assistants in Indiana, from 1975 to 2000 as well as the demand for primary health care. Example – a family therapy process called Brief Systems Family Therapy (BSFT). Six checkpoints in this process were specified: problem formation, solution formation, enactment formation, hypothesis formation, task intervention formation and follow-up.
Determining the level of capital expenditure for equipment needed to effectively provide patient care is an important application area for simulation in health care delivery. Example – a simulator for helping to establish the resource requirements of a small animal veterinary practice.
There are many applications of simulation to operational policies of health care providers. Example – an application of simulation to emergency room operating procedures. At issue was the excessive length of time non-urgent patients waited for care in an emergency room at a non-profit hospital.
(Standridge, 1999)
Internet-based simulation is difficult to learn, and it is too easy to make mistakes that have disastrous consequences. Mistakes such as faulty use of pointers are difficult to find. There is no inherent mechanism for describing parallelism, and Internet debugging tools are simulation-unaware, i.e. they operate at a level far below what would be convenient and necessary for most simulations.
HLA is a simulation interoperability standard currently being developed by the US Department of Defense. The architecture is defined by: rules which govern the behavior of a distributed simulation (federation) and the individual distributed components (federates); an interface specification which defines the interface between each federate and the Runtime Infrastructure (RTI); an Object Model Template (OMT) which provides the framework for defining federations and federates.
HLA defines a two part interface which federates are required to use for communicating with the RTI. It is based on the ambassador paradigm. A federate communicates with the RTI using its RTI ambassador. Conversely, the RTI communicates with a federate via the federate’s ambassador. From a programmer’s point of view, the ambassadors are objects and the communication is done by calling methods of these objects.
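The two-part ambassador interface can be sketched as a pair of objects that call each other's methods. The class and method names below only mimic the HLA flavor; the real RTI interface specification defines many more services:

```python
class RTIAmbassador:
    # Federate -> RTI direction: the federate calls methods on this
    # object to talk to the RTI.
    def __init__(self):
        self.federates = []

    def update_attribute_values(self, obj_name, attributes):
        # The RTI forwards the update to every interested federate by
        # calling back through that federate's own ambassador.
        for fed_amb in self.federates:
            fed_amb.reflect_attribute_values(obj_name, attributes)

class FederateAmbassador:
    # RTI -> federate direction: callbacks invoked by the RTI.
    def __init__(self):
        self.received = []

    def reflect_attribute_values(self, obj_name, attributes):
        self.received.append((obj_name, attributes))

rti = RTIAmbassador()
fed = FederateAmbassador()
rti.federates.append(fed)
rti.update_attribute_values("Tank-1", {"position": (3, 4)})
```

The design choice shown here is the key point: communication in both directions is ordinary method invocation on an ambassador object, rather than explicit message passing.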
Being a member of a distributed simulation imposes some general problems that the stand-alone simulations do not have to deal with: synchronization, data exchange, and data representation.
(Strassburger et al., 1998)
This is the most obvious solution. If the source code of a tool is available and well documented, this is the most straightforward and probably the least complicated solution.
Some simulation tools translate model descriptions written in a tool-dependent modeling language into another programming language. This intermediate code is then compiled to an executable file. It is possible to modify this code to realize the HLA extensions.
This solution is well suited for tools that offer an open and extensible architecture. The tool should offer a library interface with the ability to call functions or methods in these libraries. Additionally, the tool should make it possible to implement callback functions or methods.
The last solution, for tools which cannot be connected to the RTI by any of the prior methods, is the development of a gateway program. The gateway program could communicate with the simulation tool via appropriate means (files, pipes, network) depending on the capabilities of the simulation tool.
(Strassburger et al., 1998)
Hospital emergency departments are having to cope with increasing pressures from competition, reimbursement problems and healthcare reform. The hospital’s customers are less willing to accept long waits in any department, but especially in the Emergency department.
Simulation was done for a hospital where the average patient waiting time in the emergency department was 157 minutes, significantly greater than the acceptable average of 120 minutes.
MedModel is a healthcare industry-specific simulator package with some advantages over other products.  These advantages include the ability to capture and release resources, the use of pathway networks to allow resources to walk up and down hallways and through doors, and graphical interactions.
The first step was to identify the process. In this case, the process was patient flow through Emergency Services. The study would focus on all the steps occurring from the time the patient entered the emergency department until the patient was released, admitted to a ward, or transferred to another department.
The objective was to reduce the patient’s length of stay. Each alternative could be tested on-screen and evaluated for effectiveness.
The model should be planned and defined up front, with the data collection requirements thought out and scheduled in advance. Failure to take the time to design the model is one of the biggest reasons for projects not being completed on time.
The need to have a central database of information about patient visits became apparent. The type of data needed for the study was the same data that is needed over and over again to track progress and for assessment of current trends. 14 categories of patients were identified, flow charts were made for each type.
The model included 17 resources, 4 entities, 29 shifts, 6 result files, 20 variables, 20 attributes, 1 array, 8 subroutines, 12 macros, 8 function tables, 2 distribution tables, 11 arrival cycle tables, and patient processing and routing logic.
(McGuire, 1994).
Verifying the model is a process of comparing the actual patient flow with the on-screen patient flow. The various documents (flow charts, arrival rates, etc) should be combined with records of the various team meetings to form an “assumption document”.
Validation involves testing the model to ensure that the actual system length-of-stay times are mirrored by the simulation model. This particular model was particularly difficult to validate: any patient type with a significant admission rate would not validate.
The final three stages involve setting up alternatives, running each alternative, evaluating each one, and choosing the one(s) that best suit the initial goal. Successful simulation studies depend on the cooperation of each department that is affected by the study and that affects its objective.
Simulation models in Java can be made widely available: an applet can be retrieved and run without being ported to a different platform or even recompiled and linked. Java provides a high degree of dynamism (applets run in browsers). Java threads make it easier to implement simulation following the process-interaction paradigm, and there is built-in support for animation.
Simjava – a process-based discrete event simulation package for building working models of complex systems, with facilities for representing simulation objects as animated icons on screen. DEVSJAVA – an environment based on DEVS (Discrete Event System Specification); it supports the High Level Architecture, agent-based modeling and the System Entity Structure. JSIM – an environment supporting web-based simulation as well as component-based technology; it uses Java Beans to create reusable components, and simulations can be built using either the event package or the process package. JavaSim – a Java implementation of the original C++SIM simulation toolkit from the University of Newcastle upon Tyne, UK.
JavaGPSS – a simulation tool which was specifically designed for the internet.
Silk – a commercially available general-purpose simulation language based on a process-interaction approach and implemented in Java. WSE (Web-Enabled Simulation Environment) – combines web technology with the use of Java and CORBA.
(Kuljis et al., 2000)
Major market trends are driving manufacturing from mass production to mass customization. Manufacturing enterprises could follow a virtual manufacturing operation composed of three modules: an agent architecture that decomposes the system to address both information modularity and the physical realities of manufacturing; a simulator and an infrastructure to support the implementation of the agents.
Exploration, discovery-based learning, and learning by doing are valuable methods of learning that give a learner a feeling of involvement. Learning how to build models is best done by actually building models. Various military training applications make up the bulk of training environments that are migrating to the web.
Simulation has seen a great deal of use in the military domain, as the previous slides show. A web-based system called ASTAR (the Army Standards Repository System) was developed by the Army Model and Simulation Office to enhance the Army’s decentralized, consensus-based standards development process. A web-based tool facilitates the Standards Development Process.
Large scale computer simulations can take days to run and produce massive amounts of output. An example of a scientific simulation application is the Weather Scenario Generator – intended to mine a very large array of environmental data and provide results to a user at interactive speed.
Web based simulation of autonomous software agents is another example of web based environments used to explore the potential of the web and new software technologies. Another example is the problem of controlling crowds in public places. Simulations can be used to model these problems.
(Kuljis et al., 2000)
(Hubal, 2003)  Federal and corporate funding were used to develop a training aid for police officers who need to deal with the mentally ill.   The application is called JUST-TALK.  It is built upon five basic scenarios.  All scenarios involve a young, adult male behaving strangely.  The youth walks into the middle of a street and is almost run over by a car.  The officer responds to the report and finds the youth either sitting on a bench, standing near it, or pacing around it. 
Through gestures and a natural language interpreter, JUST-TALK allows the officer in training to determine if the virtual human is schizophrenic, paranoid, depressed, sad, but normal, or stressed but normal. 
The model is based on the interaction with a lifelike virtual human which displays emotion and responds to commands in some manner.  The developers of the software believe that the simulation along with classroom instruction and role playing will lead to an effective learning environment.
(Hubal, 2003) JUST-TALK utilizes an architecture called AVATALK to allow conversations between people and virtual humans. The human user gets to see and hear responses from their virtual human subject. The system is broken into three main components: 1) Language Processor – this module breaks down human speech into a semantic representation suitable for interpretation. Once interpretation is performed, the response is fed back from the Behavior Engine. The Language Processor then works in reverse and speaks or formulates a facial or hand gesture.
2) Behavior Engine – dynamically loads the context and the knowledge needed by the Language Processor. 
3) Visualization Engine – takes gesture, movement, and speech output and uses a 3-D virtual human to perform the requested actions.  The mouth moves to lip synch the words using a morphing of the 3D model and playing selected animation frames created by motion capture techniques. 
Situation awareness is hard for fighter pilots to maintain outside of live action zones. The simulator allows skill development without the expense and risk of live flight. While nothing can replace live flight, simulation is a key training tool.
Simulation is high on the list of requested activities by pilots, training managers, and air weapons controllers. 
McDonnell developed the SIMNET (“Simulator Networking”) program under sponsorship of the Advanced Research Projects Agency and the Army. The key feature of the simulator was the use of independent processors in a distributed environment, which allowed independent updates of target and pursuit participants.
The design utilized F-15 cockpits with stick and throttle controls.  Each has a color touch screen front panel. 
The simulator was rated higher than actual flight training in a number of different areas. The reason for this satisfaction is clear: at no time is the pilot actually at risk. In multi-bogey (enemy airship) situations, the skies become crowded and the risk of a mid-air collision increases.
Similar reasons exist for the other high-satisfaction areas. For number 2, no missiles are actually being fired at the pilot, so evasive maneuvers can be practiced without risk.
Electronic countermeasures can be employed without loss of radio contact or radar. 
Escort tactics can be simulated without fear of mid air collisions.
Visibility can be changed during any heavy weather.  Low altitude can be practiced without crashing into the ground.
Using touch screens instead of rudder pedals did not lower the pilot’s opinion of the simulator.
Despite the fact that the pilots were highly satisfied with the design of the simulator in many areas, some areas were lower in satisfaction than real flight.
The images that were projected on the forty foot dome were not as sharp and clear as the view in the sky. 
Formation variations and realism were limited because the projections were at a fixed distance from the cockpit.
The limited clarity of the image hindered identification of the aircraft.
Mutual support to attack bogeys was not as realistic as in combat situations.
The biggest influence on the overall satisfaction was the software version of the radar.  The simulator used an older version than the one in the F-15.  This changed tactics on occasion.
(Cavazza, 2003) Virtual humans have become important in the development of intelligent user interfaces. Their role can be as intelligent assistants and instructors, or they can become part of the actual training scenario, e.g. as a patient.
This simulator integrates a knowledge-based system with a 3-D virtual patient.  The medical students use the system to learn decision making during cardiac emergencies.
The use of a 3-D patient allows full view of a patient’s internal structure.  
The work was spun off from Badler’s work on battlefield casualties.  In Badler’s system, field medics would learn how to deal with emergency treatment.  Cavazza’s simulation differs by using a physiological model of the patient’s internals. 
Prior to invasive action, the trainee can also see the patient’s appearance and read-outs that would appear on a hospital’s heart monitor. 
(Cavazza, 2003) The appearance of the patient varies according to his or her vital signs during the coronary emergency. The input to this appearance includes the patient’s Heart Rate (HR) and Mean Arterial Pressure (MAP). Complementary examinations reveal other parameters. For example, if a cardiac catheterization is performed, the trainee can obtain pulmonary capillary pressure (Pcap).
Instructions to the virtual nurses are given via speech recognition using a simple command language, comprised of commands and their parameters. For example, a command might be “Inject drug {Drug name, dosage}”.
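A command of that shape is easy to parse mechanically. The sketch below assumes a "verb phrase followed by a braced, comma-separated parameter list" grammar; only the single "Inject drug {…}" example appears in the source, so the general form is a guess:

```python
import re

def parse_command(text):
    # Split 'Verb phrase {param, param}' into a command string and a
    # list of parameters. Returns None if the text does not match.
    m = re.match(r"\s*([^{]+?)\s*\{(.*)\}\s*$", text)
    if m is None:
        return None
    return m.group(1), [p.strip() for p in m.group(2).split(",")]
```

Such a restricted grammar is what makes speech recognition practical here: the recognizer only has to distinguish a small set of verbs and parameter slots, not free-form speech.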
An initial state of clinical conditions is given to begin the event.  These conditions propagate to a reaction set of symptoms requiring diagnosis.  When a treatment is given, the symptoms are updated until a steady state is reached.
The virtual presentation of the patient and the surrounding area are representative of the intensity and gravity of the ER situation.  The lighting, available equipment, patient’s appearance, background noise, and in room sounds intensify based on the condition of the patient.  This allows the realism needed to simulate a life and death situation.
As needed the trainee can turn the patient and view the equipment placed in the room.  Monitors can be viewed to show the patient’s vital signs.  Nurses respond to voice commands or to those given through a series of menus.  Drugs can be administered. 
Animation of the patient shows response to the various treatment given.  This animation might show labored breathing or writhing. 
All aspects of the 3-D environment are used to provide as much realism as possible.
(Duncombe, 1997) Mobile Subscriber Equipment (MSE) is used by the Army for battlefield voice and data communications. The System Control Center (SCC) provides battlefield personnel with orders and receives reports back from the field. The SCC is to be replaced by the newer Network Management Tool (NMT). To prepare the troops for the new NMT, GTE Corporation created the Communications Network Simulator (CNS). This simulator provides tactical situations requiring precise execution of tasks.
The first area in which operators are trained is setting up the system. These processes include initializing the software and the communications network and connecting the system to the message generator. The operator also learns to deal with normal operations, degraded operations, maintenance and administration of the system, and shutting down and tearing down the system.
Messages flow from the NMT at a simulated rate of 231 per hour based on a plan.  At least 130 of these messages can be from special communications outside of the active area. These special areas include Line of Sight (LOS) radios and NATO interfaces.  Emergency situations simulated include loss of partial communications, loss of equipment, and disabling of personnel.
(Duncombe, 1997) The NMT is tied into seven subsystems, six of which are currently operational and one, Battlefield Spectrum Management, is planned as a future enhancement.  These subsystems are:
1.COTS/Utility software – commercial software, such as e-Mail, word processing, and spreadsheets are used to supplement custom NMT functions.
2.System Administration – provides control over the network, communications, and data systems.
3.Soldier/Machine Interface – provides graphical maps and evaluation of terrain.
4.WAN Management – provide simulation WAN plan activation, network configuration and monitoring, and WAN messaging.
5.Battlefield Spectrum Management – future enhancement to provide radio frequency allocation and assignment, threat analysis, and data entry.
6.Network Planning and Engineering – Access and maintain plan database, change plan defaults.
7.Application Executive – provide menus, data downloads, user access, messaging windows.
ISD was one of the first systematic training models ever developed. It was used in World War II by the US military to train soldiers in aircraft recognition. This is a synthesized high-level framework for evaluating training effectiveness.
The approach is based on the ISD process and contains six major components: training task identification, training proficiency evaluation, training task prioritization, identification of simulation training support, simulation training execution and feedback.
The methodology requires that tasks be identified in advance of training. Each task is then ranked, relative to the others, by the military unit undergoing training. The weights reflect the importance of each task to mission accomplishment. Precise, measurable standards are best for evaluating the training proficiency of the military unit.
(McGinnis et al., 1996)
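The weighted-ranking idea can be sketched as a simple proficiency roll-up. The task names, weights, and proficiency scores below are invented for illustration and are not taken from the McGinnis framework:

```python
def unit_readiness(tasks):
    # Weighted training proficiency: each task carries a weight (its
    # importance to mission accomplishment, as ranked by the unit) and
    # a measured proficiency in [0, 1]. The roll-up is the
    # weight-normalized average of the proficiencies.
    total_weight = sum(w for _, w, _ in tasks)
    return sum(w * p for _, w, p in tasks) / total_weight

# Hypothetical tasks: (name, weight, measured proficiency)
tasks = [("call for fire", 2.0, 0.9),
         ("convoy escort", 1.0, 0.6)]
```

The point of the weighting is visible in the example: a shortfall on a heavily weighted task pulls the unit's score down more than the same shortfall on a lightly weighted one.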
Simple Virtual Environments (SVE) are used as the backbone of this application.
VE includes a furnished single family home, a fire truck, a fire hydrant, various tools and firefighters. The trainee issues commands to the virtual firefighters. The command entry is performed by an operator who translates the verbal commands of the trainee into commands in the GCI.
GCI reduces input errors, provides fast and easy entry of commands into the system and allows the operator to monitor the status of the fire teams in the VE. GCI is a standalone application that uses TCP/IP sockets to exchange messages with the VE.
The A* algorithm is used for planning the path of the firefighter. A path from the firefighter’s current position to the destination that avoids all obstacles must be found.
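A compact version of A* on a 4-connected grid is sketched below, using the Manhattan-distance heuristic (a common admissible choice; the paper's exact cost model and world representation are not given):

```python
import heapq

def astar(grid, start, goal):
    # A* on a 4-connected grid; grid[r][c] == 1 marks an obstacle.
    # Manhattan distance is the (admissible) heuristic, so the first
    # path popped at the goal is a shortest one.
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), 0, start, [start])]   # (f, g, position, path)
    seen = set()
    while frontier:
        f, g, pos, path = heapq.heappop(frontier)
        if pos == goal:
            return path
        if pos in seen:
            continue
        seen.add(pos)
        r, c = pos
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                step = (nr, nc)
                if step not in seen:
                    heapq.heappush(frontier,
                                   (g + 1 + h(step), g + 1, step, path + [step]))
    return None  # no obstacle-free path exists
```

In the firefighter application the grid cells would correspond to walkable regions of the house, with furniture and walls marked as obstacles.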
Animation and models were done using 3D Studio Max. Animation includes: cutting, chopping, walking, crawling, climbing, pulling and spraying a hose, etc.
NIST’s Fire Dynamics Simulator is used to compute realistic physical fire and smoke behavior and to output volumetric data inside the house.
Run Length Encoding compression is used to compress the data files to a manageable size.
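Run Length Encoding itself is simple: runs of identical values collapse into (value, count) pairs, which is effective on volumetric data where large regions of the volume hold the same value. A minimal encoder/decoder pair:

```python
def rle_encode(values):
    # Collapse runs of identical values into [value, run-length] pairs.
    runs = []
    for v in values:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([v, 1])       # start a new run
    return runs

def rle_decode(runs):
    # Inverse operation: expand each [value, count] pair back out.
    return [v for v, n in runs for _ in range(n)]
```

A mostly-empty smoke volume (long runs of zeros) compresses well under this scheme, while highly turbulent regions compress poorly; that trade-off is what makes RLE a reasonable fit for this data.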
A voxel-based splatting renderer is used to draw the fire and the smoke.
This project is a prototype and is still in development. Currently, the authors are working to improve some aspects of the application and to add new features.
(St. Julien et al., 2003)
Student employees have seemingly unlimited reserves of energy and potential. However, they are often not willing to expend either of these at work.
Examining outstanding employees – the simplest thing to do was to look at the current employees and decide what skills and abilities the best students possessed.
Skills that can be taught – technical skills and knowledge.
Skills that can’t be taught – patience, people skills, and responsibility.
The main reason why students are in school is to get an education. Work is a secondary goal and we must keep that in mind. Also, we need to take into account the time required for study, the year the students are in and their computer skills.
Always look for people who are willing to work hands-on with clients, people who are self-monitoring and have an interest in advancement, people who have previous customer service experience.
Minimum time – usually very little time is available for training new student employees.
Maximum abilities – rules and policies are covered first. Next, cover the basic functions of the job. Then, very importantly, show students where they can find the relevant information needed to solve a problem. Also train them in specific tasks.
Multiple media forms – use various types of media when training new employees. Always have printed copies of things, even if only summaries.
Web pages, databases, interactive websites – very useful tools to aid and supplement training.
Training topics – provide several topics for future exploration, so that the employees will continue to familiarize themselves with things.
Instructor-led training – most people learn best in an environment where there is an instructor or someone to interact with.
Self-paced training – make sure that the employees have the ability to work independently.
To track student progress, we could use the point system – a student works to earn points, which can be exchanged for a raise.
Making students want to improve is not easy. Promises of a raise can have very powerful results. Other incentives are non-monetary: recognition of merit in front of the other employees is one example.
(Osborn, 2000)
Strategic directions are areas in which simulation can be applied immediately, but where we have not taken full advantage of the technology that is available. It is possible to embed simulation models in the operating systems of computer systems. These systems can feed data about their operations into a data store accessible to the simulation process. Periodic execution of these models would evaluate the performance data and identify operational trends.
The world is filled with opportunities to apply computer simulations to assist in real-time decision making. Anywhere that information is available in digital form and humans are evaluating that information to make decisions, there is an opportunity to support the human with simulation. Combat consulting, aircraft navigation, and crowd management are just three examples of potential application areas.
We need to create virtual environments that are persistent over many years and that form the foundation for specific studies, training and entertainment that will be conducted within them. The gaming community has already taken some steps in this direction. Similar virtual worlds need to be created by high-level sponsors of studies and training events.
In the simulation business we strive to create virtual worlds that accurately represent the real world. However, there are few simulations that portray a really convincing virtual world. Statistically accurate simulations are excellent for many applications, but we need to focus also on the richness of the environments.
(Page et al., 1999)
Many simulations are driven by statistical distributions that characterize the average behavior of a system, but do not claim accuracy for individual events or small time intervals. These distributions do not model instantaneous behaviors of intelligent or reactive beings. We need techniques for inserting intelligent, reactive, unique human behavior in the virtual world.
It should be possible to develop an architecture that supports an entire domain of simulation systems, providing a large common pool of functionality.
Distributed simulations cannot exist without sufficient reliable communication bandwidths for delivering events and synchronizing execution of the entire system. This bandwidth is currently one of the limiting factors on the size of a distributed simulation. Since bandwidth is a problem for every internet based application, a lot of commercial research is being done in this direction.
A lot of research has been done to discover techniques for practical and efficient synchronization of distributed simulation processes. We must identify applications that are well served by the different modes of event management. Research needs to find a practical and valuable home in commercial, government and military simulation systems.
(Page et al., 1999)