
Thursday, October 17, 2024

Exploring Multicore Analysis for Simulation

When it comes to speeding up model analysis, Multicore Analysis (MCA) is a game-changer. By distributing your model workload across multiple instances of ExtendSim, you can significantly enhance performance and efficiency. Here’s how you can make the most of it.

Why Use Multicore Analysis? 

Think of MCA as having multiple hands working on a task simultaneously. Instead of running a model or models sequentially to test different scenarios, you can run multiple instances of the model(s) in parallel. This not only saves time but also allows for more comprehensive analysis.

Running Copies of a Model in Parallel 

Instead of sequentially running one model to test various scenarios, use MCA to spawn Child-Nodes (new instances) of ExtendSim. These Child-Nodes can perform parallel model execution, drastically reducing the time needed for analysis. 
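MCA itself is driven through ExtendSim’s interface rather than code, but the core idea is easy to sketch in any language. Below is a hypothetical Python sketch (invented names, not ExtendSim’s API) contrasting running scenarios one after another with handing them to parallel worker processes:

```python
# Hypothetical sketch: run_scenario() stands in for executing one copy of a
# model with a given parameter set; it is not an ExtendSim function.
from multiprocessing import Pool
import random
import time

def run_scenario(params):
    """Pretend to run one simulation scenario and return its results."""
    random.seed(params["seed"])
    time.sleep(0.1)                       # stand-in for actual simulation work
    return {"scenario": params["name"],
            "throughput": random.gauss(100 * params["servers"], 5)}

if __name__ == "__main__":
    scenarios = [{"name": f"S{i}", "servers": i, "seed": i} for i in range(1, 5)]

    # Sequential: total time is roughly the sum of the individual run times.
    sequential_results = [run_scenario(s) for s in scenarios]

    # Parallel: each scenario runs in its own worker process, so total time is
    # roughly the longest single run (the idea behind MCA's parallel instances).
    with Pool(processes=4) as pool:
        parallel_results = pool.map(run_scenario, scenarios)

    print(parallel_results)
```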

Open Additional Instances for Development 

The Multicore Analysis feature also lets you manually launch additional instances of ExtendSim so you can run multiple models concurrently. This can simplify model debugging: you can run different models on separate cores, making it easier to identify and fix issues without slowing down your workflow.

Benefits of Multicore Analysis

  • Speed: Running models in parallel significantly cuts down the time required for analysis.
  • Efficiency: It allows for handling larger and more complex simulations that would be impractical to run sequentially.
  • Scalability: As your simulation needs grow, you can run models concurrently on more than the default 4 instances of ExtendSim. For details and pricing, visit ExtendSim.com.

Conclusion 

Multicore Analysis is a powerful tool for anyone looking to enhance their simulation capabilities. By distributing workloads across multiple instances of ExtendSim, you can achieve faster, more efficient, and scalable model analysis. Whether you’re running multiple instances for parallel execution or simplifying debugging, MCA can make a significant difference in your workflow.

Thursday, October 3, 2024

What are Response Definitions and why are they so critical?

Response definitions create results

When setting up a simulation, defining responses is crucial. These responses are the model outputs you select to automatically collect results from each replication. Let’s dive into the different types of responses you can set up in the ExtendSim Analysis Manager block and how they help you streamline your analytical processes so you can capture the simulation results you need.

Understanding Response Definitions in Simulations 

Responses are essentially the data points you want to track during your simulation. By defining these, you ensure that the results you need are collected automatically after each run, saving you time and effort. Here’s a bit more detail:

  1. What They Are: Response definitions specify what you’re looking to measure in your simulation. This could be performance metrics, error rates, throughput, or any other data points crucial to your analysis.
  2. How They Work: When you set up your simulation, you define these responses in the Analysis Manager block. For example, if you’re simulating a manufacturing process, your responses might include the number of units produced, the time taken for each unit, or the defect rate.
  3. Data Collection: At the end of each simulation run, the Analysis Manager block automatically collects the data based on these response definitions. It then stores this data in the Analysis database for you to review and analyze later.
  4. Why They Matter: Clear response definitions help ensure that you’re capturing all the data needed to evaluate the performance and outcomes of your simulation accurately. They make your analysis more structured and meaningful; the sketch below illustrates the idea.
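Response definitions live in the Analysis Manager’s dialog rather than in code, but the idea is easy to picture in a few lines. Here is a hypothetical Python sketch (made-up names and outputs) of declaring responses once and then collecting only those outputs after every replication:

```python
# Hypothetical sketch: the names below are invented, not ExtendSim identifiers.
from statistics import mean

RESPONSES = ["units_produced", "cycle_time", "defect_rate"]   # what to track

def run_replication(rep):
    """Stand-in for one simulation run returning every model output."""
    return {"units_produced": 950 + rep, "cycle_time": 4.2,
            "defect_rate": 0.01, "scrap_mass": 12.0}

results_db = []                           # stand-in for the Analysis database
for rep in range(5):
    outputs = run_replication(rep)
    # Only the declared responses are kept; everything else is ignored.
    results_db.append({name: outputs[name] for name in RESPONSES})

print(mean(row["units_produced"] for row in results_db))      # quick summary
```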

Types of Responses You Can Define

Remember from last week’s article, Introducing the New Analysis Manager Block, that the Analysis Manager acts as a data management system for consolidated control of parameters and collection of model results. It automatically creates an Analysis database and stores all your core analytical process definitions for you, both factor and response definitions, plus it collects and catalogues results from your replications for superb record-keeping and further analysis. The Analysis Manager can collect:

  • Block Responses can be added to the Analysis Manager using either the:
    • Right-click Method: Simply right-click on any output parameter or checkbox in any block dialog or on a cloned output in your model and choose Add Response.
    • Search Model Method: Click the Search Model button to open the dialog of the Search Blocks block (found in the Utilities library). This allows you to build a filtered list of blocks and their associated dialogs to add as responses to your model.
  • Database Responses are added by clicking the green +/- button in the lower right corner of the DB factors table and selecting Add DB response(s) to open the Database Address Selector. From here, you can choose a field or record to use as a response. 
  • Reliability Responses - If your model includes one or more Reliability Block Diagrams (RBDs), the Responses tab of the Analysis Manager will display a table for adding Reliability Responses. Add them using either the:
    • Edit in DB Button: Opens the Reliability Responses table for direct editing.
    • Use Model Data Button: Fills the Reliability Responses table with all the fail-modes currently defined in the model. 

So, in a nutshell, response definitions are your way of telling the Analysis Manager exactly what results you’re interested in. This ensures that all the important data is collected and stored in the Analysis database systematically, making your analysis process much smoother and more efficient.

Thursday, September 26, 2024

Introducing the New Analysis Manager Block

Hey there, simulation enthusiasts! 🌟 I’m excited to share some insights about a fantastic new tool from the Analysis library in ExtendSim – the Analysis Manager block.
This nifty block is designed to streamline your analytical processes and make your life a whole lot easier. The Analysis Manager acts as a data management system for consolidated control of parameters and collection of model results. Here's how:


1. Automatic Database Creation

First off, the Analysis Manager block takes the hassle out of data management. It automatically creates an Analysis database and stores all your core analytical process definitions for you, both factor and response definitions, plus it collects and catalogues results from your replications for superb record-keeping and further analysis. No more manual setups – just plug and play!

2. Declaring Factors and Responses

Define the factors (inputs) you want to experiment with and, from this one location in the Analysis Manager, try out different input values. Define the responses (outputs) you want collected from experiment runs. These factors and responses from your blocks and/or databases will be neatly stored in the Analysis database right alongside replication results. It’s like having a personal assistant for your data!

3. Running Replications Made Easy

But wait, there’s more! Once you’ve declared your factors and responses, the Analysis Manager block steps up its game by helping you run your replications. Here’s how it works:

  • Initial Run Setup: At the beginning of the first run, it uses your factor definitions to update the input values in your model. This ensures everything is set up perfectly from the get-go.
  • Result Collection: At the end of each run, it collects the results based on your response definitions and stores them in the Analysis database. It’s like having a meticulous record keeper who never misses a detail. The sketch below walks through this loop.
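For readers who think in code, here is a hypothetical Python sketch (invented names, not ExtendSim’s API) of that loop: factor values are pushed into the model before each run, and the declared responses are pulled out and catalogued afterwards.

```python
# Hypothetical sketch of the run loop; run_model() is a stand-in, not ExtendSim.
factor_settings = [{"num_machines": 2, "arrival_rate": 10.0},
                   {"num_machines": 3, "arrival_rate": 10.0}]
responses = ["throughput", "avg_wait"]
analysis_db = []                                  # stand-in for the Analysis database

def run_model(inputs, seed):
    """Stand-in for one replication of the model with the given inputs."""
    service_capacity = 4.0 * inputs["num_machines"]
    return {"throughput": min(inputs["arrival_rate"], service_capacity),
            "avg_wait": 1.0 / max(0.1, service_capacity - inputs["arrival_rate"]),
            "seed": seed}

for setting in factor_settings:                   # initial run setup: apply factors
    for rep in range(3):
        outputs = run_model(setting, seed=rep)
        record = dict(setting)                    # keep factors with the results
        record.update({name: outputs[name] for name in responses})
        analysis_db.append(record)                # results catalogued per run

print(len(analysis_db), "records collected")      # 2 settings x 3 reps = 6
```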

And there you have it! The Analysis Manager block is all about making your simulation analysis smoother and more efficient. Give it a try and see how it transforms your workflow. Happy simulating! 🚀

Thursday, July 2, 2020

Race Conditions

written by Dave Krahl, QMT Group
In computer science and engineering, a race condition is "an undesirable situation that occurs when a device or system attempts to perform two or more operations at the same time, but because of the nature of the device or system, the operations must be done in the proper sequence to be done correctly". In discrete event simulation programs such as ExtendSim, a race condition occurs when multiple events happen at the same simulated time and the order of those events has an impact on the operation of the model. Because discrete event simulators process events at discrete times, race conditions are fairly common. There is no standard solution to this, and it is handled differently by the various simulation software programs.

When an event occurs in a discrete event simulation model, any number of actions can be triggered, and the order of these actions can have a significant effect on the behavior of the model. A classic example: a resource is released, the item (or entity) releasing that resource immediately requests the same resource again, and other items are already waiting for that resource. Is the releasing item in contention for the resource?
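To make the order-dependence concrete, here is a deliberately simplified Python toy (not ExtendSim code) of that release-and-re-request example; the same set of same-time actions produces a different winner depending purely on the order in which they are processed.

```python
# Toy model: one unit of a resource, item B waiting, item A releasing and
# immediately re-requesting at the same simulated time.
def allocate(order):
    free = 0                               # A still holds the resource initially
    waiting = ["B"]                        # B is queued for the resource
    granted = []
    for action in order:                   # every action occurs at the same time
        if action == "A releases":
            free += 1
        elif action == "A re-requests":
            if free:                       # a unit is free: A grabs it directly
                granted.append("A"); free -= 1
            else:                          # otherwise A joins the queue
                waiting.append("A")
        elif action == "serve queue":      # hand free units to waiting items
            while free and waiting:
                granted.append(waiting.pop(0)); free -= 1
    return granted

print(allocate(["A releases", "serve queue", "A re-requests"]))   # ['B']
print(allocate(["A releases", "A re-requests", "serve queue"]))   # ['A']
```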

ExtendSim has a number of features that provide additional control over the sequence and scheduling of events as well as the transmission of messages. These include:
  • Scheduling a 0-time event in an equation before it is evaluated. This delays the calculation of the equation, and any messages the equation sends, until control has returned to the block that initiated the calculation (see the sketch after this list). This can be implemented in custom blocks as well.
  • Detailed control of messages in the equation and custom blocks. Whether or not a connector responds to a message is a user-defined option.
  • In custom blocks, there are message handlers for every type of interaction. These can be used to control how other messages are sent out.
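The first technique in the list is easiest to see with a tiny event-list sketch. This is generic Python rather than ExtendSim code, and the names are invented; it only shows how scheduling a zero-delay event defers a calculation until the current message chain has finished.

```python
# Generic Python sketch of deferring work with a zero-delay event; the event
# list and schedule() helper are invented for illustration.
import heapq
import itertools

clock = 0.0
tie = itertools.count()          # tie-breaker keeps insertion order for same-time events
event_list = []

def schedule(delay, action):
    heapq.heappush(event_list, (clock + delay, next(tie), action))

def handle_arrival():
    print("1. block updates its own state")
    # Zero-delay event: same simulated time, but it only fires after this
    # handler (and the rest of the current message chain) has finished.
    schedule(0.0, lambda: print("3. equation evaluated, messages sent"))
    print("2. control returns to the block that initiated the calculation")

schedule(0.0, handle_arrival)
while event_list:
    clock, _, action = heapq.heappop(event_list)
    action()
```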
While there are numerous tools for solving race condition problems, the biggest challenge is detecting them in the first place. This often requires detailed inspection of the results of an event in the simulation. Some tools available for this in ExtendSim are:
  • Tracing of simulation execution
  • Enabling debugging code and in particular examining the "stack" of function calls and message handlers
  • Animation of item movement
  • The Record Message block in the ExtendSim Utilities library, which shows sequences of messages
  • The History block, which shows item movement and properties
Race conditions are an artifact of discrete event simulation as a technology, and every experienced simulation modeler has encountered them. Identifying a race condition can be challenging. However, simulation programs provide tools to address race condition problems and control the execution of the model.

Wednesday, May 10, 2017

Improving Performance in Resource-Constrained Systems

I worked on a simulation study last year with Dr. Holt (University of Washington) and Dr. Srinivasan (University of Tennessee). The results of the study surprised me. It made me start thinking differently about variation and its effect on systems. It might change the way you look at bottlenecks in a resource-constrained system as well.

We modeled a multi-project system, but the results we found can be applied to any system. This multi-project system (think of the projects as engineering projects) required various resources at different stages. We modeled a variety of project structures. The projects we modeled used several common resources. The primary output we studied was the project flow time, or the time it takes a project to complete the system.

It is very difficult to correctly determine the appropriate workload that should be placed upon resources in this environment. There is no question that putting too little work into the system will tend to starve key resources. And while there is pressure to keep resources busy, overloading them usually results in unfavorable outcomes like projects taking too long.

In an ideal setting, work schedules can be developed in advance, so that resources have just the right amount of work allocated to them at various points in time. However, in the project world, demand is highly uncertain, workflow is quite unpredictable, and task durations have significant variability. Even the best-planned schedules become difficult to execute in this environment. And when many different resources are used multiple times in a single project and frequently shared between projects, any unexpected delay in a single task can cause significant ripple effects delaying one or more projects. Even a small delay in a task far away from a key resource can cause chaos in the complicated and interrelated schedules that exist in a project environment, and attempts to tightly schedule projects are soon abandoned.

Our study outlined several steps to dramatically improve the performance of these organizations, and I want to talk about two of them here:
  1. Determine how resources should be loaded
  2. Identify the appropriate level of reserve resources

The first step is not a new concept. This is basically controlling the amount of work in the system, and there are several approaches to implement this. In manufacturing, you might refer to it as CONWIP, or Constant Work in Process. In the project-management environment, the term is CONPIP, or Constant Projects in Process. We applied a slightly different mechanism, but it had a similar effect to CONWIP or CONPIP. In our system, we monitored the backlog of work for the resources; this backlog would generally not be completely present in the immediate queue of work at a resource. We would only release new work into the system once the backlog for the bottleneck resource dropped below a specified threshold.
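As a rough illustration of that release rule, here is a small Python sketch with assumed names and numbers; the 36-day threshold is only an example, not the study’s actual parameter.

```python
# Assumed names and numbers; the 36-day threshold is purely illustrative.
def release_work(pending_projects, bottleneck_backlog_days, threshold_days=36):
    """Release projects while the bottleneck's backlog (in days of work for
    that resource) stays below the threshold."""
    released = []
    for project in list(pending_projects):
        if bottleneck_backlog_days >= threshold_days:
            break                                    # bottleneck is loaded enough
        released.append(project)
        pending_projects.remove(project)
        bottleneck_backlog_days += project["bottleneck_work_days"]
    return released, bottleneck_backlog_days

pending = [{"name": "P1", "bottleneck_work_days": 10},
           {"name": "P2", "bottleneck_work_days": 15},
           {"name": "P3", "bottleneck_work_days": 20}]
released, backlog = release_work(pending, bottleneck_backlog_days=20)
print([p["name"] for p in released], backlog)        # ['P1', 'P2'] 45
```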

The chart below shows the first set of results from the resource loading study. The X-axis shows the resource workload. For example, at 100 percent, the bottleneck resource has enough work in the system to keep it busy for 36 days; at 200 percent, it has enough work to keep it busy for 72 days. The blue line shows the effect that an increased workload has on the average flow time. As more work is pushed into the system, the average flow time increases, meaning projects take longer to complete, customers are much less happy, and the longer flow times can be detrimental to a company. The black line plots the throughput: with an increased workload in the system, more projects can be completed, but at a certain point the increase is negligible.

The red line is something we called the project value index. This is defined as the number of projects completed over a given period divided by the 90-percent probable flow time. The project value index is a value we want to maximize (more projects completed while decreasing flow time). We tend to want to be just a bit to the right of the high point on the project value index, which gives a good balance of throughput and flow time.
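In code, the index is simple to compute. The sketch below uses a rank-based approximation of the 90-percent probable flow time and made-up numbers; it is only meant to show the calculation.

```python
# Made-up numbers; the percentile is a simple rank-based approximation.
def project_value_index(projects_completed, flow_times_days):
    flow_times = sorted(flow_times_days)
    # 90-percent probable flow time: the value 90% of observed flow times fall under
    idx = max(0, int(round(0.9 * len(flow_times))) - 1)
    return projects_completed / flow_times[idx]

print(project_value_index(24, [30, 34, 36, 40, 41, 45, 47, 52, 60, 75]))  # 0.4
```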



The next issue we studied was the use of additional resources. The results of this study are what really surprised me. The typical thought process for improving a system is to add resources to the bottleneck. Then to keep improving the system, you would find the next bottleneck to add resources to. This feels like the natural progression for improving projects, right? In the system studied, we did have a clear bottleneck. We had nine resources. When the bottleneck resource was at 100 percent utilization, the other resources ranged from 50 percent to 75 percent.

Another strategy we tried was to use an Expert resource. This is a resource that can help any other resource, not just the bottleneck; think of it as the most experienced staff member, someone who can do everything. We didn’t want this expert resource working only at the bottleneck. Because the task durations were all random, we let the expert resource help any resource, but ONLY after the task duration had exceeded its expected mean value. For example, if the expected task duration was six days and the task was not complete by day six, the expert resource would be requested to help finish it and “shorten the long tail” of the task duration. The expert resource is specifically used to reduce the long tail on the right side of the service time distribution.
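Here is a small Python sketch of that rule, with assumed names and an assumed exponential task-time distribution; it only shows the “help after the expected duration has passed” logic, not the study’s actual model.

```python
# Assumed-names sketch: the distribution and speedup factor are illustrative.
import random

rng = random.Random(1)

def task_duration(expected_days, expert_speedup=0.5):
    """Duration of one task when the expert joins only after the expected
    duration has already elapsed."""
    actual = rng.expovariate(1.0 / expected_days)    # heavy right tail
    if actual <= expected_days:
        return actual                                # finished without help
    overrun = actual - expected_days
    # Expert joins at the expected-duration mark; the remaining work goes
    # faster, trimming the long right-hand tail.
    return expected_days + overrun * expert_speedup

print([round(task_duration(6), 1) for _ in range(5)])
```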

In the chart below, we used the Project Value Index to compare the two strategies of a) adding an Expert resource, which helps reduce the long task times and b) adding a resource at the bottleneck. As you can see, using the Expert resource had a significantly better impact! Wow. I did not expect this.



Here is what I learned from this study: When a task consumes a resource for an excessive amount of time, it not only delays that project from completing, it also delays every project in the queue for that resource. So long-tail tasks have an impact on potentially all the projects in the system, not just on the individual project. Focusing on these long-tail tasks, even on non-bottleneck processes, has a bigger impact on the system than focusing only on improving the bottleneck process.

That is something you should noodle on. This concept can of course be applied not only to project-management systems but also to many other resource-constrained systems.
