13.4 Managing Rapid Growth in the Governor’s Office of Regulatory Assistance (GORA)

GORASim: User Instructions and Model Training Lab

To manage the initial symptoms of performance problems at the Governor’s Office of Regulatory Assistance, Eliot Benchman hired an expensive team of Rockefeller College consultants to develop a simulation model for analyzing several policies for GORA. The model, named GORASim,[1] is available through a web interface at https://forio.com/app/naspaa/gora/ (see Figure 13.4). You will use this tool to test potential policies and recommendations for Eliot Benchman.

Figure 13.4 Online interface for GORASim.

The current version of the simulation model can test the following policy options:

  1. Increasing the number of State supported positions
  2. Improving the efficiency of processing new hires to fill vacancies
  3. Reducing the time to gain experience through training or mentoring programs
  4. Increasing productivity through the use of technology or lean practices
  5. Adding a fee to the service through a third-party partner

The simulation starts at month zero, when GORA was created by the Governor, and simulates 50 months of operation of the program. Because the first 12 months of operations have already happened, any policy will only affect indicators after month 12.

For the purposes of this exercise, assume that GORASim represents the best simulation model available to Benchman. Your task as consultant is to implement multiple simulation runs corresponding to different policy alternatives, policy packages (i.e., implementing two or more policies simultaneously), and scenario analyses. You will use these runs to understand the long-term effects of the policies and the trade-offs between policy alternatives, and to recommend the best course of action.

Exploring Policy Alternatives: Introduction to GORASim

This worksheet will familiarize you with the GORASim model so that you can experiment with policy alternatives for Eliot Benchman and learn about the consequences of each policy for performance. Please work through the following steps in teams.

Step One: Load GORASim

  • Use your favorite browser to load the model interface from https://forio.com/app/naspaa/gora/. There are five tabs, described below. As shown in Figure 13.4, you can navigate between tabs by clicking on the links in the upper part of the interface.
  1. Introduction. This page contains a brief description of the case.
  2. Decisions. This view is a “cockpit” from which you can run the model. It includes all of the policy options described in the previous section. The default values in this view were selected by the simulation experts based on historical data. The only exception is the number of State Supported Positions after the first year, which reflects the initial projections into the future introduced in the case description, also based on historical data.
  3. Dashboard. This view includes the results of the simulations. Before you run any experiment, the page shows a “Baseline” simulation, which uses the default values from the Decisions page; this run is commonly used as a benchmark for any change in the decisions. The page reports the average completions per month and the current backlog of requests, representing the average closed requests and the open requests, respectively. It also includes graphs of the behavior over time of the workload ratio, the service delivery delay, and market saturation. For the workload ratio and the service delivery delay, the 100% mark represents a situation where workload and waiting times are as expected; values above 100% indicate a problematic situation with excessive workloads and waiting times. Market saturation is the percentage of potential clients in the State being served.
  4. Staff and Quality. This view includes charts for other important parameters in the simulation. Total staff and fraction experienced represent the total number of employees and the percentage of them that are fully trained to provide services. Productivity represents the average number of requests completed by each employee per month. The view again includes the workload ratio and the delivery delay, and adds an index representing the quality of the service provided. A quality value of 100% or above represents good quality; any value below 100% indicates problems with the quality of the service. All of this output can be used to explore the impacts of each policy change (see the sketch after this list for one way to read these thresholds).
  5. Run Manager. This page allows you to delete runs and to select which runs are displayed in the output views.
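GORASim reports these indicators only on its charts; its internal calculations are not exposed here. If you jot down a few readings by month, a short script can flag the problematic months against the 100% thresholds. The sketch below is a minimal illustration in Python; the monthly values are hypothetical placeholders, not output from the model.

```python
# Hypothetical indicator readings (percent) jotted down from the GORASim charts;
# replace them with values you read off the Dashboard and Staff and Quality views.
workload_ratio = {12: 95, 24: 110, 36: 140, 48: 160}   # >100% = excessive workload
delivery_delay = {12: 100, 24: 120, 36: 150, 48: 180}  # >100% = long waiting times
quality        = {12: 100, 24: 95, 36: 85, 48: 70}     # <100% = quality problems

for month in sorted(workload_ratio):
    problems = []
    if workload_ratio[month] > 100:
        problems.append("excess workload")
    if delivery_delay[month] > 100:
        problems.append("long waits")
    if quality[month] < 100:
        problems.append("low quality")
    print(f"Month {month}: " + (", ".join(problems) if problems else "on target"))
```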

Step Two: Run each of the policies from the “Decisions” view, one at a time. Figure 13.5 shows a screenshot of this interface.

Figure 13.5. Decisions View.

  • In the Decisions view, set the desired policy to test, for example, 18 State Supported Positions after First Year. Give your simulation run a descriptive name and click on the “Save & Simulate” button to implement each policy. Create different runs in which you make slight changes to the input values for each policy, such as setting State Supported Positions to 18, 22, or 26.
  • Explore the output of the model from the Dashboard and the Staff and Quality views, and discuss the graphs with your team. Can you hypothesize a potential causal explanation between your decision and the output?
  • You can download your favorite charts to include in any other document. Hover the mouse over the desired chart and click on the “Download Chart” button that appears at the bottom of the chart.
  • Be systematic in your exploration of policies. Make sure that you compare every policy with the “Baseline” run (where you do not implement any policies), to see the true impact of a policy.
  • Make notes about the runs. Document which inputs you changed, your thoughts on causal explanations for your findings, and any questions you have about how the model works, the policy, or the behavior you see in the graphs over time. A simple run log, like the sketch after this list, can keep these notes organized.
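One way to keep such notes structured is a small run log written to a CSV file. The sketch below is only an illustrative Python template; the run names, inputs, and hypotheses are placeholders you would replace with your own entries, and the output values are left as "TBD" until you read them off the charts.

```python
import csv

# Illustrative run log: one row per GORASim run.
runs = [
    {"run": "Baseline", "inputs_changed": "none (default values)",
     "key_output": "TBD", "causal_hypothesis": "benchmark for comparison"},
    {"run": "Positions 18", "inputs_changed": "State Supported Positions = 18",
     "key_output": "TBD", "causal_hypothesis": "more staff should raise completions"},
    {"run": "Positions 26", "inputs_changed": "State Supported Positions = 26",
     "key_output": "TBD", "causal_hypothesis": "hiring and training delays may limit short-run gains"},
]

with open("gora_run_log.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(runs[0].keys()))
    writer.writeheader()
    writer.writerows(runs)
```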

Step Three: Explore combinations of two or more simultaneous policies.

  • Run several policies at the same time. Discuss the results with your group and write down notes about these runs, including which inputs you changed, your causal explanations, and any questions.
  • Find a “policy package” of multiple policies that creates a good outcome, from your point of view. This will take some time, as you may need to rerun numerous combinations of policies and then explore all of the output to understand what the model is doing and why. Take careful note of counterintuitive effects, where combining policies yields different results from the runs in which you tested individual policies. Listing the combinations before you start, as in the sketch below, helps keep the exploration systematic.
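A short script can enumerate candidate combinations and generate consistent run names to use when you click “Save & Simulate.” The policy levers and levels below are hypothetical placeholders, not values prescribed by the case; adjust them to match the inputs in the Decisions view.

```python
from itertools import product

# Hypothetical policy levers and candidate levels for a combination sweep.
positions = [18, 22, 26]              # State Supported Positions after first year
training  = ["default", "faster"]     # time to gain experience
fee       = ["no fee", "with fee"]    # third-party fee option

for i, (pos, train, fee_opt) in enumerate(product(positions, training, fee), start=1):
    # Use this name when saving the run so results stay traceable.
    print(f"Run {i:02d}: positions={pos}, training={train}, fee={fee_opt}")
```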

[1] GORASim was developed by Mohammad Mojtahedzadeh. The web interface was developed by Luis F. Luna-Reyes.

Attribution

By Luis F. Luna-Reyes, Erika Martin, and Mikhail Ivonchyk, and licensed under CC BY-NC-SA 4.0.

License


Data Analytics for Public Policy and Management Copyright © 2022 by Luis F. Luna-Reyes, Erika G. Martin and Mikhail Ivonchyk is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted.
