Category Archives: Vensim

What’s the empirical distribution of parameters?

Vensim's answer to exploring ill-behaved problem spaces is either hill-climbing with random restarts, or MCMC and simulated annealing. Either way, you need to start with some initial distribution of points to search.

It’s helpful if that distribution is somehow efficient at exploring the interesting parts of the space. I think this is closely related to the problem of selecting uninformative priors in Bayesian statistics. There’s lots of research about appropriate uninformative priors for various kinds of parameters. For example,

  • If a parameter represents a probability, one might choose the Jeffreys or Haldane prior.
  • Where nothing else is known about a positive parameter, indifference to units, scale, and inversion might suggest a log-uniform prior.
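
For reference (these are standard textbook results, not anything Vensim computes), the usual forms are:

p(\theta) \propto \theta^{-1/2}(1-\theta)^{-1/2} \quad \text{(Jeffreys prior for a probability, i.e. Beta(1/2, 1/2))}
p(\theta) \propto \theta^{-1}(1-\theta)^{-1} \quad \text{(Haldane prior, the improper Beta(0, 0))}
p(x) \propto 1/x \ \text{on} \ [x_{\min}, x_{\max}] \quad \text{(log-uniform)}

The 1/x form is uniform in ln(x), which is why it's indifferent to the choice of units, to rescaling, and to inversion (the prior for 1/x has the same form).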

However, when a user specifies a parameter in Vensim, we don’t even know what it represents. So what’s the appropriate prior for a parameter that might be positive or negative, a probability, a time constant, a scale factor, an initial condition for a physical stock, etc.?

On the other hand, we aren’t quite as ignorant as the pure maximum entropy derivation usually assumes. For example,

  • All numbers have to lie between the largest and smallest float or double, i.e. roughly +/- 3.4e38 or 1.8e308.
  • More practically, no one scales their models such that a parameter like 6.5e173 would ever be required. There’s a reason that metric prefixes range from yotta to yocto (10^24 to 10^-24). The only constant I can think of that approaches that range is Avogadro’s number (though there are probably others), and that’s not normally a changeable parameter.
  • For lots of things, one can impose more constraints, given a little more information,
    • A time constant or delay must lie on [TIME STEP,infinity], and the “infinity” of interest is practically limited by the simulation duration.
    • A fractional rate of change similarly must lie on [-1/TIME STEP, 1/TIME STEP] for stability (see the note after this list).
    • Other parameters probably have limits for stability, though it may be hard to discover them except by experiment.
    • A parameter with units of year is probably modern, [1900-2100], unless you’re doing Mayan archaeology or paleoclimate.
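
The fractional rate bound is just the usual Euler-integration argument, sketched here for a single stock drained at fractional rate r:

S(t+\Delta t) = S(t) - r\,S(t)\,\Delta t = (1 - r\,\Delta t)\,S(t)

If r x TIME STEP exceeds 1, the stock overshoots zero and changes sign every step; beyond 2 it diverges outright. So |r| <= 1/TIME STEP is the outer limit, and in practice you want TIME STEP well below 1/r for accuracy.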

At some point, the assumptions become too heroic, and we need to rely on users for some help. But it would still be really interesting to see the distribution of all parameters in real models. (See next …)

Timing Vensim models

Need to time model runs? One way to do it is with Vensim’s log commands, in a cmd script or Venapp:

LOG>CREATE|timing.txt
LOG>MESSAGE|timing.txt|"About to run."
LOG>TIMESTAMP|timing.txt
MENU>RUN|o
LOG>TIMESTAMP|timing.txt
LOG>MESSAGE|timing.txt|"Ran."

These commands were designed for logging user interaction, so they don't offer the millisecond resolution needed to time small models. For that, another option is to use the Vensim .dll.
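
If a rough number is good enough and you'd rather not touch the .dll, one workaround is to put several runs between the timestamps and divide the elapsed time by the run count. For example (the log file name is arbitrary):

LOG>CREATE|timing5.txt
LOG>TIMESTAMP|timing5.txt
MENU>RUN|o
MENU>RUN|o
MENU>RUN|o
MENU>RUN|o
MENU>RUN|o
LOG>TIMESTAMP|timing5.txt
LOG>MESSAGE|timing5.txt|"Ran 5x - divide the elapsed time by 5."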

Generally, model execution time is close to proportional to equation count x time step count, with exceptions for iterative functions (FIND ZERO) and RK auto integration. You can use the .dll's vensim_get_varattrib to count equations (expanding subscripts) if that helps with planning for simulation speed.
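
As a rule of thumb (a rough approximation, not a guarantee):

\text{run time} \approx k \times N_{\text{equations}} \times \frac{\text{FINAL TIME} - \text{INITIAL TIME}}{\text{TIME STEP}}

where k depends on your hardware and on whether the run is interpreted or compiled.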

Vensim + data with ODBC

I haven’t had much time to write lately – too busy writing Vensim code, working on En-ROADS, and modeling the STEM workforce.

So, in the meantime, here’s a nice tutorial on the use of ODBC database links with Vensim DSS, from Mohammad Jalali:

This can be a powerful way to ingest a lot of data from diverse sources, and to share and archive simulations.

Big data is always a double-edged sword in consulting projects. Without it, you don't know much. With it, your time is consumed by discovering all the flaws in the data, which persist because, most likely, no one has ever looked at it seriously from a strategic/dynamic perspective before. It's typically transactionally correct, because people verify that they get their orders and paychecks. But at an aggregate level, it's often rife with categorization mismatches across organizational boundaries and other pathologies.

Setting up Vensim compiled simulation on Windows

If you don’t use Vensim DSS, you’ll find this post rather boring and useless. If you do, prepare for heart-pounding acceleration of your big model runs:

  • Get Vensim DSS.
  • Get a C compiler. Most flavors of Microsoft compilers are compatible; MS Visual C++ 2010 Express is a good choice (and free). You could probably use gcc, but I’ve never set it up. I’ve heard reports of issues with 2005 and 2008 versions, so it may be worth your while to upgrade.
  • Install Vensim, if you haven’t already, being sure to check the Install external function and compiled simulation support box.
  • Launch the program and go to Tools>Options…>Startup and set the Compiled simulation path to C:\Documents and Settings\All Users\Vensim\comp32 (WinXP) or C:\Users\Public\Vensim\comp32 (Vista/7).
    • Check your mdl.bat in the location above to be sure that it points to the right compiler: all of the compiler options should be commented out with "REM " statements, except the one for the compiler you're actually using.
  • Move to the Advanced tab and set the compilation options to Query or Compile (you may want to skip this for normal Simulation, and just do it for Optimization and Sensitivity, where speed really counts).

This is well worth the hassle if you’re working with a large model in SyntheSim or doing a lot of simulations for sensitivity analysis and optimization. The speedup is typically 4-5x.

Elk, wolves and dynamic system visualization

Bret Victor's video of a slick iPad app for interactive visualization of the Lotka-Volterra equations has been making the rounds:

Coincidentally, this came to my notice around the same time that I got interested in the debate over wolf reintroduction here in Montana. Even simple models say interesting things about wolf-elk dynamics, which I’ll write about some other time (I need to get vaccinated for rabies first).

To ponder the implications of the video and predator-prey dynamics, I built a version of the Lotka-Volterra model in Vensim.
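
For reference, the core of the model is just two stocks and four flows; a minimal sketch in Vensim equation form (illustrative names, units omitted):

Prey = INTEG( prey births - predation , initial prey )
prey births = prey fertility * Prey
predation = predation rate * Prey * Predators
Predators = INTEG( predator births - predator deaths , initial predators )
predator births = predator efficiency * predation
predator deaths = predator mortality * Predators

The "sweep" is then just a matter of subscripting initial prey and initial predators and spreading them over a range of values, as described below.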

After a second look at the video, I still think it's excellent. Victor's two design principles, ubiquitous visualization and in-context manipulation, are powerful for communicating a model. Some aspects of what's shown have been in Vensim since the introduction of SyntheSim a few years ago, though with less Tufte/iPad sexiness. But other Vensim features, like Causal Tracing, are not so easily discovered – they're effective for pros, but not for new users. The way controls appear at one's fingertips in the iPad app is very elegant. The "sweep" mode is also clever, so I implemented a similar approach (randomized initial conditions across an array dimension) in my version of the model. My favorite trick, though, is the 2D control of initial conditions via the phase diagram, which makes discovery of the system's equilibrium easy.

The slickness of the video has led some to wonder whether existing SD tools are dinosaurs. From a design standpoint, I'd agree in some respects, but I think SD has also developed many practices – only partially embodied in tools – that address learning gaps that aren't directly tackled by the app in the video.

Vensim Compiled Simulation on the Mac

Speed freaks on Windows have long had access to 2 to 5x speed improvements from compiled simulations. Now that’s available on the Mac in the latest Vensim release.

Here’s how to do it, in three easy steps:

  • Get a Mac.
  • Get the gcc compiler. The only way I know to get this is to sign up as an Apple Developer (free) and download Xcode (I grabbed 3.2.2, which is much smaller than the 3.2.6+iOS SDK, but version shouldn’t matter much). There may be other ways, but this was easy.
  • Get Vensim DSS. After you install (checking the Install external function and compiled simulation support to: box), launch the program and go to Vensim DSS>Preferences…>Startup and set the Compiled simulation path to /Users/Shared/Vensim/comp. Now move to the advanced tab and set the compilation options to Query or Compile (you may want to skip this for normal Simulation, and just do it for Optimization and Sensitivity, where speed really counts).

OK, so I cheated a little on the step count, but it really is pretty easy. It’s worth it, too: I can run World3 1000 times in about 8 seconds interpreted; compiled gets that down to about 2.

Update: It turns out that an installer bug prevents 5.10d on the Mac from installing a needed file; you can get it here.

Monday tidbits – tools, courses

I neglected to cross-post an interesting new Vensim model documentation tool that’s in my model library.

Shameless commerce dept.: I’m teaching Vensim courses in Palo Alto in April and Bozeman in June. Following the June offering, Ventana’s Bill Arthur will be teaching “SMLOD” – Small Models with Lots of Data – a deep technical dive into the extraction of insight from large datasets.

Optimizing Vensim models

Danger – another technical post, mainly relevant to users of advanced Vensim versions.

The title has a double meaning: I’m talking about optimizing the speed of a model, which is most often needed for optimization problems.

Here's the challenge: you have a model, and you think you understand it. You'd like to calibrate it to data, do some policy optimization to identify good decision rules, and do some Monte Carlo simulation to identify sensitive parameters and robust policies. All of those things take thousands or even millions of model runs. Or, maybe you'd just like to make a slightly sluggish model fast enough to run interactively with SyntheSim.

Here’s what you can do:

  • Run compiled. Probably your best bet is to use MS Visual C++ 2010 Express, which is free, or an old copy of MSVC 6. Some of the versions in between apparently have problems. You may need to change Vensim's mdl.bat file to match the specific paths of your software. You might succeed with other free tools, like gcc, but I haven't tried – I'd be interested to hear of such adventures.
  • Use the INITIAL() statement as much as you can. In particular, be sure that any data-retrieval functions like GET DATA AT TIME are wrapped in an INITIAL() where possible (they're comparatively slow). See the sketch after this list.
  • Consider using data equations for input calculations that are not part of the feedback structure of the model. Data equations get executed once, at the start of a group of optimization/sensitivity/SyntheSim simulations, rather than with each iteration. However, don't use data equations where parameters you plan to change are involved, because then your changes won't propagate through the results. If you want to save startup time too, move complex data calculations into an offline data model.
  • Switch any sparse array summary calculations from SUM, VMIN, VMAX, and PROD functions to VECTOR SELECT or VECTOR ELM MAP.
  • Update: Calculate vector sums only once. For example, instead of writing share[k] = quantity[k]/SUM(quantity[k!]), calculate the sum separately, so that share[k] = quantity[k]/total quantity and  total quantity = SUM(quantity[k!]).
  • Get rid of any extraneous structure that’s not related to your payoff or other variables of interest. If you still want the information, move it to a separate model that reads the main model’s .vdf for postprocessing.
  • Consider whether your payoff is separable. In other words, if your model boils down to payoff = part1(parameter1) + part2(parameter2), you can optimize sequentially for parameter1 and parameter2, since they don't interact. Vensim's algorithm is designed for the worst case – parameters that interact, multiple optima, discontinuities, etc. You can automate a sequential set of optimizations with a command script (a sketch follows this list).
  • Consider transforming some of your optimization parameters. For example, if you are fitting to data for a stock with a first order outflow, that outflow can be written as stock/tau or stock*delta, where tau and delta are the lifetime and fractional loss rate, respectively. Mathematically, it makes no difference whether you use the tau or delta approach, but in some practical cases it might. For example, if you think delta might be near zero (a long lifetime), you might do better to optimize over delta = [0,1] than tau = [1,1e9].
  • If you’re looking for hardware improvements, clock speed matters more than multicores and cache.
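
To make a couple of those concrete, here are the equation patterns in Vensim form (variable names invented for illustration):

reference price = INITIAL( GET DATA AT TIME( price data , INITIAL TIME ) )
   { evaluated once at startup, rather than at every TIME STEP }
total quantity = SUM( quantity[k!] )
share[k] = quantity[k] / total quantity
   { the SUM is computed once per time step, not once per element of k }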
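
A sequential optimization can be scripted along these lines (file names are placeholders; each .voc contains only the parameters for that stage):

SPECIAL>LOADMODEL|mymodel.mdl
SIMULATE>RUNNAME|stage1
SIMULATE>PAYOFF|stage1.vpd
SIMULATE>OPTPARM|stage1.voc
MENU>RUN_OPTIMIZE|o
SIMULATE>RUNNAME|stage2
SIMULATE>PAYOFF|stage2.vpd
SIMULATE>OPTPARM|stage2.voc
MENU>RUN_OPTIMIZE|o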

These options are in the order that I thought of them, which means that they’re very roughly in order of likely improvement per unit effort.

Unfortunately, it’s often the case that all of this will get you a 10x improvement, and you need 1000x. Unless you have a supercomputer or massive parallel grid at your disposal, the only real remedy is to simplify your model. Fortunately, that’s not necessarily a bad thing.

Update: One more thing to try: if you’re doing single simulations with a large model, the bottleneck may be the long disk write of the output .vdf file. In that case, you can use a savelist (.lst) to restrict the number of variables stored to just the output you’re interested in. However, you should occasionally do a run without a savelist, and browse through the model results to be sure that things are OK. Consider writing some Reality Checks to enforce quality control on things you won’t be looking at.
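
A savelist is just a text file of variable names, one per line, so an interesting.lst (hypothetical names) might contain:

Payoff
Market Share
Cumulative Profit

You can point a command script at it with something like SIMULATE>SAVELIST|interesting.lst.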

Update 2: Another suggestion, via Hazhir Rahmandad: when using the ALLOC functions (DEMAND AT PRICE, etc.), choose simple demand/supply function shapes, like triangular, rather than complex shapes like exponential. The allocation functions iterate to equilibrium internally, and this can be time consuming. Avoiding the use of FIND ZERO and SIMULTANEOUS where possible is also helpful – often a lookup or polynomial approximation to the solution of simultaneous equations will suffice.

Subversive modeling

This is a technical post, so Julian Assange fans can tune out. I’m actually writing about source code management for Vensim models.

Occasionally procurements try to treat model development like software development. That can end in tragedy, because modeling isn’t the same as coding. However, there are many common attributes, and therefore software tools can be useful for modeling.

One typical challenge in model development is version control. Model development is iterative, and I typically go through fifty or more named model versions in the course of a project. C-ROADS is at v142 of its second life. It takes discipline to keep track of all those model iterations, especially if you’d like to be able to document changes along the way and recover old versions. Having a distributed team adds to the challenge.

The old school way
