# Quick Reference
This guide shows you exactly what to implement to use SimOptDecisions.
## Workflow Overview
SimOptDecisions supports two main workflows:
- Exploratory modeling with `explore()`: Run all (policy, scenario) combinations and analyze the result matrix to understand where policies succeed or fail
- Policy search with `optimize()`: Use multi-objective optimization to find Pareto-optimal policies, then stress-test them with `explore()`
Both build on `simulate()`, which runs a single (policy, scenario) pair through user-defined callbacks. See Framework Architecture for why the framework is structured this way.
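In sketch form, the three entry points look like this (the signatures mirror the worked examples later in this guide):

```julia
outcome = simulate(config, scenario, policy)       # one (policy, scenario) pair
results = explore(config, scenarios, policies)     # full cross product, labeled arrays
pareto  = optimize(config, scenarios, MyPolicy, calculate_metrics,
                   [maximize(:expected_value)]; backend = MetaheuristicsBackend())
```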
## Checklist: What You Implement
### Types (5 required)
| Type | Purpose | Subtype of | Typical definition |
|---|---|---|---|
| Config | Fixed parameters (shared across scenarios) | `AbstractConfig` | Plain struct |
| Scenario | Uncertain parameters (one possible future) | `AbstractScenario` | `@scenariodef` |
| State | Your model's internal state | `AbstractState` | Plain struct |
| Action | What gets decided at each timestep | `AbstractAction` (optional) | Plain struct |
| Policy | Decision rule with tunable parameters | `AbstractPolicy` | `@policydef` |
Use `@scenariodef` and `@policydef` for types whose fields are explored as dimensions by `explore()`. Config, State, and Action typically have plain fields and are defined as regular Julia structs. The macros `@configdef` and `@statedef` are available if you need parameter wrappers on those types too.
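For example, contrasting the two styles (these are the same types used in the minimal example below):

```julia
struct MyConfig <: AbstractConfig   # plain struct: fixed, never explored
    horizon::Int
end

@scenariodef MyScenario begin       # macro type: fields become explorable dimensions
    @continuous growth_rate
end
```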
### Callbacks (5 required)
| Callback | Signature | Returns |
|---|---|---|
| `initialize` | `(config, scenario, rng)` | State |
| `get_action` | `(policy, state, t, scenario)` | Action (any type) |
| `run_timestep` | `(state, action, t, config, scenario, rng)` | `(new_state, step_record)` |
| `time_axis` | `(config, scenario)` | Iterable with `length()` |
| `compute_outcome` | `(step_records, config, scenario)` | Outcome |
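Conceptually, `simulate()` wires these callbacks together roughly as sketched below (illustrative only, not the framework's actual implementation):

```julia
state = initialize(config, scenario, rng)
records = []
for t in timeindex(time_axis(config, scenario))            # see Utility Functions
    action = get_action(policy, state, t, scenario)
    state, record = run_timestep(state, action, t, config, scenario, rng)
    push!(records, record)
end
outcome = compute_outcome(records, config, scenario)
```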
### For `explore()` (parameter wrappers required)
Scenario, Policy, and Outcome fields must use parameter types. Use `@scenariodef`, `@policydef`, and `@outcomedef` to define these types; the macros auto-wrap plain values.
| Macro | Produces subtype of | Use |
|---|---|---|
| `@scenariodef` | `AbstractScenario` | Uncertain parameters |
| `@policydef` | `AbstractPolicy` | Decision parameters (with optional bounds) |
| `@outcomedef` | `AbstractOutcome` | Simulation results for `explore()` |
Field macros within these definitions:
| Field macro | Wraps as | Example |
|---|---|---|
| `@continuous` | `ContinuousParameter{T}` | `@continuous growth_rate` or `@continuous x 0.0 1.0` |
| `@discrete` | `DiscreteParameter{Int}` | `@discrete count` |
| `@categorical` | `CategoricalParameter{Symbol}` | `@categorical climate [:low, :high]` |
| `@timeseries` | `TimeSeriesParameter{T,Int}` | `@timeseries water_levels` |
| `@generic` | `GenericParameter{Any}` | `@generic metadata` (skipped in flattening) |
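Constructors accept plain values and auto-wrap them; `value()` (see Utility Functions) unwraps. A small sketch using a hypothetical scenario type:

```julia
@scenariodef ClimateScenario begin         # hypothetical example type
    @continuous growth_rate                # wrapped as ContinuousParameter
    @categorical climate [:low, :high]     # wrapped as CategoricalParameter
end

s = ClimateScenario(growth_rate = 0.05, climate = :low)
value(s.growth_rate)   # 0.05
```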
### For Optimization
When all `@policydef` fields are bounded `@continuous`, everything is auto-derived:
- `params(policy)`: extracts current parameter values
- `param_bounds(::Type)`: returns bounds from field definitions
- `MyPolicy(x::AbstractVector)`: vector constructor for the optimizer
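For instance, with the `MyPolicy` type from the minimal example below (the return shapes in the comments are plausible but assumed):

```julia
p = MyPolicy(threshold = 5.0)
params(p)               # current parameter values, e.g. a vector [5.0]
param_bounds(MyPolicy)  # bounds read from the field definitions
MyPolicy([5.0])         # rebuild a policy from an optimizer's vector
```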
If some fields are not bounded `@continuous`, you must define `param_bounds(::Type)` and the vector constructor manually.
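A sketch of the manual case follows. `MixedPolicy` is hypothetical, and the bounds format shown (a vector of `(lo, hi)` tuples) is an assumption; check the API reference for the exact contract.

```julia
# Hypothetical policy mixing a bounded continuous field with a discrete one:
@policydef MixedPolicy begin
    @continuous threshold 0.0 10.0
    @discrete n_steps
end

# Assumed bounds format: one (lo, hi) pair per optimizer dimension.
SimOptDecisions.param_bounds(::Type{MixedPolicy}) = [(0.0, 10.0), (1.0, 20.0)]

# Vector constructor: map the optimizer's real-valued vector back to fields.
MixedPolicy(x::AbstractVector) = MixedPolicy(threshold = x[1], n_steps = round(Int, x[2]))
```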
## Executors
| Executor | Parallelism | Traced exploration? |
|---|---|---|
| `SequentialExecutor(; crn=true, seed=1234)` | None | Yes |
| `ThreadedExecutor(; crn=true, seed=1234)` | `Threads.@threads` | Yes |
| `DistributedExecutor(; crn=true, seed=1234)` | `asyncmap` across workers | No |
All executors support Common Random Numbers (CRN, enabled by default): each scenario index gets a deterministic RNG seed, so the same scenario produces the same random stream across all policies. `ThreadedExecutor` also accepts `ntasks` (defaults to `Threads.nthreads()`).
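For example (a sketch; the `executor` keyword for `explore()` is an assumption, so check the actual signature):

```julia
exec = ThreadedExecutor(; crn = true, seed = 1234, ntasks = Threads.nthreads())
result = explore(config, scenarios, policies; executor = exec)  # `executor` kwarg assumed
```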
## Utility Functions
| Function | Signature | Returns |
|---|---|---|
| `is_first` | `(t::TimeStep)` | `true` if `t` is the first timestep |
| `is_last` | `(t::TimeStep, times)` | `true` if `t` is the last timestep |
| `discount_factor` | `(rate, t)` | `1 / (1 + rate)^t` |
| `timeindex` | `(times)` | Iterator of `TimeStep(i, v)` pairs |
| `value` | `(param)` | Unwrap a parameter type to its raw value |
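A sketch of how these compose; the `TimeStep` field names used below (`i` for the index, `v` for the value) are inferred from the `timeindex` row and are an assumption.

```julia
times = 2020:2030
for t in timeindex(times)
    is_first(t) && println("first step: ", t.v)   # assumes a value field `v`
    w = discount_factor(0.03, t.i)                # 1 / 1.03^i, assumes an index field `i`
    is_last(t, times) && println("last step, discount weight ", w)
end
```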
## Minimal Working Example
Here’s a complete example you can copy and modify:
```julia
using SimOptDecisions
using Random
# =============================================================================
# TYPES (using macros for Scenario/Policy)
# =============================================================================
struct MyConfig <: AbstractConfig
horizon::Int
end
@scenariodef MyScenario begin
@continuous growth_rate
end
struct MyState{T<:AbstractFloat} <: AbstractState
value::T
end
struct MyAction <: AbstractAction end
@policydef MyPolicy begin
@continuous threshold 0.0 10.0
end
# =============================================================================
# CALLBACKS
# =============================================================================
function SimOptDecisions.initialize(config::MyConfig, scenario::MyScenario, rng::AbstractRNG)
return MyState(1.0)
end
function SimOptDecisions.get_action(policy::MyPolicy, state::MyState, t::TimeStep, scenario::MyScenario)
return MyAction()
end
function SimOptDecisions.run_timestep(
state::MyState,
action::MyAction,
t::TimeStep,
config::MyConfig,
scenario::MyScenario,
rng::AbstractRNG,
)
new_value = state.value * (1 + value(scenario.growth_rate))
step_record = (value=state.value,)
return (MyState(new_value), step_record)
end
function SimOptDecisions.time_axis(config::MyConfig, scenario::MyScenario)
return 1:(config.horizon)
end
function SimOptDecisions.compute_outcome(
step_records::Vector,
config::MyConfig,
scenario::MyScenario,
)
return (final_value=step_records[end].value,)
end
# =============================================================================
# RUN
# =============================================================================
config = MyConfig(10)
scenario = MyScenario(growth_rate = 0.05) # auto-wrapped by @scenariodef
policy = MyPolicy(threshold = 5.0) # auto-wrapped by @policydef
result = simulate(config, scenario, policy)
println("Final value: ", result.final_value)explore() with this example
To use `explore()`, define an outcome type with `@outcomedef` so results can be assembled into labeled arrays:

```julia
@outcomedef MyOutcome begin
@continuous final_value
end
# Update compute_outcome to return MyOutcome instead of a plain tuple:
function SimOptDecisions.compute_outcome(step_records::Vector, config::MyConfig, scenario::MyScenario)
return MyOutcome(final_value=step_records[end].value)
end
scenarios = [MyScenario(growth_rate=rand() * 0.1) for _ in 1:100]
policies = [MyPolicy(threshold=t) for t in 1.0:2.0:10.0]
result = explore(config, scenarios, policies)
result[:final_value]  # YAXArray with dims (policy, scenario)
```

## Understanding Each Piece
### Types
**Config** holds parameters that are fixed across all scenarios:

```julia
struct MyConfig <: AbstractConfig
horizon::Int # how many timesteps to simulate
end
```

**Scenario** holds uncertain parameters. Use `@scenariodef` with `@continuous`, `@discrete`, etc.:

```julia
@scenariodef MyScenario begin
@continuous growth_rate # uncertain: could be 3%, 5%, 7%...
end
```

**State** tracks your model's internal state that evolves over time:

```julia
struct MyState{T<:AbstractFloat} <: AbstractState
value::T # current value being tracked
end
```

**Action** represents what gets decided at each timestep. Can be any type:

```julia
struct MyAction <: AbstractAction end
```

**Policy** defines how decisions are made. Use `@policydef` with bounds for optimization:

```julia
@policydef MyPolicy begin
@continuous threshold 0.0 10.0 # tunable parameter with bounds
end
```

### Callbacks
`initialize`: Create the starting state.

```julia
SimOptDecisions.initialize(config::MyConfig, scenario::MyScenario, rng::AbstractRNG) = MyState(1.0)
```

`get_action`: Given the current state, decide what to do.

```julia
SimOptDecisions.get_action(policy::MyPolicy, state::MyState, t::TimeStep, scenario::MyScenario) = MyAction()
```

`run_timestep`: Apply the action and advance the model. Returns `(new_state, step_record)`.

```julia
function SimOptDecisions.run_timestep(state::MyState, action::MyAction, t::TimeStep, config::MyConfig, scenario::MyScenario, rng::AbstractRNG)
new_value = state.value * (1 + value(scenario.growth_rate))
return (MyState(new_value), (value=state.value,))
end
```

`time_axis`: Define when timesteps occur.

```julia
SimOptDecisions.time_axis(config::MyConfig, scenario::MyScenario) = 1:config.horizon
```

`compute_outcome`: Summarize the simulation into an outcome. For `simulate()` and `optimize()`, plain tuples work. For `explore()`, use `@outcomedef` to wrap fields in parameter types.

```julia
# Plain tuple (works with simulate and optimize)
function SimOptDecisions.compute_outcome(step_records::Vector, config::MyConfig, scenario::MyScenario)
return (final_value=step_records[end].value,)
end
```

## Adding Optimization
When all fields are bounded `@continuous`, `@policydef` auto-derives everything needed for optimization: `params()`, `param_bounds(::Type)`, and the vector constructor.

```julia
@policydef OptimizablePolicy begin
@continuous threshold 0.0 10.0
end
# That's it — no manual method definitions needed
```

Then set up and run optimization:

```julia
using Metaheuristics
using Statistics
# Aggregate outcomes across scenarios into metrics
function calculate_metrics(outcomes)
values = [o.final_value for o in outcomes]
return (expected_value=mean(values), worst_case=minimum(values))
end
# Sample many possible futures
scenarios = [MyScenario(growth_rate=rand() * 0.1) for _ in 1:100]
# Run optimization (flat API)
result = optimize(
config, scenarios, OptimizablePolicy, calculate_metrics,
[maximize(:expected_value)];
backend=MetaheuristicsBackend()
)
# Get the best policy
best_params = result.pareto_params[1]
best_policy = OptimizablePolicy(best_params)
```
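To close the loop described in the Workflow Overview, stress-test the optimized policy across the scenario ensemble (a sketch; it assumes the callbacks and the `@outcomedef` outcome above are also defined for `OptimizablePolicy`):

```julia
stress = explore(config, scenarios, [best_policy])  # how the chosen policy fares in every scenario
```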
## Next Steps

→ Tutorial — Learn SimOptDecisions through a complete worked example (house elevation under flood risk)