# Getting Started
This guide shows you exactly what to implement to use SimOptDecisions.
## Checklist: What You Need to Implement

### Types (5 required)
| Type | Purpose | Subtype of |
|---|---|---|
| Config | Fixed parameters (shared across scenarios) | AbstractConfig |
| SOW | Uncertain parameters (one possible future) | AbstractSOW |
| State | Your model’s internal state | AbstractState |
| Action | What gets decided at each timestep | AbstractAction |
| Policy | Decision rule with tunable parameters | AbstractPolicy |
### Callbacks (5 required)
| Callback | Signature | Returns |
|---|---|---|
| `initialize` | `(config, sow, rng)` | `State` |
| `get_action` | `(policy, state, sow, t)` | `Action` |
| `run_timestep` | `(state, action, sow, config, t, rng)` | `(new_state, step_record)` |
| `time_axis` | `(config, sow)` | Iterable with `length()` |
| `finalize` | `(final_state, step_records, config, sow)` | `Outcome` |
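To see how these five callbacks fit together, here is a rough sketch of the loop that `simulate` runs. This is illustrative pseudocode under the assumption that the framework follows the table above; it is not the package's actual source:

```julia
using Random

# Illustrative sketch only -- not SimOptDecisions' real implementation.
function simulate_sketch(config, sow, policy; rng=Random.default_rng())
    state = initialize(config, sow, rng)          # starting state
    records = Any[]
    for t in time_axis(config, sow)               # iterate the time axis
        action = get_action(policy, state, sow, t)
        state, rec = run_timestep(state, action, sow, config, t, rng)
        push!(records, rec)                       # collect per-step records
    end
    return finalize(state, records, config, sow)  # summarize into an Outcome
end
```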
### For Optimization (2 additional)
| Method | Signature | Returns |
|---|---|---|
| `params` | `(policy)` | Vector of parameter values |
| `param_bounds` | `(PolicyType)` | Vector of `(min, max)` tuples |
Plus a constructor: `MyPolicy(params::AbstractVector)`.
## Minimal Working Example
Here’s a complete example you can copy and modify:
```julia
using SimOptDecisions
using Random

# =============================================================================
# TYPES
# =============================================================================

struct MyConfig <: AbstractConfig
    horizon::Int
end

struct MySOW{T<:AbstractFloat} <: AbstractSOW
    growth_rate::T
end

struct MyState{T<:AbstractFloat} <: AbstractState
    value::T
end

struct MyAction <: AbstractAction end

struct MyPolicy <: AbstractPolicy end

# =============================================================================
# CALLBACKS
# =============================================================================

function SimOptDecisions.initialize(config::MyConfig, sow::MySOW, rng::AbstractRNG)
    return MyState(1.0)
end

function SimOptDecisions.get_action(policy::MyPolicy, state::MyState, sow::MySOW, t::TimeStep)
    return MyAction()
end

function SimOptDecisions.run_timestep(
    state::MyState,
    action::MyAction,
    sow::MySOW,
    config::MyConfig,
    t::TimeStep,
    rng::AbstractRNG,
)
    new_value = state.value * (1 + sow.growth_rate)
    step_record = (value=state.value,)
    return (MyState(new_value), step_record)
end

function SimOptDecisions.time_axis(config::MyConfig, sow::MySOW)
    return 1:(config.horizon)
end

function SimOptDecisions.finalize(
    final_state::MyState,
    step_records::Vector,
    config::MyConfig,
    sow::MySOW,
)
    return (final_value=final_state.value,)
end

# =============================================================================
# RUN
# =============================================================================

config = MyConfig(10)
sow = MySOW(0.05)
policy = MyPolicy()
result = simulate(config, sow, policy)
println("Final value: ", result.final_value)
```

## Understanding Each Piece
### Types
`Config` holds parameters that are fixed across all scenarios:

```julia
struct MyConfig <: AbstractConfig
    horizon::Int  # how many timesteps to simulate
end
```
`SOW` (State of the World) holds uncertain parameters. Each SOW represents one possible future:

```julia
struct MySOW{T<:AbstractFloat} <: AbstractSOW
    growth_rate::T  # uncertain: could be 3%, 5%, 7%...
end
```
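In a study you would typically sample an ensemble of SOWs. One hypothetical sampling scheme, using a seeded RNG for reproducibility (the optimization section below samples similarly):

```julia
using Random

rng = MersenneTwister(42)                        # seed for reproducibility
sows = [MySOW(0.1 * rand(rng)) for _ in 1:100]   # growth rates uniform on [0, 0.1)
```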
`State` tracks your model’s internal state that evolves over time:

```julia
struct MyState{T<:AbstractFloat} <: AbstractState
    value::T  # current value being tracked
end
```

`Action` represents what gets decided at each timestep:

```julia
struct MyAction <: AbstractAction end  # can be empty for simple models
```

`Policy` defines how decisions are made. For optimization, include tunable parameters:

```julia
struct MyPolicy <: AbstractPolicy end  # no parameters in this simple example
```
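For contrast, a policy with a tunable parameter might look like the hypothetical sketch below (the Adding Optimization section shows how such a parameter gets exposed to the optimizer):

```julia
# Hypothetical example, not part of the minimal model above.
struct ThresholdPolicy{T<:AbstractFloat} <: AbstractPolicy
    threshold::T  # tunable: act once the tracked value crosses this level
end
```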
### Callbacks

`initialize`: Create the starting state.

```julia
SimOptDecisions.initialize(config::MyConfig, sow::MySOW, rng::AbstractRNG) = MyState(1.0)
```

`get_action`: Given the current state, decide what to do.

```julia
SimOptDecisions.get_action(policy::MyPolicy, state::MyState, sow::MySOW, t::TimeStep) = MyAction()
```
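With a parameterized policy like the hypothetical `ThresholdPolicy` sketched earlier, `get_action` is where the parameter drives the decision. For example, with a hypothetical action type that carries the choice:

```julia
# Hypothetical pairing with ThresholdPolicy: the action records a decision
# that depends on both the current state and the policy parameter.
struct InvestAction <: AbstractAction
    invest::Bool
end

function SimOptDecisions.get_action(policy::ThresholdPolicy, state::MyState, sow::MySOW, t::TimeStep)
    return InvestAction(state.value > policy.threshold)
end
```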
`run_timestep`: Apply the action and advance the model. Returns `(new_state, step_record)`.

```julia
function SimOptDecisions.run_timestep(state::MyState, action::MyAction, sow::MySOW, config::MyConfig, t::TimeStep, rng::AbstractRNG)
    new_value = state.value * (1 + sow.growth_rate)
    return (MyState(new_value), (value=state.value,))
end
```
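The `rng` argument exists so that stochastic dynamics stay reproducible. A hypothetical variant that perturbs the growth rate with noise (the noise scale is made up for illustration):

```julia
# Hypothetical stochastic variant (would replace the deterministic method above).
function SimOptDecisions.run_timestep(state::MyState, action::MyAction, sow::MySOW,
                                      config::MyConfig, t::TimeStep, rng::AbstractRNG)
    noisy_growth = sow.growth_rate + 0.01 * randn(rng)  # hypothetical noise scale
    return (MyState(state.value * (1 + noisy_growth)), (value=state.value,))
end
```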
`time_axis`: Define when timesteps occur.

```julia
SimOptDecisions.time_axis(config::MyConfig, sow::MySOW) = 1:config.horizon
```
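Per the checklist, any iterable with a `length()` works here, so the axis need not start at 1. Assuming the framework accepts an arbitrary range, a hypothetical calendar-year axis could be:

```julia
# Hypothetical: simulate calendar years starting in 2025.
SimOptDecisions.time_axis(config::MyConfig, sow::MySOW) = 2025:(2024 + config.horizon)
```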
`finalize`: Summarize the simulation into an outcome.

```julia
function SimOptDecisions.finalize(final_state::MyState, step_records::Vector, config::MyConfig, sow::MySOW)
    return (final_value=final_state.value,)
end
```
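`finalize` is also the natural place to aggregate the per-step records. A hypothetical variant that additionally reports the mean recorded value (`mean` comes from the `Statistics` standard library):

```julia
using Statistics: mean

# Hypothetical variant of the finalize method above.
function SimOptDecisions.finalize(final_state::MyState, step_records::Vector,
                                  config::MyConfig, sow::MySOW)
    return (final_value=final_state.value,
            mean_value=mean(r.value for r in step_records))
end
```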
## Adding Optimization

To optimize policy parameters, add these methods:
```julia
# Policy with tunable parameters
struct OptimizablePolicy{T<:AbstractFloat} <: AbstractPolicy
    threshold::T
end

# Constructor from parameter vector
OptimizablePolicy(params::AbstractVector) = OptimizablePolicy(params[1])

# Extract parameters
SimOptDecisions.params(p::OptimizablePolicy) = [p.threshold]

# Define bounds
SimOptDecisions.param_bounds(::Type{<:OptimizablePolicy}) = [(0.0, 10.0)]
```
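A quick sanity check worth running for any new policy type: the constructor and `params` should round-trip, and `param_bounds` should return one `(min, max)` tuple per parameter:

```julia
p = OptimizablePolicy([2.5])
@assert SimOptDecisions.params(p) == [2.5]                            # round-trip
@assert length(SimOptDecisions.param_bounds(OptimizablePolicy)) == 1  # one bound per parameter
```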
Then set up and run optimization:

```julia
using Metaheuristics
using Statistics: mean

# Aggregate outcomes across SOWs into metrics
function calculate_metrics(outcomes)
    values = [o.final_value for o in outcomes]
    return (expected_value=mean(values), worst_case=minimum(values))
end

# Sample many possible futures
sows = [MySOW(rand() * 0.1) for _ in 1:100]

# Set up the problem
prob = OptimizationProblem(
    config,
    sows,
    OptimizablePolicy,
    calculate_metrics,
    [maximize(:expected_value)],
)

# Run optimization
result = SimOptDecisions.optimize(prob, MetaheuristicsBackend())

# Get the best policy
best_params = result.pareto_params[1]
best_policy = OptimizablePolicy(best_params)
```
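To see how the optimized policy performs, you can run it back through `simulate` (the same call used in the minimal example) across the SOW ensemble, continuing from the session above:

```julia
outcomes = [simulate(config, sow, best_policy) for sow in sows]
println("Expected final value: ", mean(o.final_value for o in outcomes))
```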
## Next Steps

→ Tutorial — Learn SimOptDecisions through a complete worked example (house elevation under flood risk)