Vehicle Parameter Sensitivities Using a Single-Track Model on a Synthetic Track¶
This notebook is an educational engineering-style walkthrough of a local sensitivity study in ApexSim.
We use a synthetic circular track and the single-track vehicle model to answer a practical question: which vehicle parameters matter most for lap time and energy in this specific setup?
Why start with a synthetic track?¶
A synthetic track is intentionally simple. It removes many confounding effects and makes it easier to interpret cause and effect in the model.
In this study, we vary four parameters around a baseline operating point:
- Vehicle mass
- Center-of-gravity height
- Yaw inertia
- Drag coefficient
and evaluate two outputs:
- Lap time (lap_time_s)
- Energy consumption (energy_kwh)
Method: local sensitivities around a baseline¶
ApexSim computes local derivatives
$$ S_i = \frac{\partial y}{\partial p_i} $$
for each selected parameter $p_i$ and objective $y$.
To make this easier to interpret in engineering terms, we also use the first-order estimate for a +10% parameter change:
$$ \Delta y_{+10\%} \approx S_i \cdot (0.10 \cdot p_{i,0}) $$
This gives absolute output deltas in physical units: seconds for lap time and kilowatt-hours for energy (converted to watt-hours in the plots below).
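For intuition, with purely illustrative numbers (not computed results): if the lap-time sensitivity to mass were $S_\text{mass} = 4 \times 10^{-3}\ \mathrm{s/kg}$ at a baseline mass of $800\ \mathrm{kg}$, the first-order estimate for a +10% change would be
$$ \Delta y_{+10\%} \approx 4 \times 10^{-3} \cdot (0.10 \cdot 800) = 0.32\ \mathrm{s} $$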
Modeling assumptions to keep in mind¶
This example uses the quasi-static speed-profile solver with the torch backend (autodiff by default).
That means the results are local sensitivities of the active quasi-static model path. Parameters that mainly influence transient state dynamics may appear near zero in this workflow.
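A minimal toy sketch (plain PyTorch, not ApexSim code) of why such parameters can come out exactly zero under autodiff: if a parameter does not enter the active computation graph of the objective, its gradient is structurally zero.
import torch

# Toy objective in the spirit of a quasi-static lap-time model:
# mass enters the active path, yaw inertia does not.
mass = torch.tensor(800.0, requires_grad=True)
yaw_inertia = torch.tensor(1200.0, requires_grad=True)

lap_time = 60.0 + 0.004 * mass + 0.0 * yaw_inertia  # yaw inertia is inactive here
lap_time.backward()

print(mass.grad)         # tensor(0.0040)
print(yaw_inertia.grad)  # tensor(0.) -- structurally zero in this path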
from __future__ import annotations
from pathlib import Path
import sys
import matplotlib.pyplot as plt
import pandas as pd
def find_repo_root(start: Path) -> Path:
    """Walk upward from ``start`` until a directory containing both
    ``pyproject.toml`` and a ``src`` folder is found."""
    for candidate in (start, *start.parents):
        if (candidate / "pyproject.toml").exists() and (candidate / "src").exists():
            return candidate
    raise RuntimeError("Could not locate repository root from current working directory")

# Make the package sources and the shared example helpers importable,
# regardless of where the notebook was launched from.
repo_root = find_repo_root(Path.cwd())
if str(repo_root) not in sys.path:
    sys.path.insert(0, str(repo_root))
examples_sensitivity = repo_root / "examples" / "sensitivity"
if str(examples_sensitivity) not in sys.path:
    sys.path.insert(0, str(examples_sensitivity))
from apexsim.analysis import (
    SensitivityRuntime,
    SensitivityStudyParameter,
    run_lap_sensitivity_study,
)
from apexsim.simulation import build_simulation_config
from apexsim.tire import default_axle_tire_parameters
from apexsim.track import build_circular_track
from apexsim.vehicle import SingleTrackPhysics, build_single_track_model
from common import example_vehicle_parameters, sensitivity_output_root
pd.set_option("display.max_columns", 40)
pd.set_option("display.width", 180)
Step 1: define track, model, solver, and parameter set¶
We build one baseline configuration and then request sensitivities for the four parameters via dot-path targets in the study API.
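The resolution mechanism for these targets is internal to ApexSim; conceptually, a dot-path like vehicle.mass can be read as a chained attribute lookup, roughly like this hypothetical helper (an illustration, not ApexSim's implementation):
from functools import reduce

def resolve_dot_path(root: object, path: str) -> object:
    # Hypothetical sketch: "vehicle.mass" -> getattr(getattr(root, "vehicle"), "mass")
    return reduce(getattr, path.split("."), root)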
variation_pct = 10.0
track = build_circular_track(radius=50.0, sample_count=721)
vehicle = example_vehicle_parameters()
tires = default_axle_tire_parameters()
physics = SingleTrackPhysics()
simulation_config = build_simulation_config(
    compute_backend="torch",
    torch_device="cpu",
    torch_compile=False,
    max_speed=115.0,
)
parameter_definitions = [
    SensitivityStudyParameter(name="mass", target="vehicle.mass", label="Vehicle mass"),
    SensitivityStudyParameter(name="cg_height", target="vehicle.cg_height", label="Center of gravity height"),
    SensitivityStudyParameter(name="yaw_inertia", target="vehicle.yaw_inertia", label="Yaw inertia"),
    SensitivityStudyParameter(name="drag_coefficient", target="vehicle.drag_coefficient", label="Drag coefficient"),
]
model = build_single_track_model(
    vehicle=vehicle,
    tires=tires,
    physics=physics,
)
pd.DataFrame(
    {
        "parameter": [p.name for p in parameter_definitions],
        "target": [p.target for p in parameter_definitions],
        "variation_used": [f"+/-{variation_pct:.0f}%"] * len(parameter_definitions),
    }
)
Step 2: run the study and inspect numeric results¶
The long-format table reports objective values, raw sensitivities, relative elasticities, and predicted KPI values for +/-10% parameter variation.
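Here "relative elasticity" (the sensitivity_pct_per_pct column) is assumed to follow the standard normalized-sensitivity definition, i.e. percent change in the objective per percent change in the parameter:
$$ E_i = \frac{\partial y}{\partial p_i} \cdot \frac{p_{i,0}}{y_0} $$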
study_result = run_lap_sensitivity_study(
    track=track,
    model=model,
    simulation_config=simulation_config,
    parameters=parameter_definitions,
    label="Synthetic circle (R=50 m)",
)
long_df = study_result.to_dataframe().sort_values(["objective", "parameter"], kind="stable")
long_df["absolute_delta_plus"] = long_df["predicted_objective_plus"] - long_df["objective_value"]
long_df["absolute_delta_minus"] = long_df["predicted_objective_minus"] - long_df["objective_value"]
long_df[[
    "objective",
    "parameter_label",
    "objective_value",
    "sensitivity_raw",
    "sensitivity_pct_per_pct",
    "absolute_delta_plus",
    "absolute_delta_minus",
]]
Step 3: plot absolute KPI deltas for +10%¶
To align with engineering decision-making, we plot absolute KPI changes rather than normalized factors. Because the energy objective is reported in kWh, the energy deltas are scaled to Wh for a more readable axis.
output_dir = sensitivity_output_root() / "synthetic_circle_single_track"
output_dir.mkdir(parents=True, exist_ok=True)
long_df.to_csv(output_dir / "sensitivities_long.csv", index=False)
study_result.to_pivot().sort_index(kind="stable").to_csv(output_dir / "sensitivities_pivot.csv")
plot_df = long_df[["parameter_label", "objective", "absolute_delta_plus"]].pivot(
    index="parameter_label",
    columns="objective",
    values="absolute_delta_plus",
)
fig, axes = plt.subplots(1, 2, figsize=(12.0, 4.5), constrained_layout=True)
plot_df["lap_time_s"].plot(
kind="bar",
ax=axes[0],
color="#1565c0",
title=f"Lap-time delta for +{variation_pct:.0f}% parameter variation",
)
(plot_df["energy_kwh"] * 1000.0).plot(
kind="bar",
ax=axes[1],
color="#2e7d32",
title=f"Energy delta for +{variation_pct:.0f}% parameter variation",
)
axes[0].set_ylabel("Delta lap time [s]")
axes[1].set_ylabel("Delta energy [Wh]")
for axis in axes:
    axis.tick_params(axis="x", rotation=20)
    axis.grid(alpha=0.25, axis="y")
plot_path = output_dir / "sensitivity_bars.png"
fig.savefig(plot_path, dpi=160)
plt.close(fig)
print(f"Artifacts written to: {output_dir}")
plot_df
Consistency check: AD vs finite differences for near-zero terms¶
If a sensitivity is close to zero, it is good practice to verify that this is not an autodiff artifact. We therefore compare AD and finite differences for cg_height and yaw_inertia.
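Assuming a standard central-difference scheme with step $h$ around the baseline (the exact scheme and step size are runtime details), the check approximates
$$ S_i \approx \frac{y(p_{i,0} + h) - y(p_{i,0} - h)}{2h} $$
which should agree with the autodiff derivative up to truncation error and solver noise for a smooth objective.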
zero_check_parameters = [
    SensitivityStudyParameter(name="cg_height", target="vehicle.cg_height", label="Center of gravity height"),
    SensitivityStudyParameter(name="yaw_inertia", target="vehicle.yaw_inertia", label="Yaw inertia"),
]
fd_check = run_lap_sensitivity_study(
    track=track,
    model=model,
    simulation_config=simulation_config,
    parameters=zero_check_parameters,
    runtime=SensitivityRuntime(method="finite_difference"),
)
comparison_rows = []
for objective in ("lap_time_s", "energy_kwh"):
    for name in ("cg_height", "yaw_inertia"):
        comparison_rows.append(
            {
                "objective": objective,
                "parameter": name,
                "autodiff": study_result.sensitivity_results[objective].sensitivities[name],
                "finite_difference": fd_check.sensitivity_results[objective].sensitivities[name],
            }
        )
pd.DataFrame(comparison_rows)
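For near-zero terms, relative AD/FD deviations are ill-conditioned, so an absolute deviation is the more meaningful agreement metric. An optional extension of the table above (illustrative):
# Optional: absolute AD/FD deviation -- the robust comparison when the
# sensitivities themselves are close to zero.
comparison_df = pd.DataFrame(comparison_rows)
comparison_df["abs_deviation"] = (
    comparison_df["autodiff"] - comparison_df["finite_difference"]
).abs()
comparison_df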
Engineering interpretation¶
This synthetic-track result should be read as a local model-path sensitivity statement:
- Mass and drag are active drivers of both lap time and energy in this setup.
- The sensitivities to CoG height and yaw inertia are near zero here, and both AD and FD confirm this.
- This does not mean these parameters are universally irrelevant; it means their effect is not dominant in the current quasi-static objective path.
That is exactly the value of this workflow: clear, model-consistent attribution of KPI sensitivity at a defined operating point.