Developer/operator lane only. Regular users should run onboarding, select profile 1-5, pull approved model revisions from R2, and run trading.
The optimization loop systematically improves model performance without manual intervention. It runs SHAP analysis, Optuna tuning, retraining, and OOS validation in sequence.
## Full loop

```bash
# One iteration
python scripts/optimize_loop.py --local

# Safe mode: tune + retrain only, no feature dropping
python scripts/optimize_loop.py --local --drop-threshold 0

# SHAP + backtest only, no retrain
python scripts/optimize_loop.py --local --analyze-only
```
By default, `optimize_loop.py` drops features with SHAP importance below 0.003. Always pass `--drop-threshold 0` unless you intentionally want to shrink the feature set. Protected features (`is_news_near`, session flags, etc.) are never dropped, regardless of threshold.
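The pruning rule can be sketched as follows. Only the 0.003 default and the idea of protected features come from the loop itself; the `PROTECTED` names, function name, and data shape are illustrative assumptions:

```python
# Sketch of the SHAP-based pruning rule. The names in PROTECTED are
# illustrative guesses at the project's protected-feature list.
PROTECTED = {"is_news_near", "session_asia", "session_london", "session_ny"}

def select_features(shap_importance, drop_threshold=0.003):
    """Keep features that are protected or meet the SHAP threshold.

    A threshold of 0 keeps everything (the documented safe mode).
    """
    return [
        name for name, value in shap_importance.items()
        if name in PROTECTED or value >= drop_threshold
    ]

importance = {"is_news_near": 0.0002, "atr_14": 0.012, "hour_sin": 0.001}
print(select_features(importance))                    # ['is_news_near', 'atr_14']
print(select_features(importance, drop_threshold=0))  # all three survive
```

Note that a protected feature survives even when its importance is far below the threshold, which is why `is_news_near` stays in the first call.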
## Via Claude agent

```bash
TASK="Run full optimization loop: SHAP analysis, tune hyperparams, retrain, validate OOS"
./scripts/run_agent.sh novosky-optimizer "$TASK"
```

This streams live output and handles the local GPU → CPU fallback automatically. See Agents for details.
## What the loop does

- SHAP analysis — runs `python train_ml_model.py --shap-only` and identifies low-importance features
- Feature pruning — optionally drops features below the SHAP threshold (skip with `--drop-threshold 0`)
- Optuna tuning — runs 50 trials each for the signal and position model hyperparameters
- Retrain — full train with `--no-warmstart` (required after tuning or any change in feature count)
- OOS validation — runs `backtest_config.py --oos-only` and compares Score against the previous snapshot
- Promote or revert — keeps the new models if Score improved, reverts otherwise
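The final step can be sketched as a simple snapshot swap. This is a minimal illustration assuming models live in one directory with the previous snapshot kept alongside; the paths and signature are assumptions, not the project's actual API:

```python
import shutil
from pathlib import Path

# Hypothetical promote-or-revert helper. The directory layout is assumed;
# only the "keep if Score improved, revert otherwise" rule comes from the docs.
def promote_or_revert(new_score: float, prev_score: float,
                      models_dir: Path, backup_dir: Path) -> bool:
    """Keep the new models if OOS Score improved, else restore the backup."""
    if new_score > prev_score:
        return True                       # promote: leave new models in place
    shutil.rmtree(models_dir)             # revert: wipe the failed candidate
    shutil.copytree(backup_dir, models_dir)
    return False
```

Reverting by restoring the whole snapshot (rather than individual files) keeps the signal and position models consistent with each other.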
## Manual optimization workflow

```bash
# 1. Run SHAP to identify what matters
python train_ml_model.py --shap-only

# 2. Tune signal model hyperparameters
python ml/tune/hyperparams.py

# 3. Tune position model hyperparameters
python ml/tune/position.py

# 4. Retrain with new params
python train_ml_model.py --ensemble --position --no-warmstart --shap

# 5. Validate OOS
python backtest_config.py \
  --balance 500 --no-swap --leverage 500 \
  --spread 16.95 --oos-only --no-chart

# 6. Push if improved
python ml/r2_hub.py --push
```
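Steps 1-5 above can be chained with a small driver that stops on the first failure. This is a hypothetical convenience wrapper, not part of the repo; the push step is deliberately left out because it should only run after you have checked the OOS Score:

```python
import subprocess
import sys

# Hypothetical driver mirroring the documented manual workflow (steps 1-5).
STEPS = [
    ["python", "train_ml_model.py", "--shap-only"],
    ["python", "ml/tune/hyperparams.py"],
    ["python", "ml/tune/position.py"],
    ["python", "train_ml_model.py", "--ensemble", "--position",
     "--no-warmstart", "--shap"],
    ["python", "backtest_config.py", "--balance", "500", "--no-swap",
     "--leverage", "500", "--spread", "16.95", "--oos-only", "--no-chart"],
]

def run_all(steps=STEPS):
    """Run each step in order; abort the whole workflow on the first failure."""
    for cmd in steps:
        print("->", " ".join(cmd))
        if subprocess.run(cmd).returncode != 0:
            sys.exit(f"step failed: {' '.join(cmd)}")
```

Aborting early matters here: retraining with stale tuned params, or validating a model that failed to train, would produce a misleading Score.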
## Parameter sweeps (no retrain)

Use the sweep scripts to find optimal config values without retraining. Each sweep runs many OOS backtests across different parameter combinations:

```bash
# Position model threshold sweep
python scripts/sweep.py --target pos --full

# Signal confidence + prob_diff sweep
python scripts/sweep.py --target signal --mode confidence

# TP/SL multiplier sweep
python scripts/sweep.py --target signal --mode sltp

# All signal params (~68 configs)
python scripts/sweep.py --target signal --mode full
```

Results are saved to `results/sweep_*.csv`, ranked by Score = WR × PF / √MaxDD.
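The ranking metric can be sketched in a few lines. Only the formula comes from the docs; the column names and example numbers are illustrative:

```python
import math

# Score = WR x PF / sqrt(MaxDD): rewards high win rate and profit factor,
# penalizes drawdown sub-linearly.
def score(wr: float, pf: float, max_dd: float) -> float:
    return wr * pf / math.sqrt(max_dd)

# Illustrative sweep rows; real ones come from results/sweep_*.csv.
rows = [
    {"config": "A", "wr": 55.0, "pf": 1.8, "max_dd": 9.0},   # score 33.0
    {"config": "B", "wr": 60.0, "pf": 1.4, "max_dd": 16.0},  # score 21.0
]
ranked = sorted(rows, key=lambda r: score(r["wr"], r["pf"], r["max_dd"]),
                reverse=True)
print([r["config"] for r in ranked])  # ['A', 'B']
```

Because MaxDD enters under a square root, halving drawdown improves Score by only ~41%, so a config must cut drawdown substantially to beat one with a better win rate and profit factor.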
## When to optimize

| Trigger | Action |
|---|---|
| WR drops >5pp over 2 weeks | Run full optimization loop |
| New data available (>2 weeks since last retrain) | `--refresh` retrain |
| Score < 10 on latest OOS | Full loop with `--no-warmstart` |
| Config change (SL/TP multipliers, confidence threshold) | Parameter sweep first, then retrain if needed |
| New feature idea | See Three-File Rule |
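The numeric triggers in the table can be expressed as a small decision helper. The thresholds come from the table; the function, its priority order when several triggers fire at once, and the return strings are illustrative assumptions:

```python
# Hypothetical helper mapping observed metrics to the table's recommended
# action. When multiple triggers fire, the most severe one wins (assumed order).
def recommend_action(wr_drop_pp: float, days_since_retrain: int,
                     latest_oos_score: float) -> str:
    if latest_oos_score < 10:
        return "full loop with --no-warmstart"
    if wr_drop_pp > 5:
        return "run full optimization loop"
    if days_since_retrain > 14:
        return "--refresh retrain"
    return "no action needed"
```

Config changes and new feature ideas are deliberately left out: those are decisions, not measurements, and follow the sweep-first and Three-File Rule paths from the table.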