scripts/weekly_optimize.py is the hands-off weekly maintenance script for NOVOSKY. It replaces the manual optimization workflow entirely. Run it every Sunday and it handles everything: data refresh, retraining, parameter sweeping, backtest validation, model push, git commit, and Telegram notification.
Developer/operator lane only. Regular users should run onboarding, select profile 1-5, pull approved model revisions from R2, and run trading.
Quick start
# Full pipeline (default — runs all phases 0–13)
python scripts/weekly_optimize.py
# Dry run — print the plan, make no changes
python scripts/weekly_optimize.py --dry-run
# Skip retrain, only sweep + backtest
python scripts/weekly_optimize.py --skip-retrain
# Resume from a specific phase after a crash
python scripts/weekly_optimize.py --from-phase 7
Cron setup
Wire this on your trading VM to run every Sunday. Note that cron fires in the server's local timezone; the example below runs at 02:00 server time, so adjust it to match your config.json timezone:
Add:
0 2 * * 0 cd /path/to/NOVOSKY && .venv/bin/python scripts/weekly_optimize.py >> logs/weekly.log 2>&1
Logs accumulate in logs/weekly.log. Rotate with logrotate if needed.
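A minimal logrotate policy for this log might look like the fragment below (the install path is the same placeholder used in the cron example; the retention values are illustrative, not a project requirement):

```
/path/to/NOVOSKY/logs/weekly.log {
    weekly        # rotate once per week, matching the Sunday cron cadence
    rotate 8      # keep roughly two months of history
    compress
    missingok     # don't error if a run was skipped
    notifempty
}
```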
Pipeline phases
| Phase | Name | What it does |
|---|---|---|
| 0 | Pre-flight | Feature count integrity check, baseline metrics snapshot |
| 1 | Refresh | Verify MT5 API reachable; fetch fresh data if --refresh |
| 2 | SHAP | Feature importance analysis — diagnostic only, no dropping |
| 3 | Tune | Optuna signal + position model hyperparameter tuning (local) |
| 4 | Retrain | RF + XGB + LGB signal + position models, plus the LGB risk multiplier model |
| 5 | Sweep | Confidence + SL/TP + risk parameter sweep (first 70% of OOS window only) |
| 6 | Apply | Write best params to config.json + ml_config.json |
| 7 | Evaluate | True OOS backtest with new models + best config (full OOS window) |
| 8 | Decide | Keep new models if Score improved >= 2%, rollback otherwise |
| 9 | Push | Push models to Cloudflare R2 (date-tagged) |
| 10 | Update docs | Rewrite performance metrics in strategy_params.json |
| 11 | Commit | git commit all changed files |
| 12 | Dry run | python trading.py --dry smoke test |
| 13 | Notify | Send Telegram final report |
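The phase table above maps naturally onto a dispatch loop that writes a checkpoint after every phase, which is what makes --from-phase resume possible. A minimal sketch, assuming a registry of (number, name, callable) entries — the two phases shown and their callables are illustrative, not the script's actual internals:

```python
import json
from pathlib import Path

# Hypothetical phase registry; the real script registers all phases 0-13.
PHASES = [
    (0, "preflight", lambda ctx: {**ctx, "preflight": "ok"}),
    (1, "refresh",   lambda ctx: {**ctx, "refresh": "ok"}),
]

def run_pipeline(checkpoint: Path, from_phase: int = 0) -> dict:
    """Run phases >= from_phase, writing a checkpoint after each one."""
    ctx: dict = {}
    for num, name, fn in PHASES:
        if num < from_phase:
            continue  # --from-phase N: skip phases that already completed
        ctx = fn(ctx)
        # Persist progress so a crash can resume from the next phase.
        checkpoint.write_text(json.dumps({"last_phase": num, "name": name}))
    return ctx
```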
CLI flags
| Flag | Default | Description |
|---|---|---|
| --dry-run | false | Print plan, make no changes |
| --skip-retrain | false | Skip phases 1–4 (sweep + backtest only) |
| --skip-sweep | false | Skip phase 5 (retrain + backtest only) |
| --from-phase N | 0 | Resume from phase N (uses checkpoint file) |
| --no-commit | false | Skip git commit |
| --no-push | false | Skip R2 push |
| --no-notify | false | Skip Telegram notification |
| --trials N | 50 | Optuna trials per model |
| --balance N | 500 | Backtest starting balance ($) |
| --profile P | — | Risk profile: 1–5 or name (e.g. balanced). Omit for interactive questionnaire. |
Config authority and strict sync
weekly_optimize.py uses root configs as canonical authority:
config.json
ml_config.json
Before pre-flight, the script validates that any derived or mirrored configs match these canonical root files.
To hard-fail on any drift before the run starts, enable strict mode:
NOVOSKY_STRICT_CONFIG_SYNC=1 python scripts/weekly_optimize.py
To validate drift separately (for CI):
python scripts/config_sync.py --check
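A drift check of this kind can be as simple as a key-by-key diff between the canonical root config and a mirrored copy. A hedged sketch — check_drift is a hypothetical helper for illustration, not the actual config_sync.py API:

```python
import json
from pathlib import Path

def check_drift(canonical: Path, mirror: Path) -> list[str]:
    """Return top-level keys whose values differ between two JSON configs."""
    a = json.loads(canonical.read_text())
    b = json.loads(mirror.read_text())
    # Union of keys catches values that were added or removed, not just changed.
    return sorted(k for k in a.keys() | b.keys() if a.get(k) != b.get(k))
```

In strict mode the caller would exit nonzero when the returned list is non-empty, which is how a CI gate or NOVOSKY_STRICT_CONFIG_SYNC=1 would hard-fail the run.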
Risk profiles
Every optimize run is scoped to a risk profile. The profile controls which config values the Phase 5 sweep searches, the weekly drawdown pause threshold, and — most importantly — the hard bot halt that prevents margin calls.
Profile selection
When you run without --profile, the script asks 6 questions and recommends a profile. You can accept it, or override with a number.
# Interactive questionnaire (recommended for first run)
python scripts/weekly_optimize.py
# Skip questionnaire — use profile directly
python scripts/weekly_optimize.py --profile balanced
python scripts/weekly_optimize.py --profile 3 # same as above
# Cron mode: no TTY, loads saved profile from optimize_best.json
# Falls back to Balanced (3) if no saved profile exists
The five profiles
| # | Name | Risk/trade | Confidence | Max weekly DD | Hard halt |
|---|---|---|---|---|---|
| 1 | Steady Income | 0.5–1.0% | 0.65–0.70 | 8% | 20% |
| 2 | Conservative | 1.0–1.5% | 0.62–0.68 | 12% | 30% |
| 3 | Balanced | 1.5–2.0% | 0.60–0.65 | 20% | 45% |
| 4 | Growth | 2.0–3.0% | 0.58–0.62 | 25% | 55% |
| 5 | Aggressive | 3.0–4.0% | 0.55–0.60 | 30% | 65% |
All profiles are designed to stay clear of margin calls. The hard halt stops the bot before equity reaches the broker’s margin call threshold. At VT Markets with 1:500 leverage, a 0.01 BTC lot requires only about $2 of margin — even Profile 5 halts at 65% drawdown ($175 of equity left on a $500 account), far above that floor.
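The halt-floor arithmetic is straightforward; this illustrative helper shows where the $175 figure for Profile 5 on a $500 account comes from:

```python
def halt_floor(start_balance: float, hard_halt_pct: float) -> float:
    """Equity remaining at the moment the hard halt fires."""
    return start_balance * (100 - hard_halt_pct) / 100

# Profile 5 halts at 65% drawdown on a $500 account:
print(halt_floor(500, 65))  # -> 175.0, far above the ~$2 margin of a 0.01 lot
```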
What the hard halt does
max_total_drawdown_pct in config.json is a permanent kill switch:
- Live bot (trading.py): if (starting_balance - equity) / starting_balance × 100 >= limit, the bot logs a critical message, sends a Telegram alert, and calls sys.exit(99). It does not restart automatically.
- Backtest (backtest_config.py): the simulation stops early at the bar where equity crosses the halt threshold. Results reflect the truncated run.
Unlike the weekly drawdown pause (which resets every Monday), the hard halt requires a manual restart after you investigate what went wrong.
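The halt condition itself reduces to a one-line comparison. A sketch of the check as described above (the function name is hypothetical):

```python
def hard_halt_triggered(starting_balance: float, equity: float,
                        max_total_drawdown_pct: float) -> bool:
    """Permanent kill switch: true once total drawdown reaches the limit."""
    drawdown_pct = (starting_balance - equity) / starting_balance * 100
    return drawdown_pct >= max_total_drawdown_pct
```

In the live bot this would be evaluated on every equity update; when it returns true, the process exits with code 99 and stays down until a manual restart.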
What the sweep covers per profile
Each profile confines the Phase 5 sweep to its own parameter ranges. The sweep does not test configs outside the profile bounds. This means:
- Profile 1 never tests 3% risk configs.
- Profile 5 never tests 0.65+ confidence configs.
The best config found within the profile’s ranges is written to config.json. The risk_profile block in config.json records which profile was active and the starting balance used to compute drawdown limits.
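Constraining the sweep to a profile’s ranges amounts to filtering the parameter grid before any backtests run. An illustrative sketch using the risk/confidence bounds from the table above — the BOUNDS dict and function are hypothetical, showing only profiles 1 and 5:

```python
from itertools import product

# Illustrative per-profile bounds (risk % per trade, confidence threshold).
BOUNDS = {
    1: {"risk": (0.5, 1.0), "conf": (0.65, 0.70)},
    5: {"risk": (3.0, 4.0), "conf": (0.55, 0.60)},
}

def profile_grid(profile: int, risks, confs):
    """Keep only sweep combinations inside the active profile's ranges."""
    b = BOUNDS[profile]
    return [(r, c) for r, c in product(risks, confs)
            if b["risk"][0] <= r <= b["risk"][1]
            and b["conf"][0] <= c <= b["conf"][1]]
```

This is why Profile 1 never tests a 3% risk config and Profile 5 never tests a 0.65+ confidence config: those points are filtered out before the sweep starts.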
Anti-overfitting OOS split
The config sweep (Phase 5) tests ~68 parameter combinations to find the best confidence threshold, SL/TP multipliers, and risk settings. Without a guard, the sweep picks the config that scored best on the OOS window — and Phase 7 then evaluates on that same window. That measures how well the config fits to OOS data, not how well it generalizes.
To fix this, the pipeline splits the OOS window 70/30:
Training data ─────────────────────────────────┤
OOS window ├─── 70% ───┬─── 30% ───┤
sweep uses holdout
this only (never seen
by sweep)
- Phase 5 sweep uses only the first 70% of the OOS window.
- Phase 7 final evaluation uses the full OOS window, including the 30% holdout.
If the winning config from Phase 5 also performs well on the holdout in Phase 7, the improvement is genuine. If it was curve-fitting to OOS data, Phase 7 will expose it and Phase 8 will roll back.
The split is automatic. If the OOS window is shorter than 90 days, the split is skipped and the full window is used for both sweep and evaluation (a warning is logged).
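The 70/30 boundary and the 90-day minimum can be computed in a few lines. A hedged sketch — oos_split is an illustrative helper, not the script’s actual function:

```python
from datetime import date, timedelta

def oos_split(oos_start: date, oos_end: date, min_days: int = 90):
    """Return the sweep/holdout boundary date, or None if the window is too short."""
    span = (oos_end - oos_start).days
    if span < min_days:
        return None  # window too short: full OOS used for both sweep and eval
    # Sweep sees [oos_start, boundary); holdout is [boundary, oos_end].
    return oos_start + timedelta(days=round(span * 0.70))
```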
You can use --oos-end manually with backtest_config.py to cap any backtest at a specific date:
python backtest_config.py \
--balance 500 --no-swap --leverage 500 \
--spread 16.95 --oos-only --no-chart \
--oos-end 2026-03-01
Improvement gate
Phase 8 keeps new models only if:
new_score >= old_score * 1.02
If the new Score doesn’t improve by at least 2%, all models and config changes are rolled back to the pre-run snapshot. The Telegram notification always states clearly whether the run improved or reverted.
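The gate reduces to a single comparison. An illustrative sketch (keep_new_models is a hypothetical name, not the script’s actual function):

```python
def keep_new_models(old_score: float, new_score: float,
                    min_improvement: float = 0.02) -> bool:
    """Phase 8 gate: keep new models only if Score improved by >= 2%."""
    return new_score >= old_score * (1.0 + min_improvement)
```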
Checkpoint and resume
If the pipeline crashes mid-run, it writes a checkpoint file at logs/weekly_optimize_checkpoint.json. Resume from where it left off:
python scripts/weekly_optimize.py --from-phase 7
The checkpoint stores the pre-run snapshot path, baseline metrics, and the best sweep result found so far.
Output files
| File | Contents |
|---|---|
| logs/weekly.log | Full run log (append mode) |
| logs/weekly_optimize_checkpoint.json | Phase checkpoint for crash recovery |
| models/optimize_best.json | Best Score ever achieved + config |
| strategy_params.json | Updated performance metrics after each successful run |
Comparison with optimize_loop.py
| Feature | weekly_optimize.py | optimize_loop.py |
|---|---|---|
| Designed for | Autonomous weekly cron | Manual / agent-driven runs |
| Training backend | Local only | Local only |
| Parameter sweep | Built-in (phases 5–6) | Separate scripts/sweep.py call |
| Git commit | Automatic | Manual |
| R2 push | Automatic | Manual |
| Telegram | Automatic | Manual |
| Crash recovery | Checkpoint + --from-phase | Manual restart |
Use weekly_optimize.py for the scheduled Sunday cron on developer infrastructure. Use optimize_loop.py (via the Claude agent) for on-demand multi-iteration runs. After accepted runs, publish the approved revision(s) and communicate pull tags to user environments.
Monitoring a running pipeline
# Tail the log live
tail -f logs/weekly.log
# Watch model files update (Phase 4 complete when pkl timestamps change)
watch -n 30 'ls -lh models/*.pkl | awk "{print \$5, \$6, \$7, \$9}"'
# Check current phase from checkpoint
python3 -c "import json; c=json.load(open('logs/weekly_optimize_checkpoint.json')); print('Last phase:', c.get('last_phase'))"