scripts/weekly_optimize.py is the hands-off weekly maintenance script for NOVOSKY. It replaces the manual optimization workflow entirely. Run it every Sunday and it handles everything: data refresh, retraining, parameter sweeping, backtest validation, model push, git commit, and Telegram notification.
Developer/operator lane only. Regular users should instead run onboarding, select a profile (1–5), pull approved model revisions from R2, and run trading.

Quick start

# Full pipeline (default — runs all 14 phases, 0–13)
python scripts/weekly_optimize.py

# Dry run — print the plan, make no changes
python scripts/weekly_optimize.py --dry-run

# Skip retrain, only sweep + backtest
python scripts/weekly_optimize.py --skip-retrain

# Resume from a specific phase after a crash
python scripts/weekly_optimize.py --from-phase 7

Cron setup

Set this up on your trading VM to run every Sunday. The example cron entry below fires at 02:00 (cron uses the VM's local time, so this is 02:00 UTC only if the VM clock is UTC); adjust it to match your config.json timezone:
crontab -e
Add:
0 2 * * 0  cd /path/to/NOVOSKY && .venv/bin/python scripts/weekly_optimize.py >> logs/weekly.log 2>&1
Logs accumulate in logs/weekly.log. Rotate with logrotate if needed.
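A minimal logrotate stanza for this (a sketch — adjust the install path to your setup; copytruncate keeps the cron job's open append handle valid across rotations). Drop it in /etc/logrotate.d/:

```
/path/to/NOVOSKY/logs/weekly.log {
    weekly
    rotate 8
    compress
    missingok
    notifempty
    copytruncate
}
```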

Pipeline phases

| Phase | Name | What it does |
|-------|------|--------------|
| 0 | Pre-flight | Feature count integrity check, baseline metrics snapshot |
| 1 | Refresh | Verify MT5 API reachable; fetch fresh data if --refresh |
| 2 | SHAP | Feature importance analysis — diagnostic only, no dropping |
| 3 | Tune | Optuna signal + position model hyperparameter tuning (local) |
| 4 | Retrain | RF + XGB + LGB signal + position models, plus the LGB risk multiplier model |
| 5 | Sweep | Confidence + SL/TP + risk parameter sweep (first 70% of OOS window only) |
| 6 | Apply | Write best params to config.json + ml_config.json |
| 7 | Evaluate | True OOS backtest with new models + best config (full OOS window) |
| 8 | Decide | Keep new models if Score improved >= 2%, rollback otherwise |
| 9 | Push | Push models to Cloudflare R2 (date-tagged) |
| 10 | Update docs | Rewrite performance metrics in strategy_params.json |
| 11 | Commit | git commit all changed files |
| 12 | Dry run | python trading.py --dry smoke test |
| 13 | Notify | Send Telegram final report |
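The resume flags become easy to reason about if you picture the pipeline as an ordered phase list — a minimal sketch (the phase numbers and names come from the table above; the runner itself is illustrative, not the script's actual code):

```python
# Ordered phase list (numbers and names from the table above).
PHASES = [
    (0, "Pre-flight"), (1, "Refresh"), (2, "SHAP"), (3, "Tune"),
    (4, "Retrain"), (5, "Sweep"), (6, "Apply"), (7, "Evaluate"),
    (8, "Decide"), (9, "Push"), (10, "Update docs"), (11, "Commit"),
    (12, "Dry run"), (13, "Notify"),
]

def run_pipeline(from_phase=0, skip=frozenset()):
    """Return the phase names that would execute: --from-phase drops everything
    below N, and --skip-retrain / --skip-sweep translate to a skip set."""
    return [name for num, name in PHASES
            if num >= from_phase and num not in skip]

# --from-phase 7 resumes at Evaluate; --skip-retrain is equivalent to skip={1,2,3,4}.
```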

CLI flags

| Flag | Default | Description |
|------|---------|-------------|
| --dry-run | false | Print plan, make no changes |
| --skip-retrain | false | Skip phases 1–4 (sweep + backtest only) |
| --skip-sweep | false | Skip phase 5 (retrain + backtest only) |
| --from-phase N | 0 | Resume from phase N (uses checkpoint file) |
| --no-commit | false | Skip git commit |
| --no-push | false | Skip R2 push |
| --no-notify | false | Skip Telegram notification |
| --trials N | 50 | Optuna trials per model |
| --balance N | 500 | Backtest starting balance ($) |
| --profile P | — | Risk profile: 1–5 or name (e.g. balanced). Omit for interactive questionnaire. |

Config authority and strict sync

weekly_optimize.py uses root configs as canonical authority:
  • config.json
  • ml_config.json
Before pre-flight it validates the configs against this canonical authority. To hard-fail on any drift before the run starts, enable strict mode:
NOVOSKY_STRICT_CONFIG_SYNC=1 python scripts/weekly_optimize.py
To validate drift separately (for CI):
python scripts/config_sync.py --check
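A drift check of this kind boils down to a key-by-key diff of two JSON files — a hedged sketch (config_sync.py's real logic and file layout may differ; the function name is illustrative):

```python
import json
from pathlib import Path

def config_drift(canonical: Path, mirror: Path) -> list:
    """Return the top-level keys whose values differ between two JSON configs.
    An empty list means no drift; strict mode would hard-fail on a non-empty one."""
    a = json.loads(canonical.read_text())
    b = json.loads(mirror.read_text())
    return sorted(k for k in set(a) | set(b) if a.get(k) != b.get(k))
```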

Risk profiles

Every optimize run is scoped to a risk profile. The profile controls which config values the Phase 5 sweep searches, the weekly drawdown pause threshold, and — most importantly — the hard bot halt that prevents margin calls.

Profile selection

When you run without --profile, the script asks 6 questions and recommends a profile. You can accept it, or override with a number.
# Interactive questionnaire (recommended for first run)
python scripts/weekly_optimize.py

# Skip questionnaire — use profile directly
python scripts/weekly_optimize.py --profile balanced
python scripts/weekly_optimize.py --profile 3       # same as above

# Cron mode: no TTY, loads saved profile from optimize_best.json
# Falls back to Balanced (3) if no saved profile exists
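The selection order described above, sketched in code (the balanced name comes from the example; the other name slugs and the profile key in optimize_best.json are assumptions):

```python
import json

# Profile numbers from the table below; slugs other than "balanced" are assumptions.
PROFILE_NAMES = {1: "steady_income", 2: "conservative", 3: "balanced",
                 4: "growth", 5: "aggressive"}

def resolve_profile(cli_value, saved_path):
    """Non-interactive resolution: --profile flag > saved profile > Balanced (3)."""
    if cli_value:
        if str(cli_value).isdigit():
            return int(cli_value)                      # --profile 3
        names = {name: num for num, name in PROFILE_NAMES.items()}
        return names[str(cli_value).lower()]           # --profile balanced
    if saved_path.exists():                            # cron mode, no TTY
        return json.loads(saved_path.read_text()).get("profile", 3)
    return 3                                           # nothing saved: Balanced
```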

The five profiles

| # | Name | Risk/trade | Confidence | Max weekly DD | Hard halt |
|---|------|------------|------------|---------------|-----------|
| 1 | Steady Income | 0.5–1.0% | 0.65–0.70 | 8% | 20% |
| 2 | Conservative | 1.0–1.5% | 0.62–0.68 | 12% | 30% |
| 3 | Balanced | 1.5–2.0% | 0.60–0.65 | 20% | 45% |
| 4 | Growth | 2.0–3.0% | 0.58–0.62 | 25% | 55% |
| 5 | Aggressive | 3.0–4.0% | 0.55–0.60 | 30% | 65% |
All profiles are safe from margin call. The hard halt stops the bot before equity reaches the broker’s margin call threshold. At VT Markets with 1:500 leverage, a 0.01 BTC lot requires ~$2 margin — even Profile 5 halts at a 65% loss (leaving ~$175 of equity on a $500 account), far above that floor.
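Encoded as data, the table and the halt arithmetic look roughly like this (values copied from the table above; the field names and helper are illustrative):

```python
# Risk profiles from the table above (field names are illustrative).
PROFILES = {
    1: {"name": "Steady Income", "risk": (0.5, 1.0), "conf": (0.65, 0.70),
        "weekly_dd": 8,  "hard_halt": 20},
    2: {"name": "Conservative",  "risk": (1.0, 1.5), "conf": (0.62, 0.68),
        "weekly_dd": 12, "hard_halt": 30},
    3: {"name": "Balanced",      "risk": (1.5, 2.0), "conf": (0.60, 0.65),
        "weekly_dd": 20, "hard_halt": 45},
    4: {"name": "Growth",        "risk": (2.0, 3.0), "conf": (0.58, 0.62),
        "weekly_dd": 25, "hard_halt": 55},
    5: {"name": "Aggressive",    "risk": (3.0, 4.0), "conf": (0.55, 0.60),
        "weekly_dd": 30, "hard_halt": 65},
}

def equity_floor(starting_balance, profile):
    """Equity remaining when the hard halt fires (e.g. ~$175 for Profile 5 on $500)."""
    return starting_balance * (1 - PROFILES[profile]["hard_halt"] / 100)
```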

What the hard halt does

max_total_drawdown_pct in config.json is a permanent kill switch:
  • Live bot (trading.py): if (starting_balance - equity) / starting_balance × 100 >= limit, the bot logs a critical message, sends a Telegram alert, and calls sys.exit(99). It does not restart automatically.
  • Backtest (backtest_config.py): simulation stops early at the bar where equity crosses the halt threshold. Results reflect the truncated run.
Unlike the weekly drawdown pause (which resets every Monday), the hard halt requires a manual restart after you investigate what went wrong.
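The live-bot condition quoted above reduces to one comparison — a sketch (function names are illustrative; the real trading.py also logs, sends the Telegram alert, and calls sys.exit(99)):

```python
def drawdown_pct(starting_balance, equity):
    """Drawdown as a percentage of the starting balance."""
    return (starting_balance - equity) / starting_balance * 100

def should_hard_halt(starting_balance, equity, max_total_drawdown_pct):
    """True when the permanent kill switch must fire."""
    return drawdown_pct(starting_balance, equity) >= max_total_drawdown_pct

# Profile 5 on a $500 account: the 65% halt fires once equity falls to $175.
assert should_hard_halt(500, 175, 65)
assert not should_hard_halt(500, 176, 65)
```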

What the sweep covers per profile

Each profile confines the Phase 5 sweep to its own parameter ranges. The sweep does not test configs outside the profile bounds. This means:
  • Profile 1 never tests 3% risk configs.
  • Profile 5 never tests 0.65+ confidence configs.
The best config found within the profile’s ranges is written to config.json. The risk_profile block in config.json records which profile was active and the starting balance used to compute drawdown limits.
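Bounding the sweep to a profile amounts to filtering the candidate grid — a sketch (grid values are illustrative; the ranges come from the five-profiles table):

```python
from itertools import product

def profile_grid(risk_values, conf_values, risk_range, conf_range):
    """Keep only the (risk, confidence) combos inside the profile's bounds."""
    return [(r, c) for r, c in product(risk_values, conf_values)
            if risk_range[0] <= r <= risk_range[1]
            and conf_range[0] <= c <= conf_range[1]]

# Profile 1 (risk 0.5-1.0%, confidence 0.65-0.70) never sees a 3% risk config.
grid = profile_grid([0.5, 1.0, 2.0, 3.0], [0.60, 0.65, 0.70],
                    risk_range=(0.5, 1.0), conf_range=(0.65, 0.70))
```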

Anti-overfitting OOS split

The config sweep (Phase 5) tests ~68 parameter combinations to find the best confidence threshold, SL/TP multipliers, and risk settings. Without a guard, the sweep picks the config that scored best on the OOS window — and Phase 7 then evaluates on that same window. That measures how well the config fits to OOS data, not how well it generalizes. To fix this, the pipeline splits the OOS window 70/30:
Training data  ─────────────────────────────────┤
OOS window                                       ├─── 70% ───┬─── 30% ───┤
                                                  sweep uses   holdout
                                                  this only    (never seen
                                                               by sweep)
  • Phase 5 sweep uses only the first 70% of the OOS window.
  • Phase 7 final evaluation uses the full OOS window, including the 30% holdout.
If the winning config from Phase 5 also performs well on the holdout in Phase 7, the improvement is genuine. If it was curve-fitting to OOS data, Phase 7 will expose it and Phase 8 will roll back. The split is automatic. If the OOS window is shorter than 90 days, the split is skipped and the full window is used for both sweep and evaluation (a warning is logged). You can use --oos-end manually with backtest_config.py to cap any backtest at a specific date:
python backtest_config.py \
  --balance 500 --no-swap --leverage 500 \
  --spread 16.95 --oos-only --no-chart \
  --oos-end 2026-03-01
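The split point itself is plain date arithmetic over the OOS window — a sketch (the 70% fraction and 90-day minimum come from the text above; the function is illustrative):

```python
from datetime import date, timedelta

def oos_split(start, end, frac=0.70, min_days=90):
    """End date of the sweep's 70% slice, or None when the window is too short
    to split (in which case sweep and evaluation share the full window)."""
    span = (end - start).days
    if span < min_days:
        return None
    return start + timedelta(days=int(span * frac))

# A 100-day OOS window: the sweep sees only the first 70 days.
```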

Improvement gate

Phase 8 keeps new models only if:
new_score >= old_score * 1.02
If the new Score doesn’t improve by at least 2%, all models and config changes are rolled back to the pre-run snapshot. The Telegram notification always states clearly whether the run improved or reverted.
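The gate is a single comparison with the 2% threshold quoted above — a sketch (function name illustrative):

```python
def keep_new_models(new_score, old_score, min_gain=0.02):
    """Phase 8 decision: keep only if new Score >= old Score * 1.02."""
    return new_score >= old_score * (1 + min_gain)

assert keep_new_models(1.02, 1.00)      # exactly +2%: kept
assert not keep_new_models(1.01, 1.00)  # only +1%: rolled back
```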

Checkpoint and resume

If the pipeline crashes mid-run, it writes a checkpoint file at logs/weekly_optimize_checkpoint.json. Resume from where it left off:
python scripts/weekly_optimize.py --from-phase 7
The checkpoint stores the pre-run snapshot path, baseline metrics, and the best sweep result found so far.
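The round-trip can be sketched like this (the last_phase key matches the monitoring one-liner at the end of this page; the other key names are assumptions):

```python
import json
from pathlib import Path

def save_checkpoint(path, last_phase, **extra):
    """Persist the last completed phase plus any extra state (snapshot path,
    baseline metrics, best sweep result so far)."""
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps({"last_phase": last_phase, **extra}, indent=2))

def resume_phase(path):
    """Phase to resume from: one past the last completed phase, or 0 if fresh."""
    if not path.exists():
        return 0
    return json.loads(path.read_text()).get("last_phase", -1) + 1
```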

Output files

| File | Contents |
|------|----------|
| logs/weekly.log | Full run log (append mode) |
| logs/weekly_optimize_checkpoint.json | Phase checkpoint for crash recovery |
| models/optimize_best.json | Best Score ever achieved + config |
| strategy_params.json | Updated performance metrics after each successful run |

Comparison with optimize_loop.py

| Feature | weekly_optimize.py | optimize_loop.py |
|---------|--------------------|------------------|
| Designed for | Autonomous weekly cron | Manual / agent-driven runs |
| Training backend | Local only | Local only |
| Parameter sweep | Built-in (phases 5–6) | Separate scripts/sweep.py call |
| Git commit | Automatic | Manual |
| R2 push | Automatic | Manual |
| Telegram | Automatic | Manual |
| Crash recovery | Checkpoint + --from-phase | Manual restart |
Use weekly_optimize.py for the scheduled Sunday cron on developer infrastructure. Use optimize_loop.py (via the Claude agent) for on-demand multi-iteration runs. After accepted runs, publish the approved revision(s) and communicate pull tags to user environments.

Monitoring a running pipeline

# Tail the log live
tail -f logs/weekly.log

# Watch model files update (Phase 4 complete when pkl timestamps change)
watch -n 30 'ls -lh models/*.pkl | awk "{print \$5, \$6, \$7, \$9}"'

# Check current phase from checkpoint
python3 -c "import json; c=json.load(open('logs/weekly_optimize_checkpoint.json')); print('Last phase:', c.get('last_phase'))"