## Overview

`scripts/weekly_optimize.py` runs every Sunday at 2 am UTC via cron. It takes approximately 2–3 hours on a 4-core VM.
## The 11 phases
### Pre-flight and baseline
Loads current models, runs a quick backtest on recent OOS data, and records the baseline score. This score is the bar that the new run must beat.
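A minimal sketch of what this phase amounts to; every function name, path, and the scoring rule below are illustrative stand-ins, not the project's actual API.

```python
# Hypothetical sketch of the pre-flight phase: backtest the current models
# on recent OOS data and record the score the new run must beat.

def quick_backtest(returns):
    """Stand-in score: cumulative return over the recent OOS segment."""
    score = 1.0
    for r in returns:
        score *= 1.0 + r
    return score - 1.0

def record_baseline(returns):
    """Record the bar that this week's candidate run must beat."""
    return {"baseline_score": quick_backtest(returns)}

recent_oos_returns = [0.01, -0.005, 0.02]   # placeholder per-trade returns
baseline = record_baseline(recent_oos_returns)
```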
### MT5 connectivity check and data refresh
Verifies the MT5 API is reachable, then fetches the latest BTCUSD M15 candles to extend the training dataset.
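A sketch of the refresh step using the official MetaTrader5 Python package. The symbol default, candle count, and error handling are assumptions; the import is guarded so the sketch degrades gracefully where the package (Windows-only) is unavailable.

```python
try:
    import MetaTrader5 as mt5  # official MT5 Python API; Windows-only
except ImportError:
    mt5 = None

def refresh_m15_candles(symbol="BTCUSD", count=5000):
    """Fetch the most recent M15 candles to extend the training dataset.
    Returns None when the MetaTrader5 package is not available."""
    if mt5 is None:
        return None
    if not mt5.initialize():
        raise RuntimeError(f"MT5 init failed: {mt5.last_error()}")
    try:
        # Positions count back from the newest bar (position 0).
        return mt5.copy_rates_from_pos(symbol, mt5.TIMEFRAME_M15, 0, count)
    finally:
        mt5.shutdown()
```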
### SHAP analysis
`ml/shap_analysis.py` computes SHAP feature importances across all three signal models. The output shows which features are contributing and flags anomalies. SHAP results do not automatically drop features; protected session/news features are never dropped regardless of their SHAP values.

### Optuna hyperparameter tuning
`ml/tune/hyperparams.py` runs an Optuna study to find the best hyperparameters for the signal ensemble. The objective function is a custom metric combining OOS precision, recall, and drawdown. A typical search runs 50–100 trials; results are saved to `results/optuna_study.pkl`.

### Retrain – Signal model
Trains RF, XGB, and LGB signal classifiers with the tuned hyperparameters, then runs `calibrate_models()` to produce calibrated .pkl files.

### Retrain – Position model
Trains the position model with hyperparameters from `ml/tune/position.py`. Requires the signal model to be saved first.

### Retrain – SL/TP model
Trains the LightGBM SL/TP regressors. Requires signal and position models to be saved.
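The retrain phases form a hard dependency chain: SL/TP needs the saved signal and position models. A small sketch of how such a guard might look; the artifact paths are hypothetical, not the project's actual filenames.

```python
from pathlib import Path

# Hypothetical artifact paths; the real project's filenames may differ.
SLTP_PREREQS = [
    "models/signal_rf.pkl",
    "models/signal_xgb.pkl",
    "models/signal_lgb.pkl",
    "models/position.pkl",
]

def check_prerequisites(paths):
    """Fail fast if an earlier phase has not saved its models yet."""
    missing = [p for p in paths if not Path(p).exists()]
    if missing:
        raise FileNotFoundError(f"run earlier phases first, missing: {missing}")
```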
### Retrain – Risk model
Trains the LightGBM risk multiplier model. Requires SL/TP models because training runs a full backtest to generate account-state labels.
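The account-state labels come from replaying a full backtest. A minimal sketch of deriving per-bar account state from an equity curve; the feature set here is illustrative, not the project's actual schema.

```python
def account_state_rows(equity_curve):
    """Per-bar account state derived from a backtest equity curve.
    Feature names are hypothetical stand-ins."""
    peak = float("-inf")
    rows = []
    for equity in equity_curve:
        peak = max(peak, equity)
        rows.append({
            "equity": equity,
            "drawdown": (peak - equity) / peak,  # fraction below the running peak
        })
    return rows
```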
### Config sweep (scoped to risk profile)
`scripts/sweep.py` sweeps a grid of config parameters: confidence_threshold, risk_percent, sl_multiplier, tp_multiplier, and session filters. The sweep is scoped to your active risk profile – it only tests parameter combinations within the profile's bounds – and it runs on the first 70% of the OOS window only.

### Full OOS evaluation
A clean backtest runs on the full OOS window including the 30% holdout that the sweep never saw. This is the definitive score.
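The 70/30 arithmetic behind the sweep window and this evaluation amounts to a simple index split; a sketch, with the bar container and names as assumptions:

```python
def split_oos(bars, sweep_frac=0.70):
    """Split the OOS window: the config sweep sees only the first 70%,
    while the final evaluation covers the full window, including the
    30% holdout the sweep never saw."""
    # round() avoids float-truncation surprises at the boundary.
    cut = round(len(bars) * sweep_frac)
    return bars[:cut], bars[cut:]
```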
### Commit or rollback
- If `new_score >= baseline * 1.02`: push the models to HF Hub, commit config.json, and send a Telegram success report.
- Otherwise: restore the models and config from backup and send a Telegram failure report. The bot continues with the previous version.

## Running the pipeline
### Manual retrain path

To retrain outside of the weekly pipeline, run the training scripts individually in the dependency order above: signal, then position, then SL/TP, then risk.

## Optuna tuning
Each model has its own Optuna study:

| Study | File | Trials | Objective |
|---|---|---|---|
| Signal hyperparams | `ml/tune/hyperparams.py` | 50–100 | OOS F1 × (1 – max_drawdown) |
| Position hyperparams | `ml/tune/position.py` | 30–50 | OOS accuracy × exit precision |
| Config sweep | `scripts/sweep.py` | Grid search | Score formula |
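A sketch of how the signal study might compose its objective. The search space and the `train_and_eval` callback are assumptions, not the project's actual code; the import is guarded so the objective arithmetic is checkable without Optuna installed.

```python
try:
    import optuna
except ImportError:
    optuna = None

def signal_objective(oos_f1, max_drawdown):
    """The table's signal objective: OOS F1 x (1 - max_drawdown)."""
    return oos_f1 * (1.0 - max_drawdown)

def run_signal_study(train_and_eval, n_trials=50):
    """train_and_eval(params) -> (oos_f1, max_drawdown); hypothetical hook."""
    def objective(trial):
        params = {
            # Illustrative search space, not the project's actual one.
            "n_estimators": trial.suggest_int("n_estimators", 100, 1000),
            "max_depth": trial.suggest_int("max_depth", 3, 12),
            "learning_rate": trial.suggest_float("learning_rate", 1e-3, 0.3, log=True),
        }
        return signal_objective(*train_and_eval(params))
    study = optuna.create_study(direction="maximize")
    study.optimize(objective, n_trials=n_trials)
    return study
```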
## Backtesting
The backtester (`backtest/run.py`) is config-faithful and runs bar by bar: it replicates the live trading loop exactly – same feature engineering, same model inference, same risk sizing, same circuit breakers.
`--oos-only` restricts the run to the held-out OOS segment. Always prefer this for realistic validation; generic lookback backtests can be contaminated by in-sample data.
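To make "config-faithful bar-by-bar" concrete, here is a minimal sketch of the loop's shape. Every name, the sizing rule, and the circuit-breaker threshold are assumptions, not the project's actual implementation.

```python
def run_backtest(bars, predict, size_position, max_daily_loss=0.05):
    """Bar-by-bar replay: features -> inference -> sizing -> circuit breaker."""
    start_equity = equity = 1.0
    for bar in bars:
        # Circuit breaker: stop trading once the loss limit is hit,
        # just as the live loop would.
        if (start_equity - equity) / start_equity >= max_daily_loss:
            break
        signal = predict(bar)               # model inference on this bar
        if signal == 0:
            continue                        # no trade this bar
        risk = size_position(equity)        # fraction of equity at risk
        equity *= 1.0 + signal * risk * bar["ret"]
    return equity
```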