Workflow

# 1. Create a branch
git checkout -b feature/my-improvement

# 2. Make your changes
#    If touching ML features: follow the Three-File Rule

# 3. Run the compatibility check
python3 -c "
import json
mc = json.load(open('models/model_compat.json'))
ml = json.load(open('ml_config.json'))
assert mc['feature_count'] == len(ml['features'])
print('OK')
"

# 3b. Run config authority drift check
python scripts/config_sync.py --check

# 4. Dry-run to confirm nothing is broken
python trading.py --dry

# 5. If you touched ML features: retrain + OOS backtest
python train_ml_model.py --ensemble --position
python backtest_config.py \
  --balance 500 --no-swap --leverage 500 \
  --spread 16.95 --oos-only --no-chart

# 6. Commit
git add <specific files>
git commit -m "feat: <what changed and why>"
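Step 3's inline check can also live as a small standalone function with clearer failure output. A minimal sketch, assuming the same JSON layout as the one-liner above (`feature_count` in models/model_compat.json, a `features` list in ml_config.json); `check_compat` is an invented name, not an existing script:

```python
# Hypothetical standalone version of workflow step 3.
import json
import sys


def check_compat(compat_path: str = "models/model_compat.json",
                 ml_config_path: str = "ml_config.json") -> bool:
    """Return True if the model manifest and ml_config agree on feature count."""
    with open(compat_path) as f:
        mc = json.load(f)
    with open(ml_config_path) as f:
        ml = json.load(f)
    expected = mc["feature_count"]
    actual = len(ml["features"])
    if expected != actual:
        print(f"DRIFT: model_compat.json expects {expected} features, "
              f"ml_config.json defines {actual}", file=sys.stderr)
        return False
    print("OK")
    return True
```

Wiring the return value to an exit code makes the check usable in CI as well as locally.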

Architecture constraints

These are intentional. Do not propose changing them without a compelling reason.
| Constraint | Why |
| --- | --- |
| No PyTorch / LSTM / deep learning | Hardware: Intel i3, 4 cores, 8 GB RAM. Training is already 2+ hours on CPU. |
| No generic `Exception(...)` | Use typed exceptions from `ml/exceptions.py` for clean error handling. |
| No hardcoded thresholds | Everything reads from `config.json` or `ml_config.json` at runtime. |
| ONNX inference at runtime | 10–50× faster than sklearn/xgb/lgb native predict. |
| Warmstart by default | Incremental training preserves accumulated model knowledge. |
| `model_compat.json` as source of truth | Single manifest for feature compatibility across all components. |
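The typed-exception constraint might look like this in practice. This is a sketch only — the actual class names in `ml/exceptions.py` may differ, and `FeatureCountMismatch` is an assumption for illustration:

```python
# Hypothetical shapes for ml/exceptions.py — the real module's names may differ.
class MLError(Exception):
    """Base class so callers can catch all ML-layer failures at once."""


class FeatureCountMismatch(MLError):
    """Raised when a model's manifest disagrees with ml_config.json."""

    def __init__(self, expected: int, actual: int) -> None:
        super().__init__(f"expected {expected} features, got {actual}")
        self.expected = expected
        self.actual = actual


# Call sites raise the specific type instead of a bare Exception(...):
def validate_feature_count(expected: int, actual: int) -> None:
    if expected != actual:
        raise FeatureCountMismatch(expected, actual)
```

Callers can then handle `FeatureCountMismatch` precisely while letting unrelated errors propagate.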

Code style

  • Python 3.10+ type hints on new functions
  • No docstrings on simple or obvious code
  • No error handling for impossible conditions
  • No backwards-compat shims — just change the code
  • Validate only at system boundaries (user input, API responses)
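The last two rules combined: validate once where untrusted data enters, then let internal code trust its inputs. A sketch under those rules — `parse_api_price` and `position_size` are invented names, not functions in the codebase:

```python
# Boundary: an external API response — validate here, and nowhere deeper.
def parse_api_price(payload: dict) -> float:
    price = payload.get("price")
    if not isinstance(price, (int, float)) or price <= 0:
        raise ValueError(f"bad price in API response: {price!r}")
    return float(price)


# Internal code trusts the validated value — no re-checking "impossible" cases.
def position_size(price: float, balance: float, risk_pct: float) -> float:
    return balance * risk_pct / price
```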

What not to commit

.env                   # secrets
models/*.pkl           # ML model binaries (on R2)
models/*.onnx
models/*_xgb.json
models/*_lgb.txt
models/_snapshot_*/    # optimization snapshots
trade_log.csv          # runtime logs
state.json             # runtime state
logs/                  # runtime logs

Writing safe backtest changes

When modifying backtest/run.py, always verify parity with trading/bot.py:
  1. The backtester must simulate the exact same logic as the live bot
  2. Any filter added to the live bot must also be in the backtester
  3. SL/TP calculation must match exactly
A backtest that passes while the live bot fails (or vice versa) means parity has drifted. See the Three-File Rule for the full change checklist.
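One way to keep parity from drifting silently is a harness that feeds identical bars to both code paths and reports disagreements. A sketch only — in the real repo, `backtest_signal` and `live_signal` are stand-ins for whatever entry points backtest/run.py and trading/bot.py actually expose; here both are stubs so the harness itself runs:

```python
# Hypothetical parity harness. The two signal functions below are stubs;
# in practice they would be imported from backtest/run.py and trading/bot.py.
def backtest_signal(bar: dict, threshold: float) -> bool:
    return bar["close"] > bar["open"] * (1 + threshold)


def live_signal(bar: dict, threshold: float) -> bool:
    return bar["close"] > bar["open"] * (1 + threshold)


def check_parity(bars: list[dict], threshold: float) -> list[int]:
    """Return indices of bars where the two implementations disagree."""
    return [i for i, bar in enumerate(bars)
            if backtest_signal(bar, threshold) != live_signal(bar, threshold)]
```

Run it over a recorded slice of real bars; any non-empty result means a filter or calculation exists in one path but not the other.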