Love this approach. But when you develop the strategy in the first place, don't you have priors regarding why it generates (hopefully positive) returns, and under which conditions those returns occur? Shouldn't those logical insights inform whether (and when) a timing strategy would work?
As a semi-systematic macro investor, I can often look at backtests or hear pitches and deduce which market environments a strategy generates most of its returns in (e.g. early cycle, hiking cycles, ...). To me, it's a critical thought experiment, because if you have the right macro quant tools to test these deductions and they hold up, you can construct hedges (e.g. strategies that perform in different macro environments) or overlays (e.g. only run this strategy early cycle) to 'time' the strategy.
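Schematically, the overlay is just gating returns on a regime label. A minimal sketch (column names and regime labels are hypothetical, not from any specific toolkit):

```python
import pandas as pd

# Toy data: daily strategy returns plus a macro regime tag per date
# (column names and regime labels are hypothetical).
df = pd.DataFrame({
    "ret":    [0.010, -0.020, 0.015, 0.005],
    "regime": ["early_cycle", "hiking", "early_cycle", "late_cycle"],
})

# Overlay: only run the strategy in the regime(s) where you believe it
# earns its returns; stay flat (0.0) everywhere else.
overlay_ret = df["ret"].where(df["regime"] == "early_cycle", 0.0)
print(overlay_ret.sum())  # 0.025 -- only the early-cycle days contribute
```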
If someone can't predict/explain when, not just why, a strategy generates its performance, I have less confidence in its out-of-sample performance.
*don't you have priors regarding why it generates (hopefully positive) returns, and under which conditions those returns occur?*
Yes, you do. You can fill in the parameters with your beliefs. You can also make them non-deterministic. If you think you're a great predictor, great. I am agnostic. Critique the model, not the priors.
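To make "non-deterministic parameters" concrete, here is one illustrative way to do it (the sizing rule and parameter names below are stand-ins I chose for the sketch, not the paper's model): instead of plugging in a point estimate for the Sharpe uplift delta, draw it from a prior and average the resulting sizing.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical prior: you believe the signal's Sharpe uplift (delta) is
# around 0.2 but you're unsure -- encode that as a distribution, not a point.
delta_draws = rng.normal(loc=0.2, scale=0.1, size=100_000)

def size(delta, q=0.55, base_sharpe=0.5):
    """Stand-in sizing rule: exposure proportional to the conditional
    expected Sharpe when the signal fires (q = signal precision)."""
    return q * (base_sharpe + delta) + (1 - q) * base_sharpe

# Integrate the sizing over the prior instead of committing to one delta.
# (This rule is linear in delta, so the average matches the point estimate;
# for a nonlinear rule the two would differ.)
print(size(delta_draws).mean())   # prior-averaged size
print(size(0.2))                  # point-estimate size, for comparison
```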
I'm curious how the model scales for idiosyncratic events like earnings sentiment. In that case we know the event timing, and the Sharpe delta is likely massive. Since the approximation assumes small delta, would the exact solution suggest much more aggressive sizing for these high-conviction moments, or does the signal noise around earnings still wash out the benefits?
Only the linear approximation assumes small delta; the results and the proof in the PDF hold for any delta.
Thanks. After posing the question, I dug in a bit: if you expand Eq 21 for large delta, the volatility ratio asymptotically saturates at q / (1 - q). It ties in nicely with the main point: even with infinite theoretical upside, the optimal sizing remains strictly bounded by signal precision.
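A quick numeric check of that limit, under my assumed setup rather than the exact form of Eq 21: optimal vol proportional to the conditional expected Sharpe, with a signal that identifies the high-Sharpe state with precision q.

```python
def vol_ratio(delta, q, base_sharpe=0.5):
    """Ratio of optimal vol when the signal fires vs. when it doesn't,
    assuming sizing proportional to the conditional expected Sharpe and a
    signal that identifies the high-Sharpe state with precision q."""
    s_on = q * (base_sharpe + delta) + (1 - q) * base_sharpe
    s_off = (1 - q) * (base_sharpe + delta) + q * base_sharpe
    return s_on / s_off

q = 0.6
for delta in (0.1, 1.0, 10.0, 1000.0):
    print(f"delta={delta:>7}: ratio = {vol_ratio(delta, q):.4f}")
print(f"asymptote q/(1-q) = {q / (1 - q):.4f}")  # 1.5000 for q = 0.6
```

The ratio climbs monotonically (about 1.04, 1.22, 1.44, 1.50 for the deltas above) and flattens out at q/(1-q) no matter how large delta gets, which is exactly the "bounded by signal precision" point.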