For some inexplicable reason, folks seem to react strongly to Ralph Vince’s “Leverage Space Trading Model”. It generally seems to me that those who slag it off haven’t actually studied Vince’s proposals. Let me answer the main criticisms:
- “Kelly bet-sizing results in unacceptable drawdowns”. You will get no disagreement from Vince on this. The only link between LSTM and Kelly is the notion that an optimal bet size exists for any given gambling situation; the analytical framework is completely different.
- “Maximizing returns results in unacceptable drawdowns”. Vince would argue that this depends on the trader. In any case, Vince proposes a constrained optimization, not a simplistic maximization of returns: the analyst is free to target whatever drawdown probability she considers acceptable. Indeed, one option within the framework is to maximize the probability of reaching a target profit rather than maximizing absolute profit itself (the LSPM sketch further below illustrates this objective).
- “Markets are not normally distributed, the fat tails will get you”. Again, no argument from Vince on this: LSTM does not assume normality. LSTM’s analytical foundation is the joint probability table built from the empirical results of (ideally) actually trading the system(s) or, failing that, from back-testing (a minimal sketch of such a table follows this list).
- “Your largest drawdown is in the future and LSTM cannot predict it”. This is as true for LSTM as it is for any other position-sizing algorithm. Completing the LSTM analysis cannot leave one any worse off than adopting completely arbitrary position-sizing rules.
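To make the joint-probability-table idea concrete, here is a minimal sketch in base R. The P&L vectors and bin edges are invented purely for illustration; note that pairing the two systems’ trades row by row already presumes the neat simultaneous “rounds” I return to below.

```r
# Minimal sketch: an empirical joint probability table for two systems.
# The P&L vectors and bin edges are invented for illustration only.
sysA <- c(-120, 80, 250, -40, 60, -200, 150, 90, -30, 110)
sysB <- c(300, -50, -90, 40, 70, 180, -220, 10, 60, -80)

# Bin each system's outcomes so that joint events become countable.
binsA <- cut(sysA, breaks = c(-Inf, -100, 0, 100, Inf))
binsB <- cut(sysB, breaks = c(-Inf, -100, 0, 100, Inf))

# Relative frequency of each joint outcome: the table the analysis optimizes over.
joint <- table(A = binsA, B = binsB) / length(sysA)
print(joint)
```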
It is clearly nonsense to size bets for a long-term trend-following system the same way as for an intraday or HFT system. At the very least, LSTM gives you some guidance on position-sizing that you wouldn’t otherwise have. Without this kind of analysis, I don’t know how you could provide a rationale for allocating capital to systems run in parallel.
Having said all that in defense of LSTM, I sense a change in the way the analysis is intended to be used. I infer this in part from the change in name from “Leverage Space Trading Model” to “Leverage Space Portfolio Model”. The original analytical framework called for building a joint probability table from the individual trade results in the trading history. This is fraught with difficulty, because the analysis assumes neat “rounds” of play, each beginning with the question “how do I allocate my capital to the next round of betting?” Trading doesn’t work that way: trades overlap, and there are no “rounds”. I believe that LSPM is geared more towards allocating capital to systems than to individual trades. Each “round” then becomes a chosen time period, such as a month or a quarter, and we are asking the more manageable question “how much capital do I allocate to each system for the coming month?”
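To illustrate this monthly framing, here is a minimal sketch using the LSPM package, assuming its interface as I understand it: lsp() takes the per-period results (one column per system) together with the probability of each joint outcome, and optimalf() searches for the growth-optimal f values. All return figures are invented.

```r
library(LSPM)

# Invented monthly returns for two systems; each row is one "round".
monthly <- matrix(c( 0.02, -0.01,  0.04, -0.03, 0.01,  0.05,
                    -0.01,  0.03, -0.02,  0.02, 0.04, -0.05),
                  ncol = 2,
                  dimnames = list(NULL, c("trend", "meanrev")))

# Each observed month is one joint event, weighted equally.
probs <- rep(1 / nrow(monthly), nrow(monthly))
port <- lsp(monthly, probs)

# "How much capital do I allocate to each system for the coming month?"
opt <- optimalf(port)

# The alternative objective mentioned in the list above: maximize the
# probability of reaching a target profit over a horizon. Figures are
# invented, and the exact argument names are my assumption.
alt <- maxProbProfit(port, target = 0.05, horizon = 12)
```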
In my next post, I will lay out some thoughts on how best to use the R package “LSPM”.
Completely agree with your last points (all of them, actually). I have not found a way to use LSPM to derive a position size per trade, and have instead focused on using it to test allocation across a portfolio of strategies (although in theory one could treat each market-system combination as a portfolio component and derive an allocation to each using LSPM).
One thing I have struggled with is using the R package for optimization with drawdown constraints (really the only practically usable option). It never seems to converge, even after allowing 100,000 optimization runs, whereas it converges after about 1,000 cycles without the constraints. I have been meaning to email Josh about this but have not gotten round to it yet.
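For reference, a sketch of the kind of call I mean, assuming optimalf’s interface as I understand it and with port built as in the sketch in your post; the drawdown cap, horizon and constraint value are just example figures:

```r
# Constrained optimization: maximize growth subject to keeping the
# probability of a 20% drawdown over a 12-period horizon below 10%.
# (All figures are examples; my understanding is that extra arguments
# such as DD and horizon are passed through to the constraint function.)
# This is the form of call that never seems to converge for me.
res <- optimalf(port,
                constrFun = probDrawdown,
                constrVal = 0.1,
                DD = 0.2,
                horizon = 12)
```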
Keen to see your next post…
Thanks
Jez