Prompted by an interesting topic raised by “Sluggo” of the “TradingBlox Trader’s Roundtable”, I took a side trip into the topic of optimizing for multiple objectives. In the context of trading system design, the issue arises from the desire to make as many measures of “goodness” as high as possible at the same time.
Imagine we only care about the Sharpe ratio and the MAR ratio. Is the parameter set that maximizes the MAR ratio better than the set that maximizes the Sharpe ratio? What about a compromise? If I change a parameter set and find that both MAR and Sharpe improve, it is obvious that the new set is better. But if a change improves MAR at the cost of Sharpe, or vice versa, what should I do?
The first step is to identify the set of parameter sets where a trade-off is necessary, the so-called “Pareto optimum” set. Our ultimate solution must come from the Pareto optimum set. Any parameter set outside of the Pareto optimum can ALWAYS be changed in some way so that at least one of our goodness measures (objective functions) improves and none of the others get worse – so you would never settle for such a solution.
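The dominance test underlying this idea is simple enough to sketch in a few lines of Python. The (MAR, Sharpe) pairs below are made-up numbers purely for illustration:

```python
def dominates(a, b):
    """True if point a dominates point b when maximizing every objective:
    a is at least as good on all objectives and strictly better on one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

p = (1.2, 0.8)   # hypothetical (MAR, Sharpe) pair
q = (1.0, 0.8)   # worse MAR, equal Sharpe -> dominated by p
r = (1.5, 0.5)   # better MAR, worse Sharpe -> a genuine trade-off with p

print(dominates(p, q))                    # True
print(dominates(p, r), dominates(r, p))   # False False
```

The pair p-versus-r is the interesting case: neither dominates the other, so both would belong to the trade-off set.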
This post addresses the first steps in finding the Pareto-optimal solution to a multi-objective optimization.
The Problem: Two Objective Functions
Following is an idealized pair of objective functions; let’s say they are the MAR ratio and the Sharpe ratio. We want to optimize for both. We have a single variable, x, to play with, so the x-axis is our decision space. In the real world we don’t know the shapes of the objective functions, but here I have used parabolas to illustrate the process. The chart shows Sharpe and MAR for x = 1 through 75. I marked the maxima for MAR (1.5) and Sharpe (1.0). For future reference, I also displayed the value of MAR (-1.44) at the value of x (48) that maximizes Sharpe (1.0), and vice versa (Sharpe = 0.23 at x = 27, which maximizes MAR at 1.5).
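The post doesn’t give formulas for the two parabolas, but coefficients reverse-engineered from the quoted values reproduce them exactly. A minimal sketch, treating those coefficients as an assumption fitted to the numbers above:

```python
# Parabolic objective functions fitted to the values quoted in the text
# (an assumption: any concave curves through those points would do).

def mar(x):
    """MAR ratio: peaks at 1.5 when x = 27, falls to -1.44 at x = 48."""
    return 1.5 - (2.94 / 441) * (x - 27) ** 2

def sharpe(x):
    """Sharpe ratio: peaks at 1.0 when x = 48, falls to 0.23 at x = 27."""
    return 1.0 - (0.77 / 441) * (x - 48) ** 2

# At each curve's maximum, read off the value of the other curve:
print(mar(27), sharpe(27))   # MAR maximum 1.5, Sharpe there is 0.23
print(mar(48), sharpe(48))   # MAR there is -1.44, Sharpe maximum 1.0
```

The coefficients come from forcing each parabola through the other curve’s maximum: for example, 1.5 - c·(48 - 27)² = -1.44 gives c = 2.94/441.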
Searching The Objective Space
To find the solution to a multi-objective, multi-parameter problem we would use a genetic algorithm. So, to get a flavor for the first step in using a genetic algorithm, I randomly selected 100 values of x in [20, 60] and ‘generated’ the corresponding values of MAR and Sharpe. I plotted those values on the next chart. I titled the chart “Objective Space” because we are plotting the pairs of MAR and Sharpe values arising from a common value of x.
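This sampling step can be sketched as follows, reusing parabolic stand-ins for MAR and Sharpe fitted to the values quoted earlier (an assumption; in practice the objective functions are unknown and each evaluation would be a backtest):

```python
import random

# Assumed parabolic objectives, fitted to the values quoted earlier.
def mar(x):
    return 1.5 - (2.94 / 441) * (x - 27) ** 2

def sharpe(x):
    return 1.0 - (0.77 / 441) * (x - 48) ** 2

random.seed(1)  # fixed seed so the illustration is reproducible
xs = [random.uniform(20, 60) for _ in range(100)]

# Each x maps to one point in objective space: a (MAR, Sharpe) pair.
objective_space = [(mar(x), sharpe(x)) for x in xs]

print(len(objective_space))  # 100
```

Plotting `objective_space` as a scatter of MAR against Sharpe reproduces the kind of cloud shown in the chart.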
Thanks to “Sluggo” at the TradingBlox Trader’s Roundtable for inspiring the annotations
The Pareto Optimum Frontier
The points in red are on the Pareto frontier; the points in blue are not. The red points are said to be non-dominated. That is, for any red point, you cannot find another point that has a greater value for BOTH Sharpe and MAR. This is not true for any of the blue points. Guess which values mark the boundaries of the Pareto frontier. Notice the point just outside the ideal Pareto optimum – this is the nature of a random search; the point IS currently non-dominated, but future generations of our genetic search should ultimately eliminate it.
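Extracting the red points can be done with a brute-force filter: keep a point only if no other sampled point beats it on both objectives. A sketch, assuming the same fitted parabolas as before (the fixed x values 20, 27, 48, and 60 are appended to the random sample so the outcome is predictable):

```python
import random

# Assumed parabolic objectives, fitted to the values quoted earlier.
def mar(x):
    return 1.5 - (2.94 / 441) * (x - 27) ** 2

def sharpe(x):
    return 1.0 - (0.77 / 441) * (x - 48) ** 2

def dominates(a, b):
    """True if a beats-or-ties b on every objective and beats it on one."""
    return all(u >= v for u, v in zip(a, b)) and any(u > v for u, v in zip(a, b))

def pareto_front(points):
    """Keep only non-dominated points; O(n^2), fine for small samples."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

random.seed(1)
xs = [random.uniform(20, 60) for _ in range(100)] + [20.0, 27.0, 48.0, 60.0]
objs = [(mar(x), sharpe(x)) for x in xs]
front = pareto_front(objs)
front_x = [x for x, o in zip(xs, objs) if o in front]

# The two single-objective maxima are always on the frontier;
# the extremes of the range are dominated and drop out.
print(27.0 in front_x, 48.0 in front_x)  # True True
print(20.0 in front_x, 60.0 in front_x)  # False False
```

Coloring the sampled points by membership in `front` reproduces the red/blue split in the chart. Note that a point like the stray one mentioned above can survive this filter: it is non-dominated within the current sample even though it lies off the ideal frontier.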
I went back to the first chart and plotted the random set of x values on the x-axis, coloring the values to correspond to the points on the Objective Space plot.
This has helped our search for ideal values of x by narrowing it down to the red points. In this simple example, they are the points that lie between the maxima of the MAR and Sharpe curves. You can easily see that, for any point in blue, you can always find another solution that improves BOTH MAR and Sharpe – so you would never use one of those points.
As an exercise, imagine that Sharpe had a dip where its maximum now is, so that the curve had two maxima, one on either side of the current maximum – what would happen to our Pareto solution?
Extensions To More Dimensions
Bear in mind that this is a trivial example (the kind I like best because it helps you see what is going on). I can picture well enough what would happen with a 2-dimensional decision space (a plane with the objective functions describing surfaces above it, and the Pareto optimum being one or more curves on the plane). I can even manage a 3-D decision space without the objective functions, where the Pareto optimum would be curves threading their way through a 3-D space. With three objective functions, the objective space would look like a lobed surface with the Pareto optimum “capping” some of the lobes. Beyond that, forget about it.
In the near future I will expand this topic to a two-dimensional decision space (i.e., a parameter set with 2 members). After that, at some point, I hope to talk about how to use an appropriate genetic search technique to build the set of Pareto-optimal parameter sets.