LSPM

Timely Portfolio: LSPM Examples

Timely Portfolio has been doing some interesting work with Ralph Vince’s Leverage Space Model via the LSPM R package. Here’s a short list of his most recent LSPM-related posts:

  1. The Leverage Space Trading Model
  2. Bond Market as a Casino Game Part 1
  3. Bond Market as a Casino Game Part 2
  4. Slightly Different Use of Ralph Vince’s Leverage Space Trading Model
  5. Another Use of LSPM in Tactical Portfolio Allocation

I encourage those of you who are interested in LSPM and/or R to check out his blog.  I personally love how much code he shares!

Leverage Space Indexes Announced

PRESS RELEASE

The Leverage Space Portfolio (LSP) strategy seeks to maximize the probability of equity portfolio profitability by employing a risk-control process focused on capital preservation and drawdown management. Compared to a traditional buy-and-hold portfolio, an LSP-based portfolio aims for more consistent returns with lower risk.

The indexes, scheduled to be launched in the second half of 2011, can serve as the basis of both passive and active investment funds, including exchange-traded funds, mutual funds, and institutional accounts, around the world.

Risk-Opportunity Analysis: Houston

I will be attending Ralph Vince’s risk-opportunity analysis workshop in Houston this weekend.  I’ll be in town Friday-Monday.  Drop me a note if you’re in the area and would like to meet for coffee / drinks.

Margin Constraints with LSPM

When optimizing leverage space portfolios, I frequently run into the issue of one or more f$ values ([Max Loss]/f) being less than the margin requirement of the respective instrument.  For example, assume the required margin for an instrument is $500, f$ is $100, and you have $100,000 in equity.  The optimal position is 1,000 shares ($100,000 / $100), but 1,000 shares would require $500,000 of margin while you only have $100,000 of equity.  What do you do?
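To make the arithmetic concrete, here is a small R sketch of the conflict.  The numbers are the ones from the example above; the proportional scale-down at the end is only one naive workaround, shown for illustration, not a recommendation.

# The margin problem illustrated with the numbers above; the proportional
# scale-down is just one naive workaround
equity   <- 100000    # account equity
f.dollar <- 100       # f$ = [Max Loss]/f for this instrument
margin   <- 500       # required margin per share/contract

units      <- equity / f.dollar    # optimal position: 1,000 shares
margin.req <- units * margin       # margin required: $500,000

# Naive workaround: scale the position down until its margin fits the equity
scale <- min(1, equity / margin.req)   # 0.2 in this example
units * scale                          # 200 shares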

Estimating Probability of Drawdown

I’ve shown several examples of how to use LSPM’s probDrawdown() function as a constraint when optimizing a leverage space portfolio.  Those posts implicitly assume probDrawdown() produces an accurate estimate of the actual probability of drawdown.  This post investigates that accuracy.

Calculation Notes:
To calculate the probability of drawdown, the function traverses all permutations of the events in your lsp object over the given horizon and sums the probability of each permutation that hits the drawdown constraint.  The probability of each permutation is the product of the probability of each event in the permutation.
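To make the description concrete, below is a toy re-implementation of that calculation in base R.  It is only an illustration of the logic, with made-up event returns and probabilities; it is not the LSPM code, which is far more efficient.

# Toy illustration of the calculation described above (not the LSPM internals)
toyProbDrawdown <- function(returns, probs, horizon, DD) {
  # every possible sequence of event indices over the horizon
  seqs <- expand.grid(rep(list(seq_along(returns)), horizon))
  # probability of each sequence = product of its event probabilities
  seq.prob <- apply(seqs, 1, function(s) prod(probs[s]))
  # does a sequence's equity curve ever fall DD or more below its running peak?
  hits <- apply(seqs, 1, function(s) {
    equity <- cumprod(1 + returns[s])
    peak <- cummax(c(1, equity))[-1]
    max(1 - equity/peak) >= DD
  })
  # sum the probabilities of the sequences that hit the drawdown constraint
  sum(seq.prob[hits])
}

# Three made-up events: a 5% loss, a 1% gain, and a 3% gain
toyProbDrawdown(returns=c(-0.05, 0.01, 0.03), probs=c(0.20, 0.50, 0.30),
  horizon=4, DD=0.08)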

LSPM Joint Probability Tables

I’ve received several requests for methods to create joint probability tables for use in LSPM’s portfolio optimization functions.  Rather than continue emailing this example to individuals who ask, I’m posting it here in the hope that they find it via a Google search. ;-)

I’m certain there are more robust ways to estimate this table, but the code below is a start…

# `x` is a matrix of market system returns
# `n` is the number of bins to create for each system
# `FUN` is the function to use to calculate the value for each bin
# `...` are args to be passed to `FUN`

jointProbTable <- function(x, n=3, FUN=median, ...) {

  # Load LSPM; stop if the package is not available
  if(!require(LSPM,quietly=TRUE)) stop("could not load the LSPM package")

  # Function to bin data: assign each observation to one of 'n' equal-width
  # bins, then represent each observation by FUN of the values in its bin
  quantize <- function(x, n, FUN=median, ...) {
    if(is.character(FUN)) FUN <- get(FUN)
    bins <- cut(x, n, labels=FALSE)
    sapply(1:NROW(x), function(i) FUN(x[bins==bins[i]], ...))
  }

  # Allow for different values of 'n' for each system in 'x'
  if(NROW(n)==1) {
    n <- rep(n,NCOL(x))
  } else
  if(NROW(n)!=NCOL(x)) stop("invalid 'n'")

  # Bin data in 'x'
  qd <- sapply(1:NCOL(x), function(i) quantize(x[,i],n=n[i],FUN=FUN,...))

  # Aggregate probabilities
  probs <- rep(1/NROW(x),NROW(x))
  res <- aggregate(probs, by=lapply(1:NCOL(qd), function(i) qd[,i]), sum)

  # Clean up output, return lsp object
  colnames(res) <- c(colnames(x), "probs")
  res <- lsp(res[,1:NCOL(x)], res[,NCOL(res)])

  return(res)
}

# Example
N <- 30
x <- rnorm(N)/100; y <- rnorm(N)/100; z <- rnorm(N)/100
zz <- cbind(x,y,z)

jpt <- jointProbTable(zz,n=c(4,3,3))
jpt
##                     x           y            z
## f         0.100000000  0.10000000  0.100000000
## Max Loss -0.009192644 -0.03090575 -0.006942066
##            probs            x            y            z
##  [1,] 0.06666667 -0.002152201 -0.030905750 -0.006942066
##  [2,] 0.06666667 -0.002152201 -0.006480683 -0.006942066
##  [3,] 0.03333333  0.024304901 -0.006480683 -0.006942066
##  [4,] 0.03333333 -0.009192644  0.001963339 -0.006942066
##  [5,] 0.06666667  0.008308007  0.001963339 -0.006942066
##  [6,] 0.03333333  0.024304901  0.001963339 -0.006942066
##  [7,] 0.03333333 -0.009192644 -0.006480683  0.001678969
##  [8,] 0.03333333  0.008308007 -0.006480683  0.001678969
##  [9,] 0.20000000 -0.009192644  0.001963339  0.001678969
## [10,] 0.06666667 -0.002152201  0.001963339  0.001678969
## [11,] 0.13333333  0.008308007  0.001963339  0.001678969
## [12,] 0.03333333  0.008308007 -0.006480683  0.013314122
## [13,] 0.03333333 -0.009192644  0.001963339  0.013314122
## [14,] 0.10000000 -0.002152201  0.001963339  0.013314122
## [15,] 0.06666667  0.008308007  0.001963339  0.013314122
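The resulting lsp object can be passed straight to LSPM’s probability and optimization functions.  For example (the argument names here are my reading of the package documentation, so verify them against ?probDrawdown):

# Hypothetical follow-up: probability of a 10% drawdown over a 3-period
# horizon, using the joint probability table built above
probDrawdown(jpt, DD=0.1, horizon=3)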

Thoughts on LSPM from R/Finance 2010

I just got back from R/Finance 2010 in Chicago. If you couldn’t make it this year, I strongly encourage you to attend next year. I will post a more comprehensive review of the event in the next couple days, but I wanted to share some of my notes specific to LSPM.

  • How sensitive are optimal-f values to the method used to construct the joint probability table?
  • Is there an optimizer better suited for this problem (e.g. CMA-ES or adaptive differential evolution)?
  • How accurate are the estimates of the probability of drawdown, ruin, profit, etc.?
  • What could be learned from ruin theory (see the actuar package)?

These notes are mostly from many great conversations I had with other attendees, rather than thoughts I had while listening to the presentations. That is not a criticism of the presentations, but an illustration of the quality of the other conference-goers.

Maximum Probability of Profit

To continue with the LSPM examples, this post shows how to optimize a Leverage Space Portfolio for the maximum probability of profit. The data and example are again taken from The Leverage Space Trading Model by Ralph Vince.

These optimizations take a very long time: 100 iterations on a 10-core Amazon EC2 cluster took 21 hours.  Again, the results will not necessarily match the book, both because DEoptim differs from Ralph’s genetic algorithm and because there are multiple paths through leverage space that achieve similar results.
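For reference, a hedged sketch of the calling pattern is below.  It assumes an existing lsp object named port, and the argument order for maxProbProfit() and the probDrawdown() constraint reflects my reading of the package documentation; the numeric values are placeholders rather than the book’s inputs, so check ?maxProbProfit before launching a long optimization.

# Sketch only: placeholder values, not the book's example
library(LSPM)

res <- maxProbProfit(port,
  1e-6,                 # profit target over the horizon
  12,                   # horizon (number of holding periods)
  probDrawdown, 0.1,    # constraint: keep P(drawdown) at or below 10%...
  DD=0.2)               # ...where a drawdown is defined as 20%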

LSPM with snow

My last post provided examples of how to use the LSPM package. Those who experimented with the code have probably found that constrained optimizations with horizon > 6 have long run-times (when calc.max >= horizon).

This post will illustrate how the snow package can speed up the probDrawdown() and probRuin() functions on computers with multiple cores.  Run-times improve nearly linearly with the number of cores; the improvement is not perfectly linear because of the overhead of setting up the cluster and of communication between the nodes.
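A minimal sketch of the idea is below.  It assumes an existing lsp object named port (for example, the jpt object from the joint probability table post above) and that the DD, horizon, and snow argument names match your version of LSPM; see ?probDrawdown and ?probRuin.

# Sketch: run probDrawdown() on a two-node snow cluster
library(LSPM)
library(snow)

cl <- makeSOCKcluster(2)                        # two local worker processes
probDrawdown(port, DD=0.2, horizon=5, snow=cl)  # drawdown probability, in parallel
# probRuin() accepts the cluster through the same snow argument
stopCluster(cl)                                 # shut the workers down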