Author Topic: Jim Simons' Renaissance Technologies - is value investing not the only way?  (Read 7694 times)

cameronfen

  • Hero Member
  • *****
  • Posts: 670
Especially if you hire the smartest people you can find, take a scientific approach, have great data sets, super-fast market access, an unlimited research budget, excellent risk management, and constantly work on improving your game. I think that is basically what Renaissance is doing.

Don't forget another factor - extreme leverage.  They seem to generate single-digit annual returns across their total invested capital (90%+ of which is borrowed money; only ~10% is their own capital).  So lots and lots of leverage.

So that's it.  Extreme leverage ... and, uh...basket options. 

According to this recent article, RenTech appears to use a scheme that masks short-term trading gains by turning them into long-term capital gains via basket options purchased from their investment banks.  "Rather than owning securities directly and booking gains and losses from trading activity, RenTech would buy a [bespoke] option from a bank tied to the value of a securities portfolio it held".  RenTech would then direct the bank to buy and sell securities in the portfolio and hold the option for a year or more.

https://www.bloomberg.com/news/articles/2019-11-13/irs-decision-is-bad-omen-for-rentech-tax-dispute-worth-billions
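
Just to make the leverage math concrete, here is a minimal back-of-the-envelope sketch in Python. The numbers (7% gross return, 10x leverage, 3% borrowing cost) are purely hypothetical, not RenTech's actual figures; the point is only how a single-digit return on total positions becomes a very large return on equity - in both directions.

Code:
# Hypothetical illustration of leverage amplification; none of these numbers
# are RenTech's actual figures.

def return_on_equity(gross_return, leverage, borrow_rate):
    """Return on the investor's own capital for a levered portfolio.

    gross_return : return on total (levered) positions, e.g. 0.07 for 7%
    leverage     : total positions / equity, e.g. 10.0 for 10x
    borrow_rate  : annual cost of the borrowed portion
    """
    borrowed_fraction = leverage - 1.0
    return gross_return * leverage - borrow_rate * borrowed_fraction

# A 7% gross return at 10x leverage with 3% borrowing costs:
print(f"{return_on_equity(0.07, 10.0, 0.03):.0%}")   # 43% on equity

# The same multiplier cuts the other way in a bad year:
print(f"{return_on_equity(-0.05, 10.0, 0.03):.0%}")  # -77% on equity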

wabuffo

So if you trade forex, which is mostly what Renaissance trades AFAIK, leverage of 40x is relatively normal (though much too high for a firm with so much capital), so 10x leverage isn't that high.  The reason is that forex is the deepest market, so even if you trade tens of millions of dollars you are in no danger of being gapped past your stop loss, not to mention that you won't move the market.


scorpioncapital

  • Lifetime Member
  • Hero Member
  • *****
  • Posts: 2018
    • scorpion capital
How do they not blow up with extreme leverage? Does someone give them an account with no margin calls, or much higher limits? If so, they are lucky. Heck, with no margin calls one can get rich by just waiting to recover.
But if this is the case, they must have huge drawdowns from time to time. I guess leverage and tax havens could have made all of us richer faster. I wonder if being a high-capital-gains investor with no big drawdown potential isn't a huge handicap. Maybe their advantages should be outlawed :)

Spekulatius

  • Hero Member
  • *****
  • Posts: 4771
How do they not blow up with extreme leverage? Does someone give them an account with no margin calls, or much higher limits? If so, they are lucky. Heck, with no margin calls one can get rich by just waiting to recover.
But if this is the case, they must have huge drawdowns from time to time. I guess leverage and tax havens could have made all of us richer faster. I wonder if being a high-capital-gains investor with no big drawdown potential isn't a huge handicap. Maybe their advantages should be outlawed :)

I also was surprised by the high leverage employed (7x+), as mentioned in the book. It looks to me that perhaps where they really shone is risk management, as they have apparently survived for 30 years now without blowing up. A few times, the book mentions that they started to lose money because of bugs in their computerized trading system, and it took them time to find the issue because the code is so complex. That's a real risk, IMO.

FWIW, what these guys do is not value investing. They have no clue about value and the system doesn't care. There is a funny passage in the book where Mercer explains how they trade Chrysler stock, for example, not knowing that Chrysler had been taken out years ago by Daimler.

What they figured out, however, is how the stock market's voting machine is likely to behave in the near term, based on statistical signals. Fascinating stuff.
Life is too short for cheap beer and wine.

wabuffo

  • Sr. Member
  • ****
  • Posts: 412
    • Twitter
I also was surprised by the high leverage employed (7x+), as mentioned in the book.

Anytime I see a track record with exceptionally high returns over many years - I always assume leverage is involved.  Either outright margin or implicit margin via options.  It appears RenTech used both margin and options (including dubious tax "saving" strategies).   

wabuffo

LC

  • Hero Member
  • *****
  • Posts: 4583
Somewhat on topic here:
http://news.mit.edu/2019/model-beats-wall-street-forecasts-business-sales-1219

Quote
Tasked with predicting quarterly earnings of more than 30 companies, the model outperformed the combined estimates of expert Wall Street analysts on 57 percent of predictions. Notably, the analysts had access to any available private or public data and other machine-learning models, while the researchers’ model used a very small dataset of the two data types.

 In a paper published this week in the Proceedings of ACM Sigmetrics Conference, the researchers describe a model for forecasting financials that uses only anonymized weekly credit card transactions and three-month earning reports.

“Alternative data are these weird, proxy signals to help track the underlying financials of a company,” says first author Michael Fleder, a postdoc in the Laboratory for Information and Decision Systems (LIDS). “We asked, ‘Can you combine these noisy signals with quarterly numbers to estimate the true financials of a company at high frequencies?’ Turns out the answer is yes.”

Anyone who is interested needs to understand how proxy data is used. Proxy data can be risky, depending on the strength of the relationship: if the relationship breaks down, your model breaks down. And it must make sense, which is why human judgment is still required. Using credit card data as a proxy for consumer spending seems reasonable. Using that same data as a proxy for R&D spending at a nuclear energy company may not be.
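
As a rough illustration of how such a proxy gets used (a deliberately simple stand-in, not the paper's actual method, with made-up data and numbers throughout), you can fit a linear relation between a quarter's summed weekly credit-card totals and its reported revenue, then push the current quarter's partial card data through that fit to nowcast revenue:

Code:
import numpy as np

# Hypothetical example: past quarters' summed weekly credit-card totals (the proxy)
# and the revenue the company actually reported for those quarters.
card_totals  = np.array([41.0, 44.5, 47.2, 43.1, 45.8, 49.0, 52.3, 48.7])  # $M
reported_rev = np.array([310., 335., 356., 324., 345., 370., 395., 366.])  # $M

# Fit reported revenue as a linear function of the card proxy (ordinary least squares).
slope, intercept = np.polyfit(card_totals, reported_rev, deg=1)

# Nowcast: partway through the current quarter, scale the weekly run-rate up to a
# full quarter and push it through the fitted relation.
weeks_observed   = 8
weekly_card_sum  = 31.5                      # $M observed so far this quarter
full_quarter_est = weekly_card_sum * 13 / weeks_observed
nowcast_revenue  = slope * full_quarter_est + intercept
print(f"Nowcast revenue: ${nowcast_revenue:.0f}M")

# The risk described above: if the card-to-revenue relationship drifts (new payment
# channels, changing mix), the fitted slope/intercept quietly stop being valid.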
"Lethargy bordering on sloth remains the cornerstone of our investment style."
----------------------------------------------------------------------------------------
brk.b | goog | irm | lyv | net | nlsn | pm | t | tfsl | v | wfc | xom

Jurgis

  • Hero Member
  • *****
  • Posts: 5259
    • Portfolio
Somewhat on topic here:
http://news.mit.edu/2019/model-beats-wall-street-forecasts-business-sales-1219

Quote
Tasked with predicting quarterly earnings of more than 30 companies, the model outperformed the combined estimates of expert Wall Street analysts on 57 percent of predictions. Notably, the analysts had access to any available private or public data and other machine-learning models, while the researchers’ model used a very small dataset of the two data types.

 In a paper published this week in the Proceedings of ACM Sigmetrics Conference, the researchers describe a model for forecasting financials that uses only anonymized weekly credit card transactions and three-month earning reports.

“Alternative data are these weird, proxy signals to help track the underlying financials of a company,” says first author Michael Fleder, a postdoc in the Laboratory for Information and Decision Systems (LIDS). “We asked, ‘Can you combine these noisy signals with quarterly numbers to estimate the true financials of a company at high frequencies?’ Turns out the answer is yes.”

Anyone who is interested needs to understand how proxy data is used. Proxy data can be risky, depending on the strength of the relationship: if the relationship breaks down, your model breaks down. And it must make sense, which is why human judgment is still required. Using credit card data as a proxy for consumer spending seems reasonable. Using that same data as a proxy for R&D spending at a nuclear energy company may not be.

I was just going to post this.  8)

The other interesting part of that article:

Quote
Counterintuitively, the problem is actually lack of data. Each financial input, such as a quarterly report or weekly credit card total, is only one number. Quarterly reports over two years total only eight data points. Credit card data for, say, every week over the same period is only roughly another 100 “noisy” data points, meaning they contain potentially uninterpretable information.

This paragraph shows why it's tough to build models that do fundamental analysis. Even if you take the 500 companies in the S&P 500 and 10 years of annual reports, you have 5K data points. Even if you take quarterlies, you still have only 20K data points. Compare that to image classification datasets that run into millions of examples.
Since you likely can't build a single model for companies in different sectors (e.g. it's unlikely that an E&P and a retailer will share characteristics), the number of data points drops even further.

That's one of the reasons quants don't do a lot of fundamental analysis.

Value investors tend to think that they have an edge over algos because machines can't handle business fundamentals. It's quite possible that machines can handle business fundamentals just fine - given enough data. If quants can figure out how to get enough data, or how to build models that require less data, look out.  8)
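
A quick back-of-the-envelope comparison makes the scale problem visible. The layer sizes below are hypothetical, chosen only to show that even a toy neural network has roughly as many parameters as there are fundamental data points:

Code:
# Rough scale comparison: fundamental data points vs. model parameters.
companies       = 500          # S&P 500 constituents
quarters        = 4 * 10       # 10 years of quarterly reports
fundamental_pts = companies * quarters
print(fundamental_pts)         # 20,000 company-quarters

# Parameters of a small fully connected net (hypothetical sizes): 50 input
# features -> 128 -> 64 -> 1 output, counting weights plus biases per layer.
layers = [50, 128, 64, 1]
params = sum(a * b + b for a, b in zip(layers, layers[1:]))
print(params)                  # 14,849 parameters

# Even this toy network has roughly as many parameters as there are data points,
# before splitting by sector - versus image datasets with millions of examples.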
"Human civilization? It might be a good idea." - Not Gandhi
"Before you can be rich, you must be poor." - Nef Anyo
"Money is an illusion" - Not Karl Marx
--------------------------------------------------------------------
"American History X", "Milk", "The Insider", "Dirty Money", "LBJ"

LC

  • Hero Member
  • *****
  • Posts: 4583
I would actually disagree with you there. I think the problem is that these modellers are demanding precision rather than accuracy.

The entire point of statistical modelling is to use a sparse number of data points to create a generalized model. Go back to Stats 101 and the sample-size problem. What is the generally accepted minimum number of samples? It is 25 or 30. At 200 points you can reach significance at 99% confidence.

I have some coworkers from medical research - we used 50, 100 samples to draw medical conclusions back then... and now portions of the bank claim they can't build a sufficiently accurate model due to lack of data when they have datasets in the thousands.

The tradeoff is that you can use 500 data points to create a generalized model, but it will lack precision. Or you can build a model with 500,000,000 data points (we have them - do not believe anyone who says they lack data unless it is risk-specific), but it lacks the ability to generalize over time.

Modellers try to take the best of both worlds with various methods to reduce overfitting (you can google the regularization methods), but IMHO there is only one true method, which is intuition - and this currently cannot be modelled, or at least I am not aware of how.
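
For what it's worth, here is a minimal sketch (synthetic data, NumPy only) of the tradeoff described above: on a small, noisy sample with many candidate features, a plain least-squares fit chases noise, while a ridge penalty - one of the regularization methods mentioned - shrinks the coefficients and usually generalizes better out of sample:

Code:
import numpy as np

rng = np.random.default_rng(0)

# Small, noisy sample: 30 observations, 20 candidate features, only 2 of which matter.
n, p = 30, 20
X = rng.normal(size=(n, p))
true_beta = np.zeros(p)
true_beta[:2] = [2.0, -1.5]
y = X @ true_beta + rng.normal(scale=1.0, size=n)

def fit(X, y, ridge_lambda=0.0):
    """Closed-form least squares; ridge_lambda > 0 adds an L2 penalty."""
    k = X.shape[1]
    return np.linalg.solve(X.T @ X + ridge_lambda * np.eye(k), X.T @ y)

beta_ols   = fit(X, y, ridge_lambda=0.0)
beta_ridge = fit(X, y, ridge_lambda=5.0)

# Compare on fresh data drawn from the same process (out-of-sample error).
X_new = rng.normal(size=(1000, p))
y_new = X_new @ true_beta + rng.normal(scale=1.0, size=1000)
for name, b in [("OLS", beta_ols), ("ridge", beta_ridge)]:
    mse = np.mean((y_new - X_new @ b) ** 2)
    print(f"{name:>5}: out-of-sample MSE = {mse:.2f}")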
"Lethargy bordering on sloth remains the cornerstone of our investment style."
----------------------------------------------------------------------------------------
brk.b | goog | irm | lyv | net | nlsn | pm | t | tfsl | v | wfc | xom

cameronfen

  • Hero Member
  • *****
  • Posts: 670
I would actually disagree with you there. I think the problem is that these modellers are demanding precision rather than accuracy.

The entire point of statistical modelling is to use a sparse number of data points to create a generalized model. Go back to Stats 101 and the sample-size problem. What is the generally accepted minimum number of samples? It is 25 or 30. At 200 points you can reach significance at 99% confidence.

I have some coworkers from medical research - we used 50, 100 samples to draw medical conclusions back then... and now portions of the bank claim they can't build a sufficiently accurate model due to lack of data when they have datasets in the thousands.

The tradeoff is that you can use 500 data points to create a generalized model, but it will lack precision. Or you can build a model with 500,000,000 data points (we have them - do not believe anyone who says they lack data unless it is risk-specific), but it lacks the ability to generalize over time.

Modellers try to take the best of both worlds with various methods to reduce overfitting (you can google the regularization methods), but IMHO there is only one true method, which is intuition - and this currently cannot be modelled, or at least I am not aware of how.

You can model things very close to intuition, but you need highly non-linear models that require a lot of data points.  For trading on information from 10-Qs, I can assure you a huge problem is data availability.  I'm not sure what you mean by lacking precision versus generalization - are you talking about the bias-variance tradeoff?  The bias-variance tradeoff says that a more nuanced model requires more data to generalize well, so a 500-million-data-point model should generalize very well unless your model complexity is very high.

Additionally, idk if they just use deep learning (I'm pretty confident they do), but traditional regularization methods are not used as often anymore.  The big thing is fancy data augmentation techniques.  It's not your parents' linear regression with regularization that RenTech is using, and the ability to use these models gives them a huge advantage over models that can only train on 500 to 5,000 data points, like regularized regression.
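
As a toy illustration of the kind of data augmentation being referred to (a generic time-series trick with made-up numbers, not anything RenTech is known to use), a small set of return windows can be expanded many-fold by jittering each window with noise and slightly rescaling it before training:

Code:
import numpy as np

rng = np.random.default_rng(1)

def augment_windows(windows, copies=10, noise_scale=0.001, max_rescale=0.05):
    """Expand a small set of return windows by adding noise and random rescaling.

    windows : array of shape (n_samples, window_length) of returns
    Returns an array with n_samples * copies augmented windows.
    """
    out = []
    for w in windows:
        for _ in range(copies):
            noise = rng.normal(scale=noise_scale, size=w.shape)
            scale = 1.0 + rng.uniform(-max_rescale, max_rescale)
            out.append(w * scale + noise)
    return np.array(out)

# 500 original windows of 60 daily returns -> 5,000 augmented training samples.
original  = rng.normal(scale=0.01, size=(500, 60))
augmented = augment_windows(original)
print(original.shape, "->", augmented.shape)   # (500, 60) -> (5000, 60)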

Jurgis

  • Hero Member
  • *****
  • Posts: 5259
    • Portfolio
I would actually disagree with you there. I think the problem is that these modellers are demanding precision rather than accuracy.

The entire point of statistical modelling is to use a sparse number of data points to create a generalized model. Go back to Stats 101 and the sample-size problem. What is the generally accepted minimum number of samples? It is 25 or 30. At 200 points you can reach significance at 99% confidence.

I have some coworkers from medical research - we used 50, 100 samples to draw medical conclusions back then... and now portions of the bank claim they can't build a sufficiently accurate model due to lack of data when they have datasets in the thousands.

The tradeoff is that you can use 500 data points to create a generalized model, but it will lack precision. Or you can build a model with 500,000,000 data points (we have them - do not believe anyone who says they lack data unless it is risk-specific), but it lacks the ability to generalize over time.

Modellers try to take the best of both worlds with various methods to reduce overfitting (you can google the regularization methods), but IMHO there is only one true method, which is intuition - and this currently cannot be modelled, or at least I am not aware of how.

I believe we are talking about different things. You are talking about statistical curve fitting. I am talking about DNNs that can model and generalize real-world information and deal with the high number of factors influencing corporate results going forward. Curve fitting is the reason Wall Street analyst predictions are subpar and also why most investors underperform. Most of them expect the future to look like the past - which is what curve fitting is.

People who outperform are:
1. People who have a higher-accuracy model (whether hand-built or ML/automatic).
2. People who make longer-term predictions than others.

If the future looks like the past, nobody can outperform simple curve fitting on 1. or 2. So people can outperform only if curve fitting is wrong. Determining that it is wrong can be based on real-world knowledge, second-order thinking, intuition, whatever. And these can be ML/DNNed if sufficient data were available. And sufficient data here is way more than what's needed for curve fitting.

I am not sure what you are talking about when you say "you can build a model with 500,000,000 data points" - no, you cannot. There are not enough companies on Earth to have that many data points. You can do that for price data, but not for fundamental data like yearly sales/profits/etc. There's a reason people build DNNs on data that's available daily, or even better, every (nano/micro/milli)second. But that excludes most fundamental data.

* People can also outperform by choosing an area where competition is low and their models don't have to compete with competent curve fitters.
** People and algos can also outperform by exploiting (psychological/emotional/technical/etc.) drawbacks of other actors. I'm not talking about this now though, even though it's a fascinating area on its own.
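
To pin down what "expecting the future to look like the past" means as a baseline, here is a minimal sketch with made-up quarterly sales figures of the two naive curve fits any forecaster - human or DNN - has to beat: carry the last value forward, or extrapolate the recent linear trend:

Code:
import numpy as np

# Hypothetical quarterly sales history ($M).
sales = np.array([100., 104., 107., 111., 116., 118., 123., 127.])

# Baseline 1: "future looks exactly like the past" - repeat the last value.
naive_forecast = sales[-1]

# Baseline 2: extrapolate a linear trend fit to the history (simple curve fitting).
t = np.arange(len(sales))
slope, intercept = np.polyfit(t, sales, deg=1)
trend_forecast = slope * len(sales) + intercept

print(f"last-value forecast:   {naive_forecast:.1f}")
print(f"linear-trend forecast: {trend_forecast:.1f}")

# A forecaster only adds value where the realized number differs from these
# baselines in a way they predicted - e.g. a turnaround or an inflection point.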

Edit: For fun and clarity, I'll classify how I see some investors:
- Graham cigar-butt investing: Mostly expecting the future to differ from the past.
- Growth investing: Mostly expecting the company to grow for longer than others expect.
- Buffett: higher-accuracy model and longer predictions than others.
- Writser  ;) : choose an area where competition is low and you don't have to compete with ...
All of the above (may) exploit the drawbacks of other actors:
- Graham cigar-butt investing: exploit others giving up on an underperforming company.
- Growth investing: exploit others undervaluing a growth company even when the growth is known.
- Buffett: exploits the heck out of the irrationality of other actors.
- Writser  ;) : exploits the behavior of limited set of actors in special situations.
« Last Edit: December 28, 2019, 09:58:58 PM by Jurgis »
"Human civilization? It might be a good idea." - Not Gandhi
"Before you can be rich, you must be poor." - Nef Anyo
"Money is an illusion" - Not Karl Marx
--------------------------------------------------------------------
"American History X", "Milk", "The Insider", "Dirty Money", "LBJ"

hasilp89

  • Newbie
  • *
  • Posts: 12
I read the book recently and have been thinking about the same thing. Clearly there is more than one way to skin a cat. Good point by someone on here that his returns are fueled by leverage. One way the book lays out their strategy: it's like flipping a coin with 50.5% odds and betting heavily enough, and often enough, that the 0.5% edge pays off.

Regarding the comparison of returns with Buffett and others, I think it is misleading (but I would like to be held to account on this). From what I've read, they return money every year, so their returns aren't compounded; they can't do what they're doing with a larger capital base, and if they tried, their returns would diminish significantly. Then they'd be in line with, or lower than, everyone else's.

Buffett has always said that if he only had a couple million he could do 50% a year. By returning money every year, that is essentially what Renaissance is doing.
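
The 50.5% coin-flip framing is easy to check with a quick simulation (hypothetical bet counts and sizes, nothing from the book): with a tiny per-bet edge, a very large number of small, roughly independent bets turns an almost-coin-flip into a near-certain profit, while the same edge over only a hundred bets is still close to a toss-up.

Code:
import numpy as np

rng = np.random.default_rng(2)

p_win, n_bets, n_trials = 0.505, 100_000, 1_000
stake = 1.0   # risk 1 unit per bet, win or lose 1 unit

# Simulate many "years", each consisting of n_bets independent 50.5% bets.
wins = rng.binomial(n_bets, p_win, size=n_trials)
pnl = stake * (wins - (n_bets - wins))          # net units won per trial

print(f"mean P&L per trial:  {pnl.mean():.0f} units")   # ~ n_bets * (2p - 1) = ~1000
print(f"fraction profitable: {(pnl > 0).mean():.3f}")   # close to 1.0

# The same 0.5% edge over only 100 bets is nearly a coin flip at the portfolio level:
few = rng.binomial(100, p_win, size=n_trials)
print(f"profitable with 100 bets: {((2 * few - 100) > 0).mean():.2f}")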