Episode 008 – Dr. Tom Starke

Dr. Tom Starke – From Physics PhD to Quant Trading Virtuoso – AlphaCast Ep.8

Have you ever wondered what happens when a physics PhD collides with the high-stakes world of algorithmic trading? Join us as Dr. Tom Starke, a former academic turned quant trader, shares his extraordinary pivot into finance. Drawing on a background in computer simulations, he now builds fully automated trading strategies, and he’s here to peel back the curtain on his journey from skepticism to success.

Key Takeaways

  • Simplicity often beats complexity in trading strategies: the best ones are simple concepts combined thoughtfully, not overly complex black boxes.
  • Diversification is key – combining multiple uncorrelated trading strategies improves risk-adjusted returns more than trying to find one perfect strategy (see the sketch after this list).
  • When building strategies, separate the signal generation, portfolio construction, and trade execution components for clarity. Don’t mix everything together.
  • Platforms can be useful but building systems from scratch in Python forces you to deeply understand each component.
  • AI and machine learning are powerful tools but need to support an investment thesis, not drive the whole strategy. Models have limits.
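
To make the diversification and separation-of-concerns points concrete, here is a minimal sketch (simulated returns and hypothetical numbers, not code from the episode): two uncorrelated strategies with the same stand-alone edge are blended by a separate portfolio-construction step, and the combination earns a noticeably better risk-adjusted return than either strategy alone.

    import numpy as np

    rng = np.random.default_rng(42)
    n_days = 2520  # roughly ten years of daily returns

    def sharpe(returns, periods_per_year=252):
        # Annualised Sharpe ratio; risk-free rate assumed zero for simplicity.
        return np.sqrt(periods_per_year) * returns.mean() / returns.std()

    # Signal generation: two hypothetical strategies with the same modest edge
    # but statistically independent daily returns (say, trend vs. mean reversion).
    strat_a = 0.0004 + 0.01 * rng.normal(size=n_days)
    strat_b = 0.0004 + 0.01 * rng.normal(size=n_days)

    # Portfolio construction, kept separate from the signals themselves:
    # the simplest possible combination, an equal-weight blend.
    combined = 0.5 * strat_a + 0.5 * strat_b

    print(f"Sharpe A:        {sharpe(strat_a):.2f}")
    print(f"Sharpe B:        {sharpe(strat_b):.2f}")
    print(f"Sharpe combined: {sharpe(combined):.2f}")

With zero correlation, the blend keeps the same mean return while its volatility shrinks by a factor of sqrt(2), so the combined Sharpe ratio comes out roughly 1.4 times either stand-alone figure – and the less correlated the strategies, the bigger the improvement.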

Quotes

“Rather than trying to find the one strategy that rules them all, you may consider actually, you know, not building or not trying to find that amazing strategy, but actually combining a few strategies and getting better performance that way.”

Dr. Tom Starke

“The simplicity comes from your, from your understanding of all the things that don’t work.”

Dr. Tom Starke

Timestamps

  • [0:00] Algorithmic Trading and Trading Strategies
  • [17:51] AI Trading Strategies and Portfolio Management
  • [32:32] Algorithmic Trading
  • [37:24] Financial Markets and Human Ingenuity
  • [41:43] Quantopian’s Return and YouTube Channel
  • [49:08] Discussion on Interesting Papers and Value

Alpha Drop

Tom discusses a recent discovery in machine learning called “double descent”: as you add more features to a model, test performance first declines due to overfitting, but once the number of features grows beyond the number of training samples, performance surprisingly improves again.

This counterintuitive finding has recently been replicated with financial models – large models with many weakly predictive features can outperform smaller, more parsimonious ones.

The reasons for this are not yet fully understood, but the finding runs against the conventional wisdom that simpler models tend to perform better.

For those interested in AI/ML, double descent is an intriguing new concept worth looking into further.
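
For the curious, the effect is easy to reproduce in a toy setting. The sketch below is my own illustration, not code from the episode: it fits a minimum-norm least-squares model on random ReLU features, a standard setting where double descent appears. As the feature count grows, test error typically rises towards the interpolation threshold (features ≈ training samples) and then falls again as the model becomes heavily over-parameterised.

    import numpy as np

    rng = np.random.default_rng(0)
    n_train, n_test, d = 50, 500, 30

    # Synthetic regression task: a noisy linear ground truth in d dimensions.
    w_true = rng.normal(size=d)
    X_train = rng.normal(size=(n_train, d))
    X_test = rng.normal(size=(n_test, d))
    y_train = X_train @ w_true + 0.5 * rng.normal(size=n_train)
    y_test = X_test @ w_true

    def relu_features(X, W):
        # Fixed random ReLU features of the raw inputs.
        return np.maximum(X @ W, 0.0)

    for p in [5, 10, 25, 45, 50, 55, 75, 100, 200, 400, 800]:
        W = rng.normal(size=(d, p)) / np.sqrt(d)
        F_train, F_test = relu_features(X_train, W), relu_features(X_test, W)
        # Minimum-norm least-squares fit; it interpolates the training data
        # exactly once p >= n_train, which is where test error usually peaks.
        beta = np.linalg.pinv(F_train) @ y_train
        mse = np.mean((F_test @ beta - y_test) ** 2)
        print(f"features={p:4d}  test MSE={mse:10.3f}")

Exact numbers vary with the random seed, but the printed test MSE typically spikes near features = 50 (the training-set size) and then drops steadily as the feature count climbs into the hundreds.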
