
Paper Review - A Census of the Factor Zoo

· One min read
Hinny Tsang
Data Scientist @ Pollock Asset Management

Just saw a post on Threads suggesting this paper, and I found it very interesting. The authors claim that almost all past research fails to tackle the multiple testing problem, so many published factors probably appear 'significant' by chance.

The authors list 382 factors published in top journals and point out that:

  • Papers with positive results tend to be cited more.
  • High-quality journals (i.e., those with high impact factors) usually require positive results for publication.

Key points of the paper:

  1. The more factors that are tested, the more likely some appear significant purely by chance.
  • This is known as the multiple comparisons problem.

The more you look, the more likely you are to find something that looks like a signal, even when there is no real signal there (see the simulation sketch after this list).

  2. File drawer effect:
  • Researchers shelve papers they are not excited about (i.e., those with negative results).
  3. The acceptance rate (significance hurdle) for new factors should be reviewed, but the added complexity of doing so needs to be considered.
  4. Academic publications sometimes ignore transaction costs.
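
To make the multiple comparisons point concrete, here is a minimal simulation sketch (my own illustration, not from the paper): it regresses a random return series on 382 pure-noise "factors" and counts how many clear the usual |t| > 2 hurdle by luck alone. The sample length, return volatility, and the stricter |t| > 3 comparison hurdle are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n_months = 240    # 20 years of monthly data (illustrative assumption)
n_factors = 382   # same number of factors as the census in the paper

# Pure-noise setup: the return series and the candidate "factors" are
# independent draws, so by construction no factor has real explanatory power.
returns = rng.normal(0.0, 0.04, size=n_months)
factors = rng.normal(0.0, 1.0, size=(n_factors, n_months))

def slope_t_stat(y: np.ndarray, x: np.ndarray) -> float:
    """t-statistic of the slope in a univariate OLS regression of y on x."""
    x = x - x.mean()
    y = y - y.mean()
    beta = (x @ y) / (x @ x)
    resid = y - beta * x
    se = np.sqrt((resid @ resid) / (len(y) - 2) / (x @ x))
    return beta / se

t_stats = np.array([slope_t_stat(returns, f) for f in factors])

print("noise factors passing |t| > 2:", np.sum(np.abs(t_stats) > 2.0))
print("noise factors passing |t| > 3:", np.sum(np.abs(t_stats) > 3.0))
# With 382 independent tests and a ~5% per-test false-positive rate,
# roughly 0.05 * 382 ≈ 19 noise factors look "significant" at |t| > 2,
# while a stricter hurdle such as |t| > 3 filters most of them out.
```

Around 19 of the 382 noise factors are expected to pass the conventional hurdle by chance, which is exactly the multiple-testing concern described above.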