Here's a statement of the obvious: The opinions expressed here are those of the participants, not those of the Mutual Fund Observer. We cannot vouch for the accuracy or appropriateness of any of it, though we do encourage civility and good humor.
Thanks for the informative reference. I enjoyed it.
The author identified yet another source of forecaster incompetence. In this instance the culprit was CNBC, which attempted to list the 10 most likely mutual fund winners for 2012. Although we are only at mid-year, the CNBC staff has already joined the ranks of average prognosticators, and that is a generous assessment of their forecast results to date.
No surprise in that outcome. Market and investment forecasters have a difficult challenge to achieve even mediocre prediction accuracy. They certainly never attended the campfire lectures at fictional Lake Wobegon. The struggle to reach mediocrity seems to be ubiquitous within that industry. Your reference contributes added evidence that makes the mosaic more complete. It reinforces a theme I have been ranting about in recent postings.
For example, see my posting on “Illusive Performance Persistence”:
http://www.mutualfundobserver.com/discussions-3/#/discussion/comment/12243
Most recently, I posted on a DALBAR report that demonstrated how badly the average private investor performed relative to simple benchmarks. See my submittal titled “DALBAR Reveals Investor Shortcomings”:
http://www.mutualfundobserver.com/discussions-3/#/discussion/3401/dalbar-reveals-investor-shortcomings
Your mutual fund reference source correctly summarizes that forecasters avoid scorekeeping. But others do. I particularly like CXO Advisory Group for that necessary chore. CXO continuously assesses the accuracy of a broad spectrum of market gurus and experts. For convenience, here is the address for CXO's ongoing tally:
http://www.cxoadvisory.com/gurus/
The Guru scores are all over the map. On average they do no better than a fair coin-toss. To be a true market wizard, the standard should be at least two-thirds correct predictions. Almost all the listed experts fail that test; a few marginally exceed the criterion and a few more come tantalizingly close. Many have been near the bottom of the heap for as long as I have accessed this listing.
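To illustrate why the two-thirds standard is demanding, here is a minimal sketch in Python of the underlying binomial arithmetic. The call counts are hypothetical numbers of my own invention, not figures from CXO's tally; the sketch simply computes the probability that a pure coin-flipper would match or beat a given record by luck alone.

    from math import comb

    def chance_of_record(hits, calls, p=0.5):
        """Probability that a coin-flipping forecaster (accuracy p) gets
        at least `hits` of `calls` predictions right by luck alone."""
        return sum(comb(calls, k) * p**k * (1 - p)**(calls - k)
                   for k in range(hits, calls + 1))

    # Hypothetical record of 60 market calls:
    print(chance_of_record(33, 60))  # 55% accuracy: roughly a 1-in-4 chance by luck alone
    print(chance_of_record(40, 60))  # two-thirds accuracy: well under 1 in 100

The point is simply that a modestly above-chance record over a few dozen calls is no evidence of skill, while two-thirds accuracy over the same span would be hard to dismiss as luck.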
I am underwhelmed by their dismal performance record, even before persistence is added as the next measurement criterion. Wisdom and foresight are nonexistent commodities among the market wizard cohort.
Thank you for providing extra data to bolster my position that we individual investors serve ourselves as well as self-proclaimed gurus do when forecasting future market turns. Nobody really has a crystal ball; we all work from an incomplete database with biased perceptions and flawed instincts. The best we can do is to control wild impulses and be prudent money disciplinarians. Luck is a major player in this uncertain arena.
Best Regards.