Here's a statement of the obvious: The opinions expressed here are those of the participants, not those of the Mutual Fund Observer. We cannot vouch for the accuracy or appropriateness of any of it, though we do encourage civility and good humor.
And no, I don’t read Zero Hedge; a trader friend sent me the link. A bit of a disclaimer: Mr. England and I have presented a number of investment seminars and college lectures together.
Comments
Regards,
Ted
Right off the top of my head, I don't know what "Vols" means.
Regards,
Flack
Regards,
Ted
Howdy AKAFlack. It’s good to hear from you once again. I hope your marketplace charting methods are still generating superior returns and/or superior Sharpe Ratios for you.
Somehow I am not shocked, nor even mildly surprised, by the Kosmo Cramer findings that you reported (yes, Kosmo, a name secretly withheld for a time). They are consistent with the longer-period scorecard of his forecasting accuracy assembled in the CXO Advisory Group study. In that study, he registered a middle-of-the-road 47% accuracy rating.
I like to divide forecasters into three groups. In round numbers (the data, its consistency, and its interpretation don’t warrant more accuracy), CXO discovered that 10% of forecasters demonstrate some talent, 20% are simply false prophets at that task, and the largest cohort, 70%, are no better than monkeys tossing darts at a target. Cramer is a proven member of that dart-throwing cohort. Personally, I don’t like the man because he is reminiscent of the Old West’s snake oil salesmen. He does harm.
Super forecasters do exist. The CXO data support that conclusion, as does far more extensive work in this arena by researcher Phil Tetlock. Here is a link to a brief summary of the Tetlock studies:
http://www.validea.com/blog/15147-2/
Note that the number-one task is to find the right forecasting people. Integrating a number of these prescient forecasters into a functional team will improve the forecasting scorecard.
Typically, these “right people” are humble, do not predetermine positions, are reflective, and are somewhat mathematically inclined (think in terms of probabilities). They also are persistent folks who don’t abandon ship quickly when challenged. They are diggers.
I suppose many MFOers satisfy those qualifications. So, how come many of our forecasts don’t make it successfully across the finish line? One shortcoming is that many of our forecasts do not include a specified closure date. Without that date stamp, a forecast is incomplete and cannot be scored.
Edit: In forming my three groups of forecasters, I used an accuracy of 60% or above to identify super forecasters and a cutoff of under 40% to classify failed forecasters. I considered those in the 40% to 60% range simply coin tossers. Sorry for my omission.
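For anyone inclined to keep a personal scorecard, here is a minimal Python sketch of the bookkeeping I have in mind, using the cutoffs above; the forecasts, their closure dates, and their outcomes are invented purely for illustration, not taken from CXO or Tetlock:

    # A minimal sketch of scoring dated forecasts with the stated cutoffs:
    # >= 60% accuracy = super forecaster, < 40% = failed forecaster,
    # everything in between = coin tosser. Example forecasts are hypothetical.
    from datetime import date

    def classify(accuracy):
        if accuracy >= 0.60:
            return "super forecaster"
        if accuracy < 0.40:
            return "failed forecaster"
        return "coin tosser"

    # A forecast without a closure date is incomplete and cannot be scored.
    forecasts = [
        {"call": "S&P 500 up by year end", "closes": date(2016, 12, 31), "correct": True},
        {"call": "Recession within 12 months", "closes": date(2016, 6, 30), "correct": False},
        {"call": "Rates will rise eventually", "closes": None, "correct": None},  # unscorable
    ]

    scorable = [f for f in forecasts if f["closes"] is not None]
    hits = sum(1 for f in scorable if f["correct"])
    accuracy = hits / len(scorable)
    print(f"Scorable: {len(scorable)} of {len(forecasts)}, "
          f"accuracy {accuracy:.0%} -> {classify(accuracy)}")

Run on those made-up entries, it reports two of three forecasts as scorable, a 50% accuracy, and a "coin tosser" verdict; the third forecast, lacking a closure date, never enters the score at all.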
Best Wishes.
It’s good to hear from you as well. Thanks for the Tetlock studies.
I’m doing well… and still charting the markets, although with less interest since I’m not on the glide path and have been enjoying retirement for quite a while.
These days it’s more about arranging my sock drawer and counting age spots.
My best wishes to you and your family,
Flack