I would be interested to hear what people here think of Morningstar's new analyst ratings. As of 12/31/11 they have rated 410 funds, yet only 17 have received "Negative" ratings. It seems there is a massive positive bias. Morningstar's response is that it will even out over time... however, they have 21 index funds with "medal" ratings, which are supposed to be reserved for funds with "sustainable advantages over a relevant benchmark." If somebody can explain how an index fund can have a sustainable advantage over the benchmark it is tracking, I am all ears. By Morningstar's own definition, the cheapest index funds should get a "Neutral" rating: "Fund that isn’t likely to deliver standout returns, but also isn’t likely to significantly underperform." Yet not one index fund has been rated lower than Bronze.
You can see their full listing of rated funds (as of 12/31/11), as well as a breakdown of the current distribution of ratings, here:
http://www.wallstreetrant.com/2012/01/morningstar-ramps-up-analyst-ratings.html
Comments
Thanks, glad you liked it!
To date, of the 167 funds we currently track closely, only 34 have analyst ratings: 20 rated Gold, 11 Silver, and 3 Bronze. A few are surprising, and a couple are very strange. How the individual M* pillar ratings equate to the overall analyst rating is certainly not consistent. Five Positive pillars do not guarantee Gold, and one Neutral or Negative rating might still yield a Gold. So, on the surface at least, the star rating appears to be more consistent (even if a fund is stuck in the wrong asset class, which is more common than we might think). Clearly the personalities of the analysts and their biases will influence their ratings. That will be good or not so good. My experience is that some of the analysts really do not have a good grasp of some of the funds they review. But I guess that will happen whether it's M* or someone else. Make no mistake, though: this will be used for marketing purposes.
That said, forward-looking ratings must be subjective at some point.
The 5+ pillars not equaling Gold doesn't bother me as much as it does others. Suppose the +/0/- ratings equate to underlying "scores" of 67-100 (+), 35-66 (neutral), and 0-34 (-). Then it's easy to see a fund with five +'s getting a total "score" of 350 (5 x 70), and a fund with four strong +'s and one negative getting a score of 400+, well above that.
I'm pretty sure I read somewhere on M* (not going back to look for it now) that they were looking at the fund overall, and that it wasn't a matter of adding up the number of pluses. It sounded to me more like what I described above: dividing into thirds (or tertiles, if one prefers) doesn't provide adequate granularity to infer a total "score".
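To make that concrete, here is a minimal sketch of the idea. It is purely illustrative: the breakpoints, the hidden per-pillar scores, and the equal weighting are my own assumptions, not Morningstar's published methodology.

# Purely illustrative: the 67-100 / 35-66 / 0-34 breakpoints and the hidden
# per-pillar scores below are assumptions, not Morningstar's published method.

def total_score(pillar_scores):
    """Sum the hidden per-pillar scores across the five pillars."""
    return sum(pillar_scores)

# Fund A: five Positive pillars, but each just barely over the "+" threshold.
fund_a = [70, 70, 70, 70, 70]      # five "+" labels, total 350

# Fund B: four strong Positives and one Negative.
fund_b = [100, 100, 100, 100, 10]  # four "+" and one "-", total 410

print(total_score(fund_a))  # 350
print(total_score(fund_b))  # 410 -- higher overall despite the Negative pillar

So a fund with a Negative pillar could plausibly outscore one with five Positives, which would explain the seemingly inconsistent overall medals.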
"Beware of false knowledge; it is more dangerous than ignorance." -- George Bernard Shaw
Also, with regard to the marketing... yes, I think it will definitely be used for marketing purposes, especially by the fund companies receiving the ratings. It will be interesting to see how this affects flows; I think the effect will be rather dramatic, unfortunately. Although, for funds with good ratings, it may help retain assets that would otherwise leave just because the "star" rating went down.
Fortunately there aren't actually 26,000 individual mutual funds, unless you were to count each individual share class of every fund. But point taken! You mean you can't monitor the inner workings of over 8,400 different funds? But why not?
Given how poorly most funds performed during the downturn, I think the new ratings are simply being rolled out now to provide, as suggested above, another metric that M* can market (and funds can include in advertising) even if the "traditional" star ratings are poor or unexceptional.
I am a subscriber to M*'s print products, which prompts an additional critique. On the detailed print pages, the new ratings replace several blocks of descriptions that summarized the people at, and the process followed by, each fund. Frankly, I found the old (displaced) information much more useful than the new ratings.
Also, the new ratings could have been presented compactly on the printed page, but M* chose NOT to do so, wasting a lot of space. I think they did this to make the whole process more opaque and mysterious.
But the key observation is that they have produced an apparently wise metric that can be sold or licensed to funds and used in advertising, even if a fund's historical "risk-adjusted" performance is unremarkable.
"When the Lord hands you lemons, make lemonade." - This is Morningstar's "lemonade". Drink up!
First an honest disclaimer: I am not an expert on any Morningstar rating system.
However, I have used the earlier Morningstar "Star" ratings as a partial negative input into my mutual fund buying decisions. I am only loosely familiar with their evolving Olympic medal award system.
I used the Star ratings to eliminate candidate funds when they sported a one- or two-star rating; I retained for further assessment those funds that were given a three- to five-star rating. Accumulated practical data from Morningstar user studies reported that the lower-rated funds indeed produced lower future returns, whereas the upper echelon of rated funds tended to deliver average to above-average rewards but freely migrated within the upper three ratings.
The Star system has matured over time. When it was originally introduced, it only compared equity returns to the S&P 500 Index, a very flawed benchmark for international and small cap funds. Over the years, Morningstar has significantly improved their product.
The Star system is one-dimensional and formulaic in construction. Morningstar has always cautioned users that it is backward looking. The number ratings are assigned assuming a Bell curve distribution of past performance: the curve is segmented into five symmetric sections, one per star level. So the number of 5-star ratings granted should equal the number of 1-star ratings, and the funds stuck at the bottom typically suffer outflows as a penalty for that rating.
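For what it's worth, the published star breakpoints are, as I understand them, roughly 10 / 22.5 / 35 / 22.5 / 10 percent within each category. Here is a small sketch of that symmetric assignment; treat the numbers as an approximation, not gospel.

def stars_from_percentile(pct):
    """Map a fund's category percentile rank (0 = worst, 100 = best) to stars."""
    if pct >= 90.0:
        return 5   # top 10%
    elif pct >= 67.5:
        return 4   # next 22.5%
    elif pct >= 32.5:
        return 3   # middle 35%
    elif pct >= 10.0:
        return 2   # next 22.5%
    else:
        return 1   # bottom 10%

# Symmetry means the 5-star and 1-star buckets are the same size.
for p in (95, 75, 50, 20, 5):
    print(p, stars_from_percentile(p))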
I suspect Morningstar recognized these limitations, and their Olympic award scheme is an attempt to enhance their overall system's robustness. The Olympic formula is less rigorous since it incorporates evaluators' opinions and has five evaluation dimensions. For example, fund management tenure is now an integral part of the Morningstar assessment.
Morningstar will deploy their Olympic awards as a supplement to their Star rankings; it will not replace the Stars. Yes, it should contribute to the firm's bottom line.
Morningstar has always had a difficult time explaining why their 5-star funds did not remain 5-star in subsequent periods. Lack of performance persistence is a critical shortcoming in the history of most mutual funds. That dismal finding suggests that mutual fund managers are often more lucky than skilled in assembling their portfolios and in their timing decisions. Morningstar has suffered the same fate with their top-tier selections and their manager-of-the-year awards. Predicting the future is hazardous to your wealth and your reputation.
Standard and Poor's has assessed fund management persistence, and the relative returns delivered by active management contrasted with their passive counterparts, for many years. Their reports on the matter are published several times each year and are titled the Persistence Scorecard and SPIVA reports. Here is a link to the S&P website and their documents:
http://www.standardandpoors.com/indices/spiva/en/us
Historically these reports conclude that fund managers struggle to beat their benchmarks and have difficulty maintaining performance persistence. Investment category winners change in the marketplace, and managers fail to adapt and adopt quickly enough. The fund cost hurdle is too high to overcome in many instances. The percentage of winners is usually below that expected from random occurrences assuming a Bell curve distribution.
Costs matter greatly. That is why Morningstar has historically granted more than 3 stars to index funds with low costs. The cost structure of actively managed funds shifts the bulk of active returns below the market return that the index fund captures, so fewer than half of the active funds in a category beat it. Hence, in a comparative Bell curve framework (the Morningstar 5-star system), the index mutual fund can and does get an above-average ranking.
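A back-of-the-envelope simulation shows why. All of the numbers here (market return, fee levels, dispersion) are made up purely for illustration:

import random

random.seed(42)

market_return = 8.0   # hypothetical market return, %
active_fee    = 1.0   # hypothetical average active expense ratio, %
index_fee     = 0.1   # hypothetical index fund expense ratio, %
spread        = 3.0   # std. dev. of gross active returns around the market, %

index_net = market_return - index_fee

n = 100_000
winners = sum(
    1 for _ in range(n)
    if random.gauss(market_return, spread) - active_fee > index_net
)
print(f"Active funds beating the index fund: {winners / n:.1%}")
# Roughly 38% in this setup -- well under half, so the low-cost index fund
# lands above the category average and earns an above-average star ranking.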
The same will be true in the new Olympic award system, since performance is one of the five evaluation categories. Also, since some of the inputs to the new system are highly subjective, I imagine the rankings will be even more distorted.
However, I do not object to an attempt to make the assessments more forward looking. The warts in the old system have been exposed; only time will tell if the new approach will be more prescient. Overall, I welcome Morningstar's more encompassing experiment. I wish them well.
Best Regards.
Please check your link re: S&P. It brings up a blank page when I attempt to retrieve it.
prinx
Hi Prinx,
Thanks for the heads-up. The link worked when I originally submitted it; I checked at the time. I checked again just now and experienced the problem also.
I independently visited the S&P site and gained access with the following:
http://www.standardandpoors.com/indices/spiva/en/us
This appears to be identical to the original address. I'm puzzled why the original seems to be contaminated.
Good luck with this repost. It is slow, but it does get you there.
http://www.newconstructs.com/nc/fundscreener/fund-screener-premium.htm
But here is the gist of it
The Wall Street Ranter
Hi WSR. From David's February commentary: You and Mr. Jaffe see things the same way.
Another thing that irks me a bit with the Olympic system is the inconsistency. For example...
How can you possibly have an awful 2.5 ER, be "Neutral" on process (unless it's meant to be a pun...but it's not) and still receive a Gold?
Compare with:
Apparently that 1.0 ER at Fairholme causes the downgrade.
Finally...
See the "Positive" for performance?
(I know, it's really about long, long term performance...hard to argue.)
Dodge & Cox funds are good examples of high-profile funds receiving 5 stars (based only on relative performance) leading up to 2008. They have been at 2-3 stars ever since, but under the Olympic system they still hold Gold ratings (which are subjective and only partially based on performance).