
Morningstar's Top Rated Funds Unlikely To Give Investors Best Returns

Comments

  • I have minor issues with his methodology and definitions. Nevertheless, his conclusions are reasonably valid. Though rather than saying that there's an inverse relationship (negative correlation) between analyst rating and actual future performance, I'd be more inclined to describe the relationship as nonexistent or random.

    M* says that "The Analyst Rating is based on the analyst's conviction in the fund's ability to outperform its peer group and/or relevant benchmark on a risk-adjusted basis over the long term."

    The author looked at raw performance, not risk-adjusted performance. So that's one issue. Another is related to the definition of analyst ratings. Obviously M* isn't really looking at a fund's ability to outperform its relevant benchmark. Else how could it give VFINX a gold rating, when, as an S&P 500 index fund, it is nearly certain to trail its benchmark by roughly its expense ratio over any time period? M* is really rating funds against their peers.

    With that in mind, a better (or at least different) analysis would be to look at the percentage of gold, silver, bronze, neutral, and negative funds that outperformed their peers over his selected five-year period (1/1/2012 - 12/31/2016). Using this metric, the results are virtually independent of rating (the arithmetic behind the figures below is sketched at the end of this comment).

    6/27 (22%) of large cap gold funds failed to beat their category peer average. (Two additional funds were merged out of existence: Vanguard Tax-Managed G&I, and Morgan Stanley Focus Growth).

    3/15 (20%) of large cap silver funds failed to beat their peer average.
    3/11 (27%) of large cap bronze funds failed to beat their peer average.

    No neutral funds (out of 11 survivors) failed to beat their peers, though Columbia Value and Restructuring (UMBIX) and Putnam Voyager (PVOYX) were merged away.

    The author reviews only one of the two negatively rated funds; I assume that's APGAX. Aside from a grouping of just one fund not being meaningful, this fund changed management almost immediately (Feb 2012). Curiously, M* still refuses to rate this now-five-star fund above neutral, because it says that five years of history isn't long enough. The other negatively rated fund, the one I think the author disregarded, is LMGTX. This one also changed in 2012, even more significantly: it was reclassified from domestic to foreign.

    Another problem with the analysis is that, because of the funds M* selected, there is a tendency to double count. It's as if M* had given medals in 1998 and awarded gold to half a dozen Janus funds, all virtual copies of one another.

    That's what happened with at least a couple of families: Yacktman (where gold Yacktman and silver Focused both underperformed), and Weitz (silver Value and gold Partners Value both underperformed).

    Overall, I think the best (worst?) you can say is that the analyst ratings are a better indication of which funds are popular (M* doesn't pay much attention to lesser-followed funds) than of how those funds will perform.
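
    As promised above, here is a minimal sketch (mine, not anything from the article) of the arithmetic behind those failure rates; the counts are the ones stated in this comment, and in practice the fund-level data would come from M* or a similar source:

    ```python
    # Peer-beat failure rates by analyst rating, using the counts quoted above.
    counts = {
        "Gold":    (6, 27),   # 6 of 27 large-cap gold funds trailed their peer average
        "Silver":  (3, 15),
        "Bronze":  (3, 11),
        "Neutral": (0, 11),   # 11 surviving neutral funds; none trailed
    }

    for rating, (failed, total) in counts.items():
        print(f"{rating:8}: {failed}/{total} = {failed / total:.0%} failed to beat peers")
    ```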
  • If you think M-Star's current ratings are shaky, you are going to love this -

    "The number of funds that receive an Analyst Rating is limited by the size of the Morningstar analyst team. To expand the number of funds we cover, we have developed a machine-learning model that uses the decision-making processes of our analysts, their past ratings decisions, and the data used to support those decisions. The machine-learning model is then applied to the "uncovered" fund universe and creates the Morningstar Quantitative Rating™ for funds (the Quantitative Rating), which is analogous to the rating a Morningstar analyst might assign to the fund if an analyst covered the fund. These quantitative ratings predictions make up what we call the Morningstar Quantitative Rating. With this new quantitative approach, we can rate nearly 6 times more funds in the U.S. market."

    As far as I can tell, it is not satire.

    corporate1.morningstar.com/ResearchLibrary/article/813568/morningstar-quantitative-rating-for-funds-methodology/
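
    Morningstar doesn't spell out the model in that passage, but what it describes is standard supervised learning: fit a classifier on the analyst-rated funds, then score the uncovered universe. A bare-bones sketch under that assumption (the file names and feature columns here are hypothetical, not Morningstar's actual inputs):

    ```python
    # Hedged sketch of the approach described in the quote above: train on
    # analyst-rated funds, then predict ratings for the "uncovered" universe.
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier

    covered = pd.read_csv("covered_funds.csv")      # hypothetical: funds with analyst ratings
    uncovered = pd.read_csv("uncovered_funds.csv")  # hypothetical: funds without

    features = ["expense_ratio", "manager_tenure", "parent_score"]  # assumed columns
    model = RandomForestClassifier(n_estimators=500, random_state=0)
    model.fit(covered[features], covered["analyst_rating"])

    # The "Quantitative Rating": the model's guess at what an analyst would assign.
    uncovered["quantitative_rating"] = model.predict(uncovered[features])
    ```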
  • @Wholly Terrier: In the future, I would appreciate it if you would link the article.
    Regards,
    Ted
    https://corporate1.morningstar.com/ResearchLibrary/article/817415/morningstar-prospects---q2-2017/
  • Nice write-up. Pretty basic stuff, but a lot of effort went into it. (I've worked off and on with a company that does a similar form of machine learning.)

    Good results for a prototype, though I'm not sure it's ready for prime time.

    M* acknowledges that the model has a hard time distinguishing between gold, silver, and bronze. Grouping these all together into one bucket (called "Recommended"), the model still doesn't do a great job. See Exhibit 4 in the paper.

    Of the funds that analysts actually rated Recommended (gold, silver, or bronze), the model thought only 78% should be Recommended; it said 21% should be rated Neutral, and 1% Negative.

    Of the funds that analysts rated Neutral, the model agreed on 59%; it Recommended 32% and was Negative on 9%.

    Of the funds that analysts rated Negative, the model agreed on only 55%; it was Neutral on 41% and even Recommended 4%.

    Given the questionable value of analyst ratings in the first place, an automated rating that itself disagrees with the analysts 22% to 45% of the time (laid out as a confusion matrix in the sketch below) seems to render this first stab useless.

    I've no doubt the accuracy can be improved, though it will take a lot of time and effort.
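
    To make those Exhibit 4 figures easier to scan, here is a small sketch (mine, not from the paper) arranging them as a row-normalized confusion matrix and recovering the 22% to 45% disagreement range:

    ```python
    # Rows: analyst rating; columns: model's prediction (shares from Exhibit 4).
    labels = ["Recommended", "Neutral", "Negative"]
    matrix = {
        "Recommended": {"Recommended": 0.78, "Neutral": 0.21, "Negative": 0.01},
        "Neutral":     {"Recommended": 0.32, "Neutral": 0.59, "Negative": 0.09},
        "Negative":    {"Recommended": 0.04, "Neutral": 0.41, "Negative": 0.55},
    }

    for analyst in labels:
        agree = matrix[analyst][analyst]
        print(f"Analyst {analyst:11}: model agrees {agree:.0%}, disagrees {1 - agree:.0%}")
    # Disagreement runs from 22% (Recommended) to 45% (Negative),
    # which is the source of the "22% to 45%" figure above.
    ```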