Lipper apparently begins its 1-5 ratings at 3y of performance, like M* [edited]
"Lipper Ratings for Total Return reflect funds' historical total return performance relative to peers. Lipper Ratings for Consistent Return reflect funds' historical risk-adjusted returns relative to peers. Lipper Ratings for Preservation are relative, rather than absolute. Lipper Ratings for Tax Efficiency reflect funds' historical ability to postpone taxable distributions. Lipper Ratings for Expense reflect funds' expense minimization relative to peers. Lipper Ratings DO NOT take into account the effects of sales charges. *Overall Lipper Ratings are based on an equal-weighted average of percentile ranks for each measure over 3-, 5-, and 10-year periods (if applicable)." (Source: Lipper)
I've always preferred Lipper. M* impresses me as too cute by half. While technically Lipper doesn't use stars, their bar graph appears to be a similar apparition. Of course it's best to consult a number of ratings organizations. I try to look at 5. I buy new funds rarely. When Oppenheimer closed one of mine several months ago, I was forced to look for an alternative with them or move the money elsewhere.
I misspoke in using the word 'star'; Lipper does a 1-5 rating with no star graphic. If you go to www.lipperleaders.com now, you do indeed find results for the two DoubleLine Cape Enhanced funds, so their ranking start point must be 3y, same as M*. I don't know why they did not turn up on Marketwatch; I should double-check. Ah, today they do show up. So 3y is the answer to my query, and never mind.
Until this decade, Lipper said that a fund that fell within the top quintile was in the first quintile. Apparently too many people thought that quintiles were the same as stars, and concluded that first quintile performance was lousy. So Lipper inverted its rankings a few years ago.
Also, Lipper's rankings are linearly distributed (1/5 in the top quintile, duh), while M*'s are more bell shaped (10% get 1 or 5 stars, 22.5% get 2 or 4 stars, and 35% get 3 stars).
None of this speaks to the methodology, just the scoring.
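To see how differently the two scales slice up the same peer group, here's a quick, purely illustrative sketch using the breakpoints mentioned above. It's my own toy code, not either firm's methodology, and it assumes percentile is measured with 100 as the best fund in the peer group.

```python
# Rough comparison of the two scoring scales described above -- illustrative only.

def lipper_quintile(percentile):
    """Linear: each 20% band of peers gets one of five scores (5 = best)."""
    return min(5, int(percentile // 20) + 1)

def morningstar_stars(percentile):
    """Bell-shaped: 10% / 22.5% / 35% / 22.5% / 10% from worst to best,
    so only the top 10% of peers get 5 stars."""
    for upper, stars in [(10, 1), (32.5, 2), (67.5, 3), (90, 4)]:
        if percentile < upper:
            return stars
    return 5

for pct in (5, 25, 50, 75, 95):
    print(pct, lipper_quintile(pct), morningstar_stars(pct))
# The middle 35% of peers all collapse to 3 stars, while Lipper spreads
# that same stretch across its 2, 3, and 4 bands -- same data, coarser
# middle and finer tails on the star scale.
```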
msf said: "Also, Lipper's rankings are linearly distributed (1/5 in the top quintile, duh), while M*'s are more bell shaped (10% get 1 or 5 stars, 22.5% get 2 or 4 stars, and 35% get 3 stars.)
That's fascinating. Appears to be a type of front-loading or levering-up by M* similar to how a gambler (or investor) might use leverage to magnify a correct wager. And if he calls it incorrectly, the error is also magnified. Now I understand why M* sometimes makes so little sense to me when I try to match their ratings with my own perceptions after researching a fund's history.
Edit: The larger issue, IMHO, remains the difficulty of classifying funds so that meaningful comparisons can be drawn. Then again: What would we have to talk about here?