The Shocking Truth Mutual Funds Don't Want You To Know
FYI: The vast majority of mutual funds have more in common with one-hit wonders than they would want you to know. The shocking truth is that most mutual funds that rank in the top performance quartile one year don’t do it again the next year or the following year. And no funds stay in the top quartile over five years. Even worse, about a third of mutual funds die or get merged with another after five years. Regards, Ted http://www.forbes.com/sites/trangho/2015/06/23/the-shocking-truth-mutual-funds-dont-want-you-to-know/print/
I did not see what research she used to write this article. "Most" mutual funds could be, what, 51%? 65%? This is all stuff we have heard for many years, so it is strange to me that Forbes would bother with the article.

I think most of us can agree that domestic index funds provide the broadest coverage for the lowest expenses. Whether an investor chooses to add selected active-share funds is another decision. The shocking truth to the writer may be that there are more than a few active-share funds that have very strong 3- and 5-year records. Frankly, whether a fund lands in the top quartile every year for five years during a strong bull market is of little importance to me.

On the other hand, as we already know (old news), there are a lot of crappy funds out there that exist only because the marketing arms of fund companies continue to push them. That does not mean there are not some great active-share options. Investors just need to do their homework and accept that a great fund will not likely be at the top of its group every year. Just not possible. Add to that how funds are categorized by Lipper and Morningstar, often in what I think are the wrong asset classes, and rankings and comparisons become more difficult.
I agree with BobC on the broad points, but have nits to pick with the details.
Broad points:
1. Period-by-period "consistency", a la Bill Miller/Legg Mason Value, doesn't matter. What matters is long-term performance.
2. Many (dare I say most, whatever that means) financial writers are either poor writers, don't understand their subject well, or both.
On that second point, the writer strategically omits mention of bond funds (also included in the S&P report), perhaps because they would undercut her thesis - no persistence of performance.
"Performance persistence levels have tended to be higher among the top-quartile fixed income funds over the past three years ending March 2015." (From the S&P report.) Not surprising, since for bond funds, cost is a huge determinant of performance, much more so than for equity funds.
Details:
1. The source of the material was stated in the second paragraph - S&P Dow Jones Indices’ Persistence Scorecard. (I've linked to the S&P Scorecard.) The research was S&P internal research. Raw data came from CRSP.
2. "Most", unless otherwise stated, may be taken to mean over half. If you look at Exhibit 2, under 1/3 of top quartile domestic funds in 2011 repeated in 2012. Exhibit 1 looks at top quartile funds from 2013; 1/4 or less repeated in 2014.
3. The consistency sought by S&P was for yearly, not quarterly performance. They just started their years in March.
4. The only consistency that one may reasonably expect is that index funds will consistently underperform their benchmarks. Not by much, but that's the only consistent performance I expect to find anywhere.
5. All classification systems have their limitations; we've been over this ground many times. However, Morningstar and Lipper are irrelevant here, as this is an S&P report, and S&P uses its own classification system.
From S&P's Mutual Fund Guide: "Standard & Poor’s ... analyz[es] fund behavior, then classif[ies] funds into 67 different styles". This is a different methodology from M* and Lipper, in that S&P classifies based more on behavior than on portfolio.
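Since the repeat rate is what the whole Scorecard hangs on, here is a rough sketch (mine, not S&P's code, using invented returns) of how a top-quartile repeat rate like those in Exhibits 1 and 2 can be computed. Because the returns below are random, the printed rate should land near the 25% that pure chance would produce.

```python
# Rough sketch of a top-quartile "repeat rate" computation (not S&P's code).
# Returns are random, so the repeat rate should hover near the pure-luck 25%.
import numpy as np

rng = np.random.default_rng(0)
n_funds = 2000                               # hypothetical fund universe
year1 = rng.normal(0.08, 0.15, n_funds)      # made-up year-1 returns
year2 = rng.normal(0.08, 0.15, n_funds)      # made-up year-2 returns

def top_quartile(returns):
    """Indices of funds whose return falls in the top 25% for that year."""
    cutoff = np.quantile(returns, 0.75)
    return set(np.flatnonzero(returns >= cutoff))

winners1 = top_quartile(year1)
winners2 = top_quartile(year2)
repeat_rate = len(winners1 & winners2) / len(winners1)
print(f"Top-quartile funds that repeated: {repeat_rate:.1%}")   # roughly 25% by luck alone
```

Swap in two calendar years of real fund returns and the same few lines yield an Exhibit 2-style figure.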
A very nice ongoing discussion here. Please allow me to contribute a few thoughts.
The Persistence Scorecard has been a part of the mutual fund industry for many years. The specific numbers change in each of its now semi-annual reports, but the overarching finding remains the same: persistent mutual fund performance, in terms of consecutive quartile rankings, is an elusive goal.
In large part, what goes up all too often comes down. The one exception is the bottom quartile, whose funds tend to linger in their desperate positions until they disappear from the scene.
This consistent finding opens the manager skill/luck issue once again. As noted in the referenced report: “Demonstrating the ability to outperform repeatedly is the only proven way to differentiate a manager’s luck from skill.” Far too many active managers are failing this test.
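A back-of-the-envelope calculation (my own illustration, not a figure from the report) shows why repetition is such a stiff test: if quartile rank each year were pure luck, a top-quartile fund would have only a 0.25^4 chance, about 0.4%, of staying on top for the next four years.

```python
# My back-of-the-envelope illustration, not a figure from the report:
# if quartile rank each year were pure luck, the chance that a top-quartile
# fund stays in the top quartile for the next four years is 0.25**4.
p_luck = 0.25 ** 4
top_quartile_count = 8000 // 4   # roughly 8,000 funds, so about 2,000 in the top quartile
print(f"Chance under pure luck: {p_luck:.2%}")                                    # about 0.39%
print(f"Expected lucky five-year repeaters: {p_luck * top_quartile_count:.0f}")   # about 8
```

Against those odds, a manager who keeps repeating over many periods is showing something that luck alone rarely produces, which is the Scorecard's point.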
I don’t remember the source, but from memory, about 75% of active managers hover near their benchmarks, sometimes generating positive Alpha, sometimes producing negative Alpha. After fees and trading costs, their performance relative to a benchmark is mostly a wash. Roughly 24% of active managers are persistent losers relative to their benchmarks. The residual 1% consistently contribute positive Alpha.
That’s a thin cohort. The trick is to identify these rare souls. But if the fund population is roughly 8,000 strong, that means that maybe 80 such funds exist and are waiting to be discovered.
In general, I like the Persistence Scorecard, but it does have its shortcomings. It does not focus on the most useful fund-selection criteria; consecutive quartile rankings are not nearly the best sorting tool.
Individual investors are much more interested in cumulative outperformance relative to some benchmark. The single best criterion to satisfy that target is a positive Alpha. And it's not single-year Alpha; it is positive Alpha over several extended timeframes.
Morningstar provides precisely the requisite data in their Ratings and Risk MPT Statistics section. Morningstar computes Alpha for 3, 5, 10, and 15-year periods using several benchmark comparisons. Alpha is a dynamic parameter and depends on timeframe and comparison standard.
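For anyone curious what sits behind a number like Morningstar's 3-year Alpha, here is a minimal sketch of the standard CAPM-style (Jensen's) regression: the fund's monthly excess returns regressed on the benchmark's monthly excess returns, with the intercept as Alpha. The returns and risk-free rate below are invented, and Morningstar's exact inputs, benchmarks, and annualization will differ.

```python
# Minimal sketch of a CAPM-style (Jensen's) alpha estimate. The inputs are
# invented; real use would feed 36/60/120/180 months of actual fund and
# benchmark returns plus a risk-free series.
import numpy as np

rng = np.random.default_rng(1)
months = 36
rf = 0.002                                                 # assumed flat monthly risk-free rate
bench = rng.normal(0.007, 0.04, months)                    # made-up benchmark returns
fund = 0.001 + 1.1 * bench + rng.normal(0, 0.01, months)   # made-up fund with a small built-in edge

beta, alpha_monthly = np.polyfit(bench - rf, fund - rf, 1)  # slope = beta, intercept = monthly alpha
alpha_annual = (1 + alpha_monthly) ** 12 - 1                # one common way to annualize

print(f"beta  ~ {beta:.2f}")
print(f"alpha ~ {alpha_annual:.2%} per year")
```

Run the same regression over 3-, 5-, 10-, and 15-year windows of real data against an appropriate index and you have the multi-timeframe positive-Alpha screen described above.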
No easy answers here since much depends on your specific portfolio management style and measurement choices. It helps if management has been stable over time and if costs are minimal.
The hunt is to seek positive Alpha scores over multiple timeframes. They exist.
Best Wishes.