Updated Feb 9 with SCG grading
A follow-up to my earlier preliminary study, which I have named Rolling Alpha Returns Exposé (tm), or RARE, analysis of mutual funds: it identifies whether a mutual fund's returns are sensitive to when an investor holds them (the more sensitive they are, the less likely it is that most investors realized the fund's best returns, even if they held it for a long time).
Having finished the remaining programming to compute the distribution of returns, I am presenting a grading system for mutual funds using the grades below. I selected funds with at least 8 years of history from The Great Owls and the top 20-30 ranked funds at US News fund ratings. They show significant differences in their sensitivity to time periods.
The grading scale:
A - Over-achievers : Significant alpha over index returns (1% or more per year) for most investment periods, regardless of which 3 or 5 yr period you pick in the last 8 years. So most investors would have seen that performance.
B - Steady achievers : Respectable alpha over index returns (0.5 to 1% per year) for most investment periods
C - Closet indexers : No statistically significant difference from index returns for most investment periods.
D - Pretenders : While some cherry-picked returns look good, less than index returns for most investment periods
E - Lottery tickets : Significant alpha for specific periods but underperformance for most investment periods
F - Failures : Statistically significant under performance for most investment periods
X - Toxic : Very poor returns relative to the index for most investment periods, except for an insignificant percentage of intervals; unless investors caught that interval, they would have suffered significantly relative to the index
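For those curious about the mechanics, here is a minimal sketch of how a grading like this could be computed from monthly return series. The numeric thresholds and the window length are my illustrative stand-ins for the verbal definitions above, not the exact cutoffs used in the analysis:

```python
import numpy as np

def rolling_alphas(fund_monthly, index_monthly, window=60):
    """Cumulative fund-minus-index return (%) over every rolling
    `window`-month holding period in the history."""
    fund = np.asarray(fund_monthly)
    idx = np.asarray(index_monthly)
    out = []
    for i in range(len(fund) - window + 1):
        f = np.prod(1 + fund[i:i + window]) - 1   # fund cumulative return
        b = np.prod(1 + idx[i:i + window]) - 1    # index cumulative return
        out.append((f - b) * 100)
    return np.array(out)

def grade(alphas, years=5):
    """Illustrative letter grade from the distribution of rolling alphas.
    Thresholds are guesses at the verbal definitions, not actual cutoffs."""
    med_per_yr = np.median(alphas) / years   # the typical investor's alpha
    pct_pos = np.mean(alphas > 0)            # share of entry points that won
    if med_per_yr >= 1.0 and pct_pos >= 0.75:
        return "A"
    if med_per_yr >= 0.5 and pct_pos >= 0.75:
        return "B"
    if pct_pos >= 0.4:
        return "C"
    if alphas.max() / years >= 1.0:          # streaky: big wins, rarely seen
        return "E"
    return "F"
```

For example, a fund that beats its index by a steady 0.2% every month for 8 years grades "A", while one that steadily trails grades "F", whatever its absolute returns were.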
Selected funds can themselves be index funds that follow a different index, or a narrower index than the sector index, while measuring themselves against it.
Grades for selected funds:
(*) indicates Great Owl Funds
Large Cap Core/Blend US Domestic Funds
POSKX(*), AWEIX(*), VTCIX, NMIAX
FLCEX, VQNPX, VPMIX
PRDGX(*), TRISX(*), PRCOX
CAPEX, SLCAX, SCPAX
Large Cap Value US Domestic Funds
FDSAX(*), DFLVX, DPDEX, BRLVX
NOLCX, BPAIX, DDVIX(*), TRVLX, VWNDX, LSVEX
Comments on LCV funds in the post below.
Large Cap Growth US Domestic Funds
FDGRX, NICSX, GTLLX
TRBCX, TPLGX, TRLGX, FBGRX
PLGIX, FNCMX, JIBCX
POGRX, VHCOX, RDLIX(*)
TILGX, FDSVX, TLIIX, TILIX
Comments on the LCG fund grading in my comment below.
Small Cap Growth US Equity Funds
PRDSX, HSPGX, BCSIX, RSEGX, TRSSX, JGMAX, WFSAX, JANIX
TISEX, GSXAX, SGPIX, TSGUX
BRSGX, MPSSX, TCMSX
PS: I do realize it is egg on my face that I criticized SMVLX and started this analysis to prove my hunch, only for it to come out on top of all the other funds. Sometimes intuitions are wrong, hence the need for analysis. This is why startups pivot when intuitions about their markets aren't later supported by the numbers. It also shows I didn't design the analysis to prove my hunch, as can easily be done with statistics!
It appears that this analysis could be applied between any two similar funds to decide which one is less likely to disappoint if you weren't lucky enough to be in it for its best periods. This can be ONE fund selection criterion, for example if you wanted to choose between POSKX and VTCIX. It would help even more when there is no good index to compare a fund to, as with allocation funds or multi-sector funds. More such results later.
As I mentioned in the first post, an average is just half the story. It is easily affected by outlier values, hence the distribution is important. I didn't put all the numbers in this post so people's eyes wouldn't glaze over.
The only difference between an A grade and an E is the rarity of the outperformance; both have good average expected values. Most investment periods in YAFFX's history over the last 8 years wouldn't have benefited from the small window in which it outperformed.
To give some concrete numbers: while YAFFX had an expected average outperformance of over 12% across 5 yr periods, the median was actually below -12%. So half the periods saw not just negative alpha, but negative alpha by a significant amount.
In percentile terms, 0 (getting the same return as the index) fell at the 57th percentile. So if investors had spread their investments uniformly across those 8 years, 57% of them would have seen negative alpha after 5 years in the fund. Hence the term "lottery ticket" for this grade, and why it ranks below a B, which has a lower expected average alpha but where the majority of investors would at least see a positive alpha, if not the maximum. The difference arises mainly because YAFFX's outperformance came in such a narrow window. This is what gets hidden in the cumulative and calendar-year returns listed everywhere.
In contrast, SMVLX had long enough outperformance periods that any 5 yr period in its history overlapped enough with one of them to enjoy positive alpha; 0 is at the 0th percentile for 5 yr returns. So investing on ANY day in the last 8 years would have given you a positive alpha somewhere between zero and the best. Even with the worst luck in timing, you would not have suffered relative to the index, and you might have enjoyed respectable positive alpha. Hence the highest grade.
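To make the mean-versus-median effect concrete with a toy example (all the numbers below are invented, and cumulative alpha is approximated as a simple sum of monthly alphas, ignoring compounding), one short hot streak can produce a healthy average 5 yr alpha while the majority of entry points still lose to the index:

```python
import numpy as np

months = 96
monthly_alpha = np.full(months, -0.004)  # fund trails its index 0.4%/month normally
monthly_alpha[12:18] = 0.15              # one invented 6-month streak of huge outperformance

window = 60                              # 5-year holding period, one entry point per month
rolling = np.array([monthly_alpha[i:i + window].sum() * 100
                    for i in range(months - window + 1)])

print(f"mean 5-yr alpha:   {rolling.mean():+.1f}%")      # positive
print(f"median 5-yr alpha: {np.median(rolling):+.1f}%")  # negative
print(f"entry points with negative alpha: {np.mean(rolling < 0):.0%}")
```

An average-based metric would report only the positive mean; the rolling distribution shows that more than half of the possible entry points still trailed the index, which is exactly the lottery-ticket pattern described above.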
What is surprising to me is how poorly the highly rated T Rowe Price funds in this category have done on this metric. I haven't looked at them closely yet to see what the explanation might be.
At a very high level, those ratios answer questions like "Is this fund going to go up and down a lot relative to what I can get from it, in a way that may be a problem?" RARE analysis answers questions like "If I buy this fund instead of an index fund (or another similar fund) at any time, am I (as opposed to the fund) likely to do better than the latter?"
Regarding the ratios you mentioned, a quantitative study of correlation at this point wouldn't make sense, because the sample space is so small and the selected funds are already skewed towards funds rated highly on several metrics.
But qualitatively, they do measure very different things. Those ratios, along with the Sharpe ratio, are volatility measures: they capture what may affect your ability to achieve a certain return, and the deviations along the way that may give you ulcers. So they are affected by market volatility; a sector index fund that faithfully follows a highly volatile sector would fare poorly on those measures.
RARE grades, on the other hand, measure an investor's ability to realize an alpha, or excess return over the market, using a particular fund, even while losing money because markets are headed down. The metric normalizes with respect to market volatility.
So it is more closely related to alpha measures for a fund. It differs from them in that it is also a measure of the volatility of the alpha generation, computed as a rolling metric. Just as the price volatility of a fund affects the probability distribution of the returns you get depending on when you bought it (near a peak or a trough), the volatility of its alpha determines the probability that any individual investor will actually realize the alpha reported for the fund over a fixed period.
If the underlying market is very steady and the fund is very volatile relative to it, RARE may correlate with the volatility measures; but if the fund is also steady, it doesn't. A fund that generates alpha uniformly will have a good RARE grade, while a fund that is equally steady but loses to the index, say due to a high ER, will have a poor RARE grade, even though the volatility measures might be the same for both.
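A small sketch of that last point, using invented return series: two funds with nearly identical return volatility, one generating alpha every month and one relying on a single lucky year, separate cleanly on the rolling-alpha view even though a volatility measure barely distinguishes them:

```python
import numpy as np

rng = np.random.default_rng(0)
months = 96
index_r = rng.normal(0.007, 0.04, months)  # an invented volatile market, ~0.7%/mo

steady = index_r + 0.001                   # beats the index ~0.1% every month
streaky = index_r - 0.002                  # trails slightly...
streaky[24:36] += 0.04                     # ...except one lucky year

window = 36                                # 3-yr holding periods

def rolling_alpha(fund):
    """Cumulative fund-minus-index return over each 36-month window."""
    return np.array([(fund[i:i + window] - index_r[i:i + window]).sum()
                     for i in range(months - window + 1)])

# Return volatility is nearly the same for both funds...
print(round(np.std(steady), 3), round(np.std(streaky), 3))

# ...but every entry point into the steady fund beats the index,
# while many entry points into the streaky fund do not.
print(np.mean(rolling_alpha(steady) > 0))   # 1.0
print(np.mean(rolling_alpha(streaky) > 0))  # well below 1.0
```

The design choice here is deliberate: alpha is measured per entry point, so a fund's grade reflects how many investors actually captured it, not how large it looked in one snapshot.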
The reason this distinction from plain alpha measures is important comes down to timing, and to the incentive fund managers have to manage returns for a calendar year. If a fund's alpha generation is streaky or lumpy, as may happen with a focused fund, then the alpha you actually see will depend heavily on when you bought into the fund, and may be much lower than, or even the opposite of, the alpha reported for the fund over some fixed time period.
Fund managers are in the asset gathering business not fund returns business; the latter matters only insofar as it helps the former. Because year-to-year metrics determine how a fund fares in rankings and comparisons, a manager has an incentive to manage returns for the calendar year rather than for the investors coming in and out.
The activity of fund managers juicing up returns towards the end of the year by increasing risk or beta exposure, especially if they are trailing the index, is much talked about. If you sold the fund before they did this (or have been selling regularly in retirement), you may not get anywhere close to the fund's reported returns for the year.
What is not talked about much is the opposite: when fund managers get a windfall earlier in the year. Depending on the size of that gain, it may be safer for the managers to reduce risk for the rest of the year and coast than to risk losing the earlier gain. So if you are doing monthly buys, or buy later in the year, you may not see anywhere close to the fund's reported gains for the year.
RARE grades try to separate funds with streaky or lumpy gains in narrow periods from those that steadily beat the market over longer periods (which is not the same as the calendar-year consistency some metrics aim for). Besides protecting you from poor timing of your entry into a fund, it has the added benefit of flagging funds that look good on paper only because of one or two lucky streaks unlikely to be repeated. Steady gains over longer periods may indicate better management and/or strategy.
It should complement other fund evaluation criteria.
>> Fund managers are in the asset gathering business not fund returns business,
? Help one understand how this cynical take follows from your work. Also of course the two motives are correlated, right? Would you assert this re the supposed good guys, D&C, Danoff/Tillinghast, CGM, Oakmark, Parnassus, and the many others considered noble?
The particular scenario I mentioned in that context is one where money coming in later in the year may face less beta exposure, and hence lower returns, because the manager decides to play it safe to preserve an annual return advantage over peers that makes the fund look good. If returns were the primary criterion, the Rob Arnott funds would have closed a long time ago. On the other hand, funds that fail to gather sufficient assets for their revenue needs shut down regardless of what their returns were.
I only mentioned it to say that an analysis like this might catch such decisions where managers try to game calendar-year returns, not to claim every manager is ignoble, even though asset gathering, which provides the revenue, is still the main purpose of the mutual fund business.
Anyway, not to get too much on a tangent on this minor point.
Will update this thread shortly with gradings on top large cap value funds.
While noting that the funds selected for this grading are all what would be considered highly rated funds, and so should all have done reasonably well, I found many more funds that realized consistently high alpha in the LCV category than in blend/core.
There was also the complication that these funds benchmark to similar but different indexes.
I chose the ETF IWD and the mutual fund VIVAX, which follow two different indices, to benchmark alpha against. The tracking difference between these two is the margin for error.
There are very few great owls in this category and even fewer with the required 8 years of history. So most of the selected funds come from the top 30 at US News & World Report Money guide.
This list has a number of DFA funds at the top, but since they are all managed by the same team with very similar portfolios, I just picked one of them.
The only stinker was the Fidelity Blue Chip Value in this analysis.
In the case of PRBLX, the 5 yr expected alpha (outperformance) across all possible 5 yr periods was 1.36% (cumulative, not annualized). This is in the index-hugger region. But the median was -0.94%. In other words, if investors had invested uniformly over that period, half of them would have seen slight underperformance relative to the index. Hence the C-. Or, if you were hypothetically adding to it every day, half of your lots would be worse off than an index 5 years after their purchase. Not intuitive from looking at a snapshot at any one time, but this is what happens.
What that means is that PRBLX outperformed during some period in the last 8 years; if your investment period happened to coincide with a sufficient part of it, you benefited, otherwise you saw the same as or worse than an index fund. The currently available metrics/tools do not expose this aspect of performance. RARE does.
PS: I am using the category index returns as the benchmark, that being the S&P 500 for large blend. The category average that M* uses is not very useful, since it is a simple average over the funds in that category and subject to distortion by outliers. Besides, you cannot buy a fund that gives you the category average as the alternative; you can buy an index fund instead of the fund being studied, hence that comparison is more useful.
If you had instead bought hindsight winner SMVLX 1, 2, or 8y ago, you would have done worse than PRBLX. But not for the other 02/06/2nnn 1-year periods. Interesting.
The RARE funds presumably all have same management for the last 8y?
About the management changes, it is an interesting point. As you know, conventional metrics may be made irrelevant by manager changes. I have not considered manager changes at all. My conjecture is that RARE analysis would automatically adjust for any significant change in manager performance by lowering the higher of the two grades, since consistency measured across the two periods would suffer if the two performances differed enough to be meaningful. If the manager change had no significant effect on performance, the grade wouldn't change.
I would pay no or at least much less attention to RARE unless delimited for manager tenure. Otherwise you're not measuring a real entity, other than generic sector choice as flavored by some random people, I would suggest. (Some would say that is a definition of investing . )
However, the above comment seems bizarre. None of the metrics on this site or on M* are partitioned to a manager's tenure. The RARE metric is no different from, say, volatility measures, which aggregate over a period of time regardless of whether there is a management change. It does not measure any "entity", real or imagined, any more than a fund's alpha and beta metrics do.
Perhaps you are conflating the consistency of a fund's performance with the consistency of a manager. Obviously, the latter can affect the former. But when you measure the performance of a fund, that is what you are measuring, not the performance of a single manager, which doesn't even make sense to talk about if, for example, the fund is managed by a team. Many teams evolve incrementally.
If a fund measures well here, it wouldn't matter whether it had a single manager or several; it means the different managers have maintained the fund's strategy. That is what a fund aims for through manager changes, and it is good to know that a manager change has not disturbed the fund's steady performance. This metric is well suited to measuring how a transition affects that, by measuring across time rather than delimiting it.
If a fund measures poorly and it has a single manager for the periods used, then it means the manager's strategy isn't consistent. If the manager has changed, then you see whether the measure has changed within each tenure, once the new manager has been around for a while. This is what you do with any metric (for example, return or volatility measures) when it measures badly. Such a situation is the exception rather than the norm.
So unless you are misunderstanding the interpretation of this measure, I don't see why it necessarily needs to be limited to a single manager's tenure, and I am not planning to go in that direction.
But whether you don't use it for the above reason or for any reason including dissonance with other metrics or your picks, that is your choice and prerogative, of course.
Let us discuss the Pats or the Celts as entities and not makeup by individuals. Not for me as a bettor on their games or season.
>> If a fund measures poorly and it has a single manager for the periods used, then it means the manager's strategy isn't consistent.
Really, that is what it means? Your definition of strategy may be tautological. A good manager is one whose strategy works in all circumstances, and vice-versa.
Without knowing who makes the decisions, measuring 'the performance of a fund' is meaningless. Thinking they somehow have a life of their own is something everyone rejects, innit?
>> None of the metrics on this site or on M* are partitioned to a manager's tenure.
Say what? I want to take over CGMFX, or SMVLX, and keep up the good work!
There is a fair amount of literature on this, team vs individual, which perhaps you are not familiar with.
>> not planning to go in that direction
Whatever will you do if a fund goes from hot to cold or changes from a 85yo veteran to young whippersnapper? After all, it keeps the same letters. Wait for 8y of finely parsed cycles to see?
Ah, you already are dealing w la creme only. Perhaps that answers all my questions here. Grading among various prime cuts only.
Still, new understanding of 'bizarre' and 'obviously' for me.
So I will let it sit there unless someone else feels there is a point I am missing and explains it to me.
Moving on to other sectors...
@MikeM, thanks for the kind words.
Benchmarked to index funds IWF and V I G R X (cannot post with this symbol whole, seems like a bug in the software) for measuring alpha.
Most funds come from US News Money Top Funds and contain well known names. Most funds in this category that are top rated put in very strong numbers in this metric.
Note that the Large Growth category in the metrics at this site, and everywhere else, also includes funds that use Nasdaq 100 stocks as their benchmark or sector. I intentionally removed them from this list, as their performance compared to the broader large cap growth index funds used here was very lopsided, and the comparison unfair to funds that track the broader sector. Perhaps they can be evaluated on their own in the future against a more relevant index like QQQ.
The surprise stinkers in this metric for this category were FCNTX and PRGIX, both of which returned negative alpha for most periods in the last 8 years. Individual returns with these funds really do depend on when you invested in them.
Just a reminder to readers that what this metric measures is alpha, the over/under performance relative to the index, not absolute performance; all of these funds may have good cumulative returns over specific periods (or poor returns in down markets). What the analysis exposes is whether an investor's actual returns in these funds are sensitive to when the money was invested, and the grading reflects that sensitivity. The less sensitive a fund is, the more likely that an investment at any time can benefit, not just the few lucky investors who got in at the right time.
It isn't meant to be the one and only grade that decides what a good fund is. It measures one (so far unexplored) dimension of fund selection and quantifies it.
Did not check LCG after I saw Danoff got an X:
'Toxic : Very poor returns relative to index for most investment periods except for an insignificant percentage of intervals so unless investors caught that interval would have suffered significantly relative to the index'
I'm not the only buy-hold investor who likes to know who's running a fund and if there is a change. Investopedia points out that manager "performance data that goes back only a few years is hardly a valid measure of talent. To be statistically sound, evidence of a manager's track record needs to span, at a minimum, 10 years or more." An extension of what I was querying in this RARE methodology. I cannot find exactly how M* accounts for tenure, only that they do.
Investopedia do go on however to note a study that individual-manager added value accounts for less than a third of performance, which I hadn't read before, and if true speaks more to your take re persistence / consistency, institutional approach / method. (Maybe this is a claim you are not making explicitly; don't mean to misattribute based on my inferences.)
Is it reasonable also to conclude that reduced volatility is being measured here, so if you are not buy-hold you are advised to look for lower volatility?
The answer to your last question is in the response to @davfor earlier. So, I am not sure I can help further than to perhaps suggest re-reading and further rumination (or not depending on how much it is worth to you). PM me if you have basic questions on understanding the concept.
The quote you have about FCNTX is a factual measurement of the results of the fund, not an inference (perhaps the label is too strong). So, one would be better off trying to understand how that metric can be reconciled with the concept one has of the fund or its manager than ignoring it. But that is just me.
@old-skeet, of course rankings are going to differ based on the metrics they use. What is surprising about that?
You asked ... "What is surprising about that?"
For me what is puzzling is that you rated FDSAX with an A+ rating while US News & Report rates it #66 in the LCV category. This fund has outperformed most LCV funds hands down but is rated down in the stack. That's count one. Count two. SPECX has been the best LCG fund performer over a ten year period with better than a 12% annualized return ... but, again US News & Report rates it #142. Heck, they even rate AGTHX (another fund I own) in front of it at #116 while SPECX has trounced it. I could continue ...
I don't mean to take away from your fine "Rare Analysis" work. But, just perhaps you are not rating the best funds ... Just funds that US News & Report are reporting to be the best. Maybe, that is why some of the funds that they have rated highly have turned out to be really dogs by your rating system.
With the above being stated ... US News & World Report mutual funds ratings seem, to me, to indeed be of great suspect.
Comparing such rankings to RARE grades is even worse. I have taken a lot of pains to explain that this grading evaluates how a fund stacks up along one dimension; it is not a ranking of the best funds, nor even an indicator of what a fund might return over some period of time. Yet people seem to be thinking of it that way.
Lipper ranks funds along 5 dimensions. Would you say that because a fund is a Lipper 5 in tax efficiency but #256 in some other ranking, the latter ranking is suspect? That is what comparing those rankings to this grading amounts to.
@davidrmoran, you said "I do realize that RARE is necessarily about investor patience". This is absolutely not the case. It is, in some sense, a ranking of how investors with the same patience level will fare with a fund depending on which period of the fund they were patient in, along with the observation that they will see very different returns even if they waited for the same length of time, whether 3 yrs, 5 yrs, or, were one to calculate it, 8 or 10 years. Some funds are better than others at rewarding similarly patient investors who invest at different times (as would naturally happen). That is what the grading here is about. This difference really needs to be understood.
That seems to me like a distinction without an effective difference, but I fear you will tell me again I am not getting it. What is RARE's utility to one who holds a fund longterm? By patience I did not mean for 4y periods each with a different start point; I meant authentic holding patience, like for 8y. Or coming from the other side, can you speak further to the relationship (or the differences) b/w RARE and low volatility?
It is a valid question to ask how one would use this, so I will answer it, as the answer would be useful for many to consider.
The way I would use it in the future is as one more input into fund selection. Typically, fund selection uses a number of factors: performance, tax efficiency, manager reputation, etc. So you end up with a number of potential candidates, a short list, for most asset classes.
Out of these, I would look at RARE grades, if available or if I can calculate them, to see whether any of the candidates scored low. If one did, I would suspect that the manager was streaky in performance or only did better in specific market conditions. If the fund otherwise seemed like a good one, I would look again at the performance charts to see whether there were a few small periods of strong performance but not much otherwise; as readers will have understood by now, this is different from just looking at 5 yr or 10 yr results. If that was the case, I would skip the fund even if it was highly rated, because none of these ratings consider this fact about the fund. Or, if I really wanted the fund for other reasons, or had no choice (say, in a 401k), I would wait to get in with a big lump sum until a period when the manager was underperforming the index rather than outperforming it; this is different from waiting for the market to be down. If my plan was a regular DCA, I would not get into a fund with a low RARE grade at all, because many of my purchases would likely miss the streaky performance and would have done better elsewhere.
This is more about minimizing the opportunity cost of not being in a better fund than about minimizing losses, as those who have understood the metric will clearly see.
If the rating was good, B or better, it would further reassure me that the fund I had shortlisted wouldn't shortchange me because I happened to jump in at the wrong time. So I wouldn't hesitate to get into such a fund if the other factors about it checked out.
If the only potential candidates for that asset class from other considerations and availability were in the C region or below in RARE grades, I would just buy an index fund or ETF instead for that allocation.
Every year or so, when I review my funds, I would reassess them the same way as if I were selecting them, and see whether the original assumptions still hold. I am not in the camp that hangs on to a fund for decades to give the manager a chance; it is too late by then if the manager has failed. If a fund has not met my realistic expectations within 3-5 years of selection, it is out. If I have done the selection carefully, my expectation is that I would rarely need to make changes, but I am not one to believe I should never change and just hang on.
But that is just me adopting this additional information to make a more informed choice. Some people throw darts or chase returns or pick the latest fad fund and pray their selection was good. My effort here is to share that additional information with people who think more like the former than the latter.
Benchmarked to VISGX and IWO for measuring alpha. A problem with the Small Cap category is that there are multiple indices tracked by different funds. These two benchmarks track different indices, so the substantial difference between them is the margin of error.
Even so, this category showed a pronounced barbell distribution, with a lot of funds outperforming on this metric (PRNHX blew the scale) and a lot of stinkers at the other end. Finding a good, consistent alpha-generating fund in this category should be easy, except that so many of them are closed.
Perhaps some of them will reopen after this correction.