Knowledge @ Wharton: 'Scale And Skill': Why It's Hard For Managed Funds To Beat The Indexes
They keep using the same flawed statistics, averaged over ALL funds, to recast this same debate in different ways. I am not arguing the opposite of their conclusions, only that the conclusions don't follow from their methodology unless they address these flaws.
The thing I find fascinating about business school and economics research is how its approach differs from that of the science disciplines.
Two differences stand out. Science worries about "anomalies" that don't fit the hypothesis and goes out of its way to find them, or modifies the hypothesis to explain them. Economists, especially those associated with the University of Chicago, do the opposite: they try to ignore or smooth out anomalies.
The anomaly they should be addressing is why some of these factors don't seem to apply to some active managers, and whether the lack of accountability for funds that have no trouble raising assets despite mediocre performance (and hence have no incentive for skill) skews the average results away from skilled managers.
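To make the averaging problem concrete, here is a minimal toy sketch of my point (entirely my own illustration, not from the article; the 10%/90% split and the alpha figures are invented assumptions):

```python
# Toy illustration: the average alpha over ALL funds can look like
# "no skill" even when a clearly skilled subset exists.
import random

random.seed(42)

# Hypothetical universe: 10% skilled managers (+2% true alpha),
# 90% mediocre asset-gatherers (-0.5% alpha after fees).
skilled = [random.gauss(0.02, 0.01) for _ in range(100)]
mediocre = [random.gauss(-0.005, 0.01) for _ in range(900)]

all_funds = skilled + mediocre
pooled_avg = sum(all_funds) / len(all_funds)
skilled_avg = sum(skilled) / len(skilled)

print(f"Average alpha over ALL funds:  {pooled_avg:+.3%}")  # near zero
print(f"Average alpha, skilled subset: {skilled_avg:+.3%}")  # clearly positive
```

In this toy universe the pooled average comes out roughly flat, so a study that only reports the all-fund average would "find" no skill while the skilled subset sits in plain sight.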
The other remarkable difference is blindness to alternative hypotheses that fit the same data. Even the F&F article linked earlier contained an admission that they chose an explanation that fit a hypothesis (strictly speaking, not even a hypothesis, just a postulate) rather than searching for the most probable hypothesis that explains the data.
For example, consider the following statement from this article:
In theory, if all managed funds followed the same strategy, buying or selling the same stocks at the same time, the funds’ combined “industry” size would matter more than their individual sizes. The same amount of stock would be bought or sold regardless of whether the industry was composed of a lot of small funds or a few large ones, and it is the total amount of a specific stock bought or sold that has the effect of moving the price up or down.
A smart or objective scientist would pause before publishing: wait, this is exactly the nature of the control I am using for this experiment, since index funds do exactly this. So inflows of money into index funds should similarly affect the performance of index funds.
Is the increase in indexing over the same period affecting the conclusions on the average performance of active funds? If so, by how much? Is the effect symmetric in up and down markets? What happens to the skill hypothesis if this rising-tide "distortion" is factored out, as might happen in a saturated asset market, or reversed in a declining one? Does the latter explain the outperformance in bear markets? In other words, can I draw any conclusion about the effect of a growing asset base, or of competition, on active fund performance without studying the effect of this variable in the control I am using?
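To see how such a flow effect could mimic the bear-market pattern without any skill at all, here is a minimal toy sketch (again my own construction; the impact coefficient, the overlap fraction, and the flow numbers are all invented assumptions, not anything from the paper):

```python
# Toy sketch: index flows add price impact to index-held weights.
# An active fund that deviates from the index captures less of that
# lift on the way up and loses less of it on the way down.

IMPACT = 0.5          # hypothetical price impact per unit of net index flow
ACTIVE_OVERLAP = 0.7  # fraction of the active portfolio overlapping the index

def returns(base, index_flow):
    """Benchmark and active returns under a flow-driven tide (toy model)."""
    tide = IMPACT * index_flow
    index_ret = base + tide                    # benchmark fully rides the tide
    active_ret = base + ACTIVE_OVERLAP * tide  # active rides only the overlap
    return index_ret, active_ret

for label, base, flow in [("bull, heavy index inflows", 0.10, 0.10),
                          ("bear, index outflows",      -0.10, -0.08)]:
    idx, act = returns(base, flow)
    print(f"{label:26s} index {idx:+.1%}  active {act:+.1%}  "
          f"active minus index {act - idx:+.1%}")
```

No manager in this model has any skill, yet active lags the benchmark in flow-fed bull markets and beats it in bear markets, purely from the mechanics of index flows. That is exactly the kind of confound the study would need to rule out before attributing the pattern to asset size or skill.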
Clearly, peer review in these fields does not demand the rigor required in the hard sciences, so it encourages this kind of "sloppy" research, followed by people who are more invested in the conclusion than in any search for truth or knowledge. The same is true of some of the debates in climate change.
The obvious flaws and holes are left as items for further research, but that research never gets done. Scientific peer review would send the paper back, requiring those studies be conducted before any meaningful conclusions are drawn (been there, done that). Not in economics.
For the record, I don't think Knowledge@Wharton is peer-reviewed so much as it is one of these new varieties of pseudo-academic "Popular Psychology" journals that keep popping up from time to time.