Here's a statement of the obvious: The opinions expressed here are those of the participants, not those of the Mutual Fund Observer. We cannot vouch for the accuracy or appropriateness of any of it, though we do encourage civility and good humor.
I really, really hate to be the guy who has to tell you folks this, but the site is acting up again in a very similar way to the past few episodes.
I’ve been watching for a couple of hours, and if it's any help at all in analysis, the issue seems to run in spurts: the site behaves normally for maybe five minutes or so, then comes a longish period where it gets very slow to respond, and then it deteriorates further to no response at all.
It’s almost as if some process is running intermittently and using so much of the server's resources that everything else either runs very slowly or not at all. I’d think that if it were an electronic (equipment) problem it would likely just break and not work at all. That suggests that perhaps some sort of intermittent data process occurs which is almost more than the server can handle, or possibly a bandwidth issue where too many users are trying to access the site at the same time.
There must be some sort of analytic record which documents various parameters of server activity. I surely hope so, anyhow.
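If that kind of record does exist, even something as simple as counting requests per minute in the web server's access log would show whether the slow spells line up with traffic spikes. Here's a minimal sketch of that idea in Python; the log path and the Apache-style timestamp format are assumptions on my part, not details of the actual MFO setup.

```python
import re
from collections import Counter

# Assumed log location and Apache combined-log format; adjust for the real server.
LOG_PATH = "/var/log/httpd/access_log"

# Matches the date/hour/minute portion of a timestamp like [18/Nov/2025:07:31:02 -0800]
STAMP = re.compile(r"\[(\d{2}/\w{3}/\d{4}:\d{2}:\d{2}):\d{2} ")

per_minute = Counter()
with open(LOG_PATH, errors="replace") as log:
    for line in log:
        match = STAMP.search(line)
        if match:
            per_minute[match.group(1)] += 1  # key is "date:hour:minute"

# The busiest minutes; a large spike here would point toward a traffic problem
# rather than a hardware failure.
for minute, hits in per_minute.most_common(10):
    print(f"{minute}  {hits} requests")
```

If the busy minutes coincide with the periods when the board goes unresponsive, that would at least narrow the hunt toward traffic or some scheduled process rather than failing equipment.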
I've emailed David and Chip, and they're aware of the situation. A major complication is that, back in the beginning, the fellow who adapted the Vanilla software to satisfy the requests of the MFO users made a number of modifications to the basic Vanilla programming. While that may not be the cause of our problem, it makes it very difficult to sort out exactly what might be going on now.
We have to realize that all of this takes a significant toll on the time resources of David and Chip, who do all of this without significant compensation. Given that, I suggested to David that perhaps we might explore the possibility of hiring a programmer who specializes in Vanilla to see whether the existing platform can be debugged.
If David chooses that option (and he hasn't responded to my suggestion, so I have no idea what he might think of it), we users will need to seriously consider coming up with some financial resources to cover that expense.
Chip has responded to my emails, and she is very receptive to "hiring somebody who can take on some of the programming, and possibly even complete the server migration."
Chip also asks that "If you have any sense of where to find programmers skilled in WordPress, vanilla board, PHP, and MySQL, running on a CentOS server with cPanel for controls (our current tech stack), please let me know. I'd be happy to chat with them."
If anyone can be of any technical help at all on this, including possible leads to programmers meeting Chip's needs, please mention that here, and I'll inform Chip. Let me clarify something regarding my relationship with David and Chip: I have absolutely no "official" position of any sort with the management or operation of MFO.
I do, however, have a certain degree of communication access because of some minor early work I was involved with in the design of the MFO website. Because there is no way via the website for any of us to communicate with David or Chip if the site is in trouble, I try to keep them informed via email of any serious malfunctioning.
You are wonderful people--- all concerned. Computer ANYTHING is beyond ME. But I could certainly contribute, and hope that others might likewise be willing. Thanks for the update!
FWIW - I could not access the MFO site for about an hour this morning. The web message I received said that my browser (Safari) was working and the host (MFO) server was working, but Cloudflare (some kind of go-between, I guess) was not functioning properly. This all went down roughly between 7:30am and 8:40am, November 18, 2025.
Cloudflare is a "content delivery network." It mirrors website content on thousands of servers worldwide. Estimates indicate that 24.03 million active websites use Cloudflare globally. When a website is protected by Cloudflare, users connect to the nearest Cloudflare server. Primary benefits include faster response times and website protection from a flood of traffic.
The company wrote an email statement to Mashable which read: "Many of Cloudflare's services experienced a significant outage today beginning around 11:20 UTC. It was fully resolved at 14:30 UTC. The root cause of the outage was a configuration file that is automatically generated to manage threat traffic. The file grew beyond an expected size of entries and triggered a crash in the software system that handles traffic for a number of Cloudflare's services.
To be clear, there is no evidence that this was the result of an attack or caused by malicious activity. We expect that some Cloudflare services will be briefly degraded as traffic naturally spikes post incident but we expect all services to return to normal in the next few hours. A detailed explanation will be posted soon on blog.cloudflare.com. Given the importance of Cloudflare's services, any outage is unacceptable. We apologize to our customers and the Internet in general for letting you down today. We will learn from today's incident and improve."
From the article linked by @yogibearbull we learn that AI is not an attacker or a malicious actor.
We have a new form of the "computer did it" from the 90s. Bolding mine.
@Old_Joe Tell me something. Does this read that the AI configuration generator has a private war with what used to be a DoS-type attacker, or does it mean that the new DoS attacks are also AI-generated? (Or, more likely, Anna is going batty.)
...automatically generated configuration file used to manage threat traffic that “grew beyond an expected size of entries....
There’s no evidence that the outage was a result of an attack or caused by malicious activity, the spokesperson added.
@Anna- I'm sorry, but I have absolutely no significant expertise in the wonderful world of software. But I do believe that I've read that software can be designed to automatically initiate processes which are normally quiescent, but which can be activated under certain specific operating circumstances.
I've mentioned this with respect to the recent intermittent problems with the MFO software. For instance, consider that the database containing all of the MFO posting commentary has gotten bigger each and every day since the first day of MFO.
Can that database simply increase exponentially in perpetuity? Is it possible that it has now reached a size that some aspect of the operating software cannot handle easily? Is it possible that the operating software tries to protect the situation by running an automatically generated configuration file which is designed to compress (or otherwise handle) certain files? Is it possible that (if and when) that happens it slows down the MFO response capability?
I'm way out of my knowledge area here, but those seem to be reasonable questions under the circumstances.
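For what it's worth, one of those questions is easy to check directly: MySQL keeps row counts and size figures for every table in its information_schema views, so a short script can show whether the discussion tables have actually grown to a worrying size. The sketch below is only illustrative; it assumes the forum data lives in a schema named 'vanilla' and that the PyMySQL package is available, and the credentials are placeholders rather than anything from the real server.

```python
import pymysql  # assumes the PyMySQL package is installed

# Placeholder credentials -- not the real MFO server settings.
conn = pymysql.connect(host="localhost", user="readonly",
                       password="********", database="information_schema")

SIZE_QUERY = """
    SELECT TABLE_NAME,
           TABLE_ROWS,
           ROUND((DATA_LENGTH + INDEX_LENGTH) / 1024 / 1024, 1) AS size_mb
    FROM TABLES
    WHERE TABLE_SCHEMA = %s
    ORDER BY (DATA_LENGTH + INDEX_LENGTH) DESC
"""

with conn.cursor() as cur:
    cur.execute(SIZE_QUERY, ("vanilla",))  # 'vanilla' is an assumed schema name
    for name, rows, size_mb in cur.fetchall():
        print(f"{name:30s} {rows or 0:>12,} rows {size_mb:>8} MB")

conn.close()
```

If the largest tables turn out to be only a few hundred megabytes, ordinary growth of the posting database probably isn't the culprit; if they run to many gigabytes, it would at least be worth a closer look.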
@Old_Joe You are probably right. I probably got misled by the phrase "threat traffic that grew," which sounded like rapid external input rather than an internal database.
@Anna- my comments were strictly limited to "automatically generated configuration file", using recent MFO problems as an example.
With respect to "threat traffic that grew" as used by Cloudflare, I wouldn't hazard a guess as to the exact definition of "threat traffic". Seems to me that it could mean either traffic that deliberately threatened Cloudflare operation, or possibly just innocent traffic that exceeded Cloudflare's operational parameters.
I will give Cloudflare the benefit of the doubt that they keep the network system up to speed with whatever is needed to 'exceed' anticipated traffic. This is indicated at their website. Perhaps one of their customers had an attack that moved down the line.
This reminds me of the old 'buffer overflow', or too many 'tasks' running in a 'task manager' view of a system. Much of this over the years is related to upgrades in 'systems' and older devices no longer capable of being compatible. A whole different circumstance, but of the type my old brain relates to 'failures'.
Hi @Old_Joe et al- I forgot to mention a tiny piece of the messaging I saw earlier when attempting to go to MFO. The screen indicated both Chicago and Toronto as the failure locations for a Cloudflare connection. These locations are expected and normal for our connection from Michigan.
Comments
Additional note, at 20:35 PDST: I just noticed that the time stamp on this post is also wrong- it's showing 4:50PM.
I've attempted to alert Chip and David to the site problems via email.
It took 10 minutes for this post to post.
That's all that I know at this time. Stand by.
OJ
Thanks for your attention to this serious issue.
OJ
https://www.cnbc.com/2025/11/18/cloudflare-down-outage-traffic-spike-x-chatgpt.html