I suppose we will end up finding a number of answers to that question. Will some ChatGPT-style large language model manage to do some insider trading? Will it, like, ingest some corpus of data that includes inside information, and then use it to answer questions like “which stocks should I buy?” Will some big company’s corporate development department use the model to answer questions like “should we buy Company X and what price should we pay for it,” and will the developers of the model use that information nefariously? Will the model itself become sentient, get a Robinhood account and use that information nefariously? Etc.
Or I mean will someone ask ChatGPT a question like “what companies are frauds,” and ChatGPT will cheerfully and confidently say “oh Company Y is a total fraud, their revenues are super inflated,” and then people will short the stock, and then it will turn out that Company Y is fine and ChatGPT is just really good at confidently making stuff up? Is that securities fraud? By whom? (Or: Ask “what stocks are good,” ChatGPT says “Company Z is great, they have found a cure for cancer and their revenue will double,” people buy the stock, it was all made up.) Will sell-side analysts or journalists use ChatGPT to do their work, and will ChatGPT introduce market-moving factual mistakes?
Lots of fun possibilities. But the most immediate way in which ChatGPT is going to be securities fraud is the usual "everything is securities fraud" way: ChatGPT risk factors are starting to be included in securities offering documents, and the risks are starting to be realized:

1) ChatGPT is going to be disruptive to some number of businesses and industries.
2) Some companies will lose money because ChatGPT disrupts their business.
3) This will be bad, for them, and their stocks will drop.
4) Every bad thing that happens to a public company can be characterized as securities fraud: "You didn't sufficiently warn us about the bad thing, so we bought the stock thinking it was good, but then the bad thing happened and the stock dropped, so we were defrauded."

And I tell you what, when I see that a public company has announced bad news and its stock dropped, I look for the lawsuits. We are early yet — the stock dropped yesterday — but lawyers move fast; I have not yet seen any lawsuits filed, but at least two law firms have announced "investigations" of Chegg and are looking for clients.

Chegg Inc. plummeted 42% after warning that the ChatGPT tool is threatening growth of its homework-help services, one of the most notable market reactions yet to signs that generative AI is upending industries.
The company, which offers online guidance for students taking tests and writing essays, also gave revenue and profit forecasts for the current quarter that fell well short of analysts’ estimates. Chegg makes much of its money from subscriptions, which start at $15.95 a month, a revenue source that’s in peril if students see AI chatbots as an alternative to paying.
The impact of ChatGPT, an OpenAI tool that surged in popularity last year, began to be felt this spring, Chief Executive Officer Dan Rosensweig said in prepared remarks accompanying Chegg’s first-quarter earnings Monday.
At this point the lawsuits seem a bit far-fetched: “You should have warned us months ago that artificial intelligence would hurt your business” is unfair given how quickly ChatGPT has exploded from nowhere to become a cultural and business phenomenon. But now everyone is on notice! If you are not warning your shareholders now about how AI could hurt your business, and then it does hurt your business, you’re gonna get sued.
© 2015 Mutual Fund Observer. All rights reserved.
Comments
Stuff would have been B- to C+ back in the days when I took English classes in Jr High and High School. Lots of padding to get the word count up.