Firstly, what is this "hallucinating"? "It's important to note that hallucination is a feature, not a bug, of AI," Sohrob Kazerounian, an AI researcher at Vectra AI, told Live Science. "To paraphrase a colleague of mine, 'Everything an LLM outputs is a hallucination. It's just that some of those hallucinations are true.' If an AI only generated verbatim outputs that it had seen during training, all of AI would reduce to a massive search problem."
So, the take-home message is that whatever AI generates needs to be tested and verified -- but not simply discarded out of hand.
Second, whether we like it or not, AI is here to stay. Satya Nadella, CEO of Microsoft, recently said that as much as 30% of Microsoft's code is written by AI. Beyond the help that LLMs like ChatGPT offer for routine tasks, there are serious, important advances in many fields from AI - see Quanta's recent series on AI in science at
https://www.quantamagazine.org/series/science-in-the-age-of-ai/
New technologies are rarely fully formed at the start, and there is always some hesitancy about adopting them. If it makes you feel better, Socrates (no less!) objected to writing because he felt it would undercut our ability to remember things - see
https://williamdare.com/2012/04/26/socrates-oral-and-written-communication-or-why-socrates-never-wrote-anything-down
There are lots of important questions about how we 'deal with' AI -- philosophical, political, economic, computational, and cybersecurity questions, as well as implications for investing. But dismissing it as 'stupid' is burying your head in the sand.
Comments
Firstly, if all AI did was copy, consolidate, and dump verbatim the data it was given as input, it would essentially be a search engine or a machine-based plagiarist -- just regurgitating. The notion of "intelligence" suggests that it is taking data and drawing "conclusions": taking the raw data and making qualified inferences about what the data may imply or suggest.
So, in making "assumptions" and drawing conclusions, it doesn't know enough to detect its own BS. It is not advanced enough to know when it is babbling. Some of the babbling happens to be correct by default, but much of the information cannot be properly digested or processed by the program.
Google "AI hallucinations" for more details.
Humans do something similar, sometimes called the Dunning-Kruger effect, characterized by the maxim, "Just enough information to be dangerous." A good example is the word salad that Trump produces because he doesn't understand a tenth of what he is told, but must simply make do. And needs to act authoritative. lol
To be fair, it was a bit OCD. Maybe Socrates was OCD? He certainly had no idea how much information we would need to keep track of at this point in time.
In my photo library, Lightroom took a while to sort even though I enter keywords (subject and location) when I process the RAW files. The dates embedded in the metadata do not always work. AI is enabled in Adobe Creative Suite, and there are lots of complaints about artifacts being introduced. For now, I have manually turned off the AI capability for my own use.
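On the embedded dates: here is a minimal sketch of why they can be unreliable, assuming Python with the Pillow library (the filename is hypothetical). If the EXIF capture date is missing, as it often is for scans or edited exports, about the only fallback is the file's modification time, which records when the file was copied or edited rather than when the photo was taken -- which is why sorting by date "does not always work."

    import os
    from datetime import datetime
    from PIL import Image  # pip install Pillow

    def capture_date(path):
        """Return the EXIF capture date if present, else the file's mtime."""
        with Image.open(path) as img:
            # DateTimeOriginal (tag 0x9003) lives in the Exif sub-IFD (0x8769).
            raw = img.getexif().get_ifd(0x8769).get(0x9003)
        if raw:
            # EXIF dates are stored as "YYYY:MM:DD HH:MM:SS".
            return datetime.strptime(raw, "%Y:%m:%d %H:%M:%S")
        # No EXIF date: fall back to modification time, which reflects the
        # last copy/edit, not the moment the shutter fired.
        return datetime.fromtimestamp(os.path.getmtime(path))

    print(capture_date("IMG_0001.jpg"))  # hypothetical filename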