First, what is this 'hallucination'? "It's important to note that hallucination is a feature, not a bug, of AI," Sohrob Kazerounian, an AI researcher at Vectra AI, told Live Science. "To paraphrase a colleague of mine, 'Everything an LLM outputs is a hallucination. It's just that some of those hallucinations are true.' If an AI only generated verbatim outputs that it had seen during training, all of AI would reduce to a massive search problem."
So the take-home message is that whatever AI generates needs to be tested and verified -- but not simply discarded out of hand.
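As a concrete illustration of "test, don't blindly trust" -- a minimal Python sketch, where ai_sorted is a hypothetical stand-in for an AI-suggested function (not anything from the article) and we check it against a trusted reference before adopting it:

    # Treat AI-generated code as untrusted until it passes checks.
    def ai_sorted(xs):
        # Pretend this body came from an LLM; we verify rather than trust it.
        return sorted(xs)

    def verify(candidate):
        # Compare the candidate against a trusted reference on a few cases.
        cases = [[], [1], [3, 1, 2], [5, 5, 0, -1]]
        return all(candidate(c) == sorted(c) for c in cases)

    print("adopt" if verify(ai_sorted) else "discard or revise")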
Second, whether we like it or not, AI is here to stay. Satya Nadella, CEO of Microsoft, recently said that as much as 30% of Microsoft's code is written by AI. Beyond the help that LLMs like ChatGPT offer for routine tasks, there are serious, important advances coming from AI in many fields - see Quanta's recent series on AI in science at
https://www.quantamagazine.org/series/science-in-the-age-of-ai/

New technologies are rarely fully formed at the start, and there is always some hesitancy about adopting them. If it makes you feel better, Socrates (no less!) objected to writing because he felt it would undercut our ability to remember things - see
https://williamdare.com/2012/04/26/socrates-oral-and-written-communication-or-why-socrates-never-wrote-anything-down

There are lots of important questions about how we 'deal with' AI -- philosophical, political, economic, computational, and cybersecurity questions, as well as implications for investing. But dismissing it as 'stupid' is burying your head in the sand.