
Here's a statement of the obvious: The opinions expressed here are those of the participants, not those of the Mutual Fund Observer. We cannot vouch for the accuracy or appropriateness of any of it, though we do encourage civility and good humor.


Reports say new OpenAI model "Q*" fueled safety fears

edited November 2023 in Off-Topic
Following is an excerpt from a current report from the Guardian:

OpenAI ‘was working on advanced model so powerful it alarmed staff’... "new model Q* fueled safety fears, with workers airing their concerns to the board before CEO Sam Altman’s sacking"
OpenAI was reportedly working on an advanced system before Sam Altman’s sacking that was so powerful it caused safety concerns among staff at the company.

The artificial intelligence model triggered such alarm with some OpenAI researchers that they wrote to the board of directors before Altman’s dismissal warning it could threaten humanity, Reuters reported.

The model, called Q* – and pronounced as “Q-Star” – was able to solve basic maths problems it had not seen before, according to the tech news site the Information, which added that the pace of development behind the system had alarmed some safety researchers. The ability to solve maths problems would be viewed as a significant development in AI.

The reports followed days of turmoil at San Francisco-based OpenAI, whose board sacked Altman last Friday but then reinstated him on Tuesday night after nearly all the company’s 750 staff threatened to resign if he was not brought back. Altman also had the support of OpenAI’s biggest investor, Microsoft.

Many experts are concerned that companies such as OpenAI are moving too fast towards developing artificial general intelligence (AGI), the term for a system that can perform a wide variety of tasks at human or above human levels of intelligence – and which could, in theory, evade human control.

Andrew Rogoyski, of the Institute for People-Centred AI at the University of Surrey, said the existence of a maths-solving large language model (LLM) would be a breakthrough. He said: “The intrinsic ability of LLMs to do maths is a major step forward, allowing AIs to offer a whole new swathe of analytical capabilities.”
Note- Text emphasis was added to the above.

Comments

  • Beyond these long-term issues, there is a near-term issue for #OpenAI & $MSFT.

    As their current contracts include sharing IP up to #AGI only, the issue for OpenAI becomes whether to more commercially develop #GPT4, or shift focus to #GPT5 (& leave GPT4 dev & billions for others).
    TwitterLINK
  • edited November 2023
    ”Many experts are concerned that companies such as OpenAI are moving too fast towards developing artificial general intelligence (AGI), the term for a system that can perform a wide variety of tasks at human or above human levels of intelligence – and which could, in theory, evade human control.”

    Two-minute clip from the 1968 film: ”2001: A Space Odyssey”

  • For subscribers, the NY Times published today the third installment of its series on AI. Very well done. Today's article treated various government attempts to keep a lid on AI, or at least to keep it safe. Most of the officials quoted admitted that they are way behind the eight-ball because of rapid technological advances. I think one European said anything we do today in the realm of legislation is likely to be perceived as "prehistoric."
  • edited December 2023
    @BenWP: Thanks much for your post on this- I had overlooked that interesting series from the NY Times. For subscribers who might want to take a look at those recent NY Times reports on AI, here are all of the links.

    11/22/23: Five Days of Chaos: How Sam Altman Returned to OpenAI
    Years before OpenAI’s near meltdown, there was a little-publicized but ferocious competition in Silicon Valley for control of the technology that is now quickly reshaping the world, from how children are taught to how wars are fought...

    12/3/23: Musk again! Ego, Fear and Money: How the A.I. Fuse Was Lit

    12/3/23: Who’s Who Behind the Dawn of the Modern Artificial Intelligence Movement

    12/5/23: Inside the A.I. Arms Race That Changed Silicon Valley Forever

    12/6/23: Five Ways A.I. Could Be Regulated


  • Thanks, @Old_Joe, for tracking down those links.

    I remain alarmed about AI without really understanding how it could get out of control. Maybe even more troublesome is the amounts of money the big tech firms are throwing around. I’ve never read about an arms race that ever made the world a safer place, just the opposite.
  • edited December 2023
    BenWP said:

    Thanks, @Old_Joe, for tracking down those links.

    I remain alarmed about AI without really understanding how it could get out of control. Maybe even more troublesome is the amounts of money the big tech firms are throwing around. I’ve never read about an arms race that ever made the world a safer place, just the opposite.

    @BenWP - You are correct to be alarmed. As bad as modern warfare / nuclear weapons are, there has always been at least some sort of human “check” to limit, diminish or prevent their use. We are about to lose that human element. Probably not in our lifetimes. But, ”coming soon to a planet near you.”

    This probably doesn’t deserve being posted, but some might find it mildly amusing (or alarming).
    https://fortune.com/2023/12/08/ai-pps-undress-women-photos-soaring-in-use/

  • Hate to have AI used in my doctor’s office for diagnosis instead of highly trained doctors. I can see it is coming to speed up the search based on the results, but the human factor still holds the key.
  • I would disagree @Sven. A second opinion from AI seems to be a great use of artificial intelligence for diagnosing a medical problem. Actually, AI could be the primary opinion and the human doctors would have the secondary, after reviewing AI's conclusion. Humans are infallible and doctors often have differing opinions. But in any case, both views would be a great combination, seems to me.

    I see AI as a collection of highly trained doctors. Not just 1 opinion, no matter how highly trained that one opinion is.

  • "Humans are infallible"...

    While some humans (a certain ex-president easily comes to mind) may believe this, it's obviously not true. I suspect that Mike meant to say "not infallible".
  • edited December 2023
    I posted a great quantum computing article a few years ago, but that is no longer available. Quantum computing is a major key. The 'newer' method is the major jump from the old processing technology.
    A newer view is this 13 minute video. Select the top video in the list from CBS 60 Minutes (13 minutes, 15 seconds).
    NOTE: hover the cursor in the lower video area to select (turn on) the 'closed caption' icon for those with hearing impairment.
  • @catch22- Wow! Thanks so much for that link- I had absolutely no idea that quantum computing had made such an advance. I thought that we were still at the point that every once in a great while they could get the damned thing to actually work right, but that for the most part it was still unreliable and unpredictable.

    Really interesting... and a very well-done piece by 60 minutes.

    Thanks, catch-
    OJ
  • @catch22 - Ditto what OJ said. And nice catch. I hope I'm around long enough to see it happen.
  • edited December 2023
    @Old_Joe and @Mark, et al. I forgot to include, on a personal note, that while we know the possible evil side of this technology, one may dream about the wonderful uses available, especially in medical diagnostics, treatments and related innovations.
  • I suspect that Mike meant to say "not infallible".
    I did @Old_Joe. Thanks for the correction.

    What's always screwed me up with that word is that I was indoctrinated in grammar school by the nuns saying that the Pope was infallible. Even at a young age I knew that was ridiculous, hence the confusion :) Did they mean he couldn't be wrong, or that he had never been wrong?
  • edited December 2023
    "What's always screwed me up with that word is that I was indoctrinated in grammar school by the nuns saying that the Pope was infallible. Even at a young age I knew that was ridiculous"

    @MikeM - You too, h'mm? In about the sixth grade I asked how it could be that murdering someone was a "mortal sin" and eating meat on Friday the same, so if you did either one you were sure to go to hell forever. Seemed a bit disproportionate. The nuns sure didn't like that question.

    As a teenager my Irish mother was giving me hell for something and I didn't even think it was wrong. I told her that she must surely be related to the Pope because she believed that she was infallible too. And she didn't like that observation at all either.

    Now you know why I am sometimes of little patience with a few (very few) of our MFO brothers and sisters. Never did like BS.