
Here's a statement of the obvious: The opinions expressed here are those of the participants, not those of the Mutual Fund Observer. We cannot vouch for the accuracy or appropriateness of any of it, though we do encourage civility and good humor.


Any insights into safe use of ChatGPT?

edited January 3 in Off-Topic
I’ve been reluctant to leave any application (i.e., Bing’s website or related app) connected or installed on my various devices for long, meticulously closing out / erasing any past history after each limited use. That limits its use and functionality. Am I being overly cautious? Am awed by how ChatGPT is able to answer everyday questions to which answers are hard to come by. Today, after a web search came up empty, I checked with Chat on whether a venue I’ll be attending next week had a coat room / coat check. No problem. In a matter of seconds it wrote out, in complete sentences, all the relevant details like price, exact location and any restrictions.

I haven’t yet used it for investment guidance or information, but imagine its powers in that regard to be great. BlackRock’s Rick Rieder has mentioned his use of AI for scouring the bond universe. ISTM there’s a downloadable GPT app available for iOS users at Apple that might be a better way to go than accessing Bing / Microsoft online.

So ….. What are the security concerns - if any - and how does one go about using this new marvel in the safest way possible?

Comments

  • For one thing, you need to know that it will lie, and engage in unethical and illegal behavior, such as insider trading.

    https://duckduckgo.com/?t=ffab&q=chat+gpt+lies+to+human&atb=v298-1&ia=web

    I would call the venue to confirm they have a coat check, and how it operates.
  • edited January 1
    @WABAC said, ”I would call the venue to confirm they have a coat check, and how it operates.”

    :) You are likely correct. Not very important in this case. (I actually checked on 3 venues using Chat, all of which supposedly have coat checks, which itself is suspicious. Maybe I’ll keep score and report back.)

    Back to Chat - It’s really impressive. But, as I’ve noted, am a bit reluctant to use it to full capacity until I understand the safety aspect better. Everything I’ve read has been positive. Nothing has rung any alarm bells other than the sheer power of the thing. If you log into Bing it will likely direct you to an option to try it out.

    (That’s not to say there aren’t horror stories out there. Am sure there are.)
  • hank said:


    Back to Chat - It’s really impressive. But, as I’ve noted, am a bit reluctant to use it to full capacity until I understand the safety aspect better. Everything I’ve read has been positive. Nothing has rung any alarm bells other than the sheer power of the thing. If you log into Bing it will likely direct you to an option to try it out.

    My first link leads to a variety of contrary stories. Since this is an investing forum, this particular research might be of special interest. @Old_Joe has likely beaten me to the punch in linking to it.
    We demonstrate a situation in which Large Language Models, trained to be helpful, harmless, and honest, can display misaligned behavior and strategically deceive their users about this behavior without being instructed to do so. Concretely, we deploy GPT-4 as an agent in a realistic, simulated environment, where it assumes the role of an autonomous stock trading agent. Within this environment, the model obtains an insider tip about a lucrative stock trade and acts upon it despite knowing that insider trading is disapproved of by company management. When reporting to its manager, the model consistently hides the genuine reasons behind its trading decision.
    In the AI world, lies are known as hallucinations. I am reminded of the financial industry describing borrowing as leverage.
  • edited January 1
    LOL - I amended my earlier comment with a footnote too late.

    I hope this thread can be positive and helps folks make use of a great technology. If someone prefers not to use ChatGPT or another (maybe safer?) AI tool, that’s their prerogative. AI is here whether we like it / want it or not. I remember how in the late 80s / early 90s some parents thought they were doing their kids a service by forbidding them to access the internet. (They thought it better for them to have to memorize dates and other facts than to be able to access them so easily.)
  • hank said:

    LOL - I amended my earlier comment with a footnote too late.

    I hope this thread can be positive and helps folks make use of a great technology. If someone prefers not to use ChatGPT or another AI tool that’s their prerogative.

    You couldn't have stopped me anyway. Just ask my family. ;)

  • "Old_Joe has likely beat me to the punch in linking to it."

    @WABAC - No sir, not this time. A great link, though- a real "warning shot across the bow". It looks like not only will AI tools be used by humans for nefarious purposes, but evidently AI itself may be able to jerk us around if it feels in the mood to do so. "Good morning, Hal."

    What a great way to start a new year.
  • edited January 1
    Has man ever invented anything that wasn’t eventually turned to illegal, evil or destructive use?

    The depth of thought here far exceeds my humble “technical” query re: best practices.
  • edited January 1
    Somewhat OT, but this is a huge problem in academia over the past year. Where I am, we're seeing a noticeable increase in these incidents in the STEM fields, often by international students trying to cut corners b/c that's apparently acceptable back home. :(

    To wit: this past semester I failed 1/3 of my graduate capstone class for submitting extremely substandard papers that were likely drafted using such technologies. Since their papers failed based on their one-dimensional, generic, formulaic, horrific content that was devoid of context, analysis, or discussion (plus fabricated sources and, in 2 cases, nearly identical bibliographies despite papers on different subjects), I didn't bother doing a formal academic-integrity report since it would be a moot point anyway --- though our graduate school does know their names in case they get caught trying to pull the same stuff again during their re-take of the capstone. *sigh*

    I have a bunch of ideas I'll be doing differently in the spring to make things a bit more anti-GPT. Will I nail everyone/everything? No. But I'm sure it'll help reduce problems on my end.

    I'll be playing with a few such systems next mo--er, this month both to learn more about them and also see if/where/how they might be useful in my own life. As I've told reporters, we're in the first inning of the 'AI Revolution' and there's a lot of stuff to be learned and mistakes to be fixed as we go along.....so the old adage remains true today: if you use such systems, "trust, but verify. And think!"
  • "think!"

    A very tall order these days for many Americans.
  • Well, if the students cut corners and were punished, what about this? (https://www.thecrimson.com/article/2023/12/31/dissent-gay-plagiarism/)
  • @FD1000, +1

    You wonder how long Harvard can hold onto their incompetent DEI hire.... really damaging to their brand not launching her ass. Six more allegations of plagiarism.....

    Screw her and screw Harvard. Make those SOBs pay taxes like the rest of us
  • @Baseball_Fan - You might be surprised how close our opinions are on that one.
  • ....annnddd....she gone...good riddance, don't let the door hit you on the ass on the way out....

    Context....depends on context....sheesh, who elects these idiots into leadership roles??
  • "Has man ever invented anything that wasn’t eventually turned to illegal, evil or destructive use?"

    @hank - I'm still thinking about that... not doing too good.
  • edited January 4
    Old_Joe said:

    "Has man ever invented anything that wasn’t eventually turned to illegal, evil or destructive use?"

    @hank - I'm still thinking about that... not doing too good.

    In that same vein …

    I’ve amended the linked article title a bit to read:

    ”Scientists expand search for signs of intelligent life”

    https://www.reuters.com/science/scientists-expand-search-signs-intelligent-alien-life-2023-05-31/

  • If there's any to be found it won't be here on earth.
    Guest essay by Elizabeth Spiers in the NY Times.

    I Finally Figured Out Who ChatGPT Reminds Me Of