
“Automated Insights”

edited December 2022 in Off-Topic
Checking market data as I normally do a few times a day, the blurb excerpted below popped up today on MarketWatch.

“Shares of Apple and Dow are trading lower Wednesday morning, sending the Dow Jones Industrial Average into negative territory. Shares of Apple AAPL, -2.26% and Dow DOW, -1.64% have contributed to the blue-chip gauge's intraday decline, as the Dow DJIA, -0.57% was most recently trading 156 points (0.5%) lower. Apple's shares are down $2.47, or 1.9%, while those of Dow are down $0.71 (1.4%), combining for a roughly 21-point drag on the Dow. Chevron CVX, -1.47%, Walmart WMT, -1.33%, and Walgreens Boots WBA, -1.31% are also contributing significantly to the decline. A $1 move in any of the Dow's 30 components results in a 6.59-point swing.”
Editor's Note: This story was auto-generated by Automated Insights, an automation technology provider, using data from Dow Jones and FactSet. See our market data terms of use.
While not totally surprising, the issue of automated press stories raises other questions: (1) Will good quality journalism suffer / disappear as robots play a larger role in what we read and hear? (2) Is this how the “talking heads” on Bloomberg, CNBC, and FOX are fed the scripts they read? (3) Can the robot be programmed to embellish the day’s script with a psychological, social, or political tone? (Can it be directed to sound “bearish,” “bullish,” “panicked,” “gleeful,” or “alarmist”? Can it be directed to inject a right-wing conservative political bent or a left-wing liberal slant into the day’s market news?) Kinda scary ISTM. Who’s running the show? (A quick sketch of the Dow-divisor arithmetic behind the quoted figures appears below.)
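
A minimal sketch of the arithmetic behind the quoted figures, assuming a Dow divisor of roughly 0.1517 (a value inferred from the article's own “6.59-point swing” figure rather than taken from an official source). The DJIA is price-weighted, so each component's point impact is simply its dollar move divided by the divisor:

    # Rough sanity check of the figures quoted above. DOW_DIVISOR is an
    # assumption inferred from the article's "6.59-point swing" claim.
    DOW_DIVISOR = 0.1517

    # The DJIA is price-weighted: index = sum(component prices) / divisor,
    # so a $1 move in any single component shifts the index by 1 / divisor.
    points_per_dollar = 1 / DOW_DIVISOR
    print(f"Points per $1 move: {points_per_dollar:.2f}")  # ~6.59

    # Combined drag from the quoted Apple (-$2.47) and Dow Inc. (-$0.71) moves.
    drag = (2.47 + 0.71) * points_per_dollar
    print(f"Combined drag: {drag:.1f} points")  # ~21

Run as-is, this reproduces both the “6.59-point swing” and the “roughly 21-point drag” from the excerpt.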

Comments

  • At least MarketWatch is disclosing that the information is generated using artificial intelligence software. As one can see, it reported the changes of a pre-set number of stocks along with a few words of commentary. Rest assured that this is not much in the way of quality financial journalism on market conditions.

    At universities, professors are watching for student cheating as AI-based chatbots such as ChatGPT are used to write essays. So scripts can be manipulated for the talking heads and delivered to the audience. Personally I like to read multiple sites to verify the market. MarketWatch is NOT one of them.
  • edited December 2022
    As someone who has employed personnel with the sole purpose of adapting bureaucratic reports to the issue at hand, I find this potentially very useful. As in - Every office should have at least one. Could it displace all activity in this category? Eventually, maybe. Could it displace a lot of repetitive office jobs? Time will tell.

    I admit I have an excellent example: a young colleague who was doing rather on-the-edge development of new instrument methods. A side result was new data on chirality in metal complexes. Publication of the data part of his research was routine. Many of the publications on new complexes could have been written using the structure of the previous reports. Indeed, turning out publications on the data was so routine that in just a few years he had accumulated ten times more publications than most young researchers. He could have done the measurements and the development of techniques and left his computer to draft the reports for him to review and add humanity to.

    (Of course, over my life I have learned to keep my teeth in my mouth. Some people think I suggest they can be replaced by AI when all I suggest is to allow AI to do things that AI can do. What people hear me say is that their learned profession is easily replaced and not so impressive after all.)
  • Hmmmmm. I read the news on the internet from my local TV stations and from back in my hometown. Who needs A.I. drivel when you can demonstrate with human reporters that they don't even know the English language! (Never mind CONTENT!) Proofread, much? Not hardly.
  • edited December 2022
    What I’m wondering about is the potential for a hedge fund or other big player to move markets near-term for their own benefit - especially by slanting the 24-hour news cycle. Granted, a human reporter could also do it. But there seems something particularly distasteful about the thought that it might be accomplished by tinkering with the algorithms inside a robotic script writer. Taking it a step further, if a particular stock could be touted or disparaged in this way, folks playing with a lot of leverage could make out like bandits on either the long or short end. A few cents one way or another in a share price doesn’t mean a lot to most of us, but for a big player employing leverage it could result in a very significant gain or loss.

    Language often carries connotations stretching beyond its literal meaning or adding emphasis. From the article: “sending the Dow Jones Industrial Average into negative territory” / “21-point drag” / “contributing significantly to the decline”
  • @hank: I'm not sure this isn't going on currently! A few years back employees were front-running stocks mentioned in magazines heading out the door to readers! Somewhat the same, just another way to make a buck!
    Happy New Year , Derf
  • Thanks! I’m waiting for the first mutual fund managed solely by AI to open. He, she, or it? That should make for a fascinating interview.
  • @LewisBraham- That VOX report is truly scary. It details how even the American developers of this technology are perhaps not being as careful as they should be in its constant development and improvement- "there’s a growing consensus that things could go really, really badly."

    But the article only briefly alludes to the next logical step in the evolution of AI: the deliberate development and use of AI as a weapon. Having spent a long life observing human behavior, I have no doubt at all that weaponizing this technology will occur sooner rather than later, and not necessarily by nations that are the usual suspects in this sort of thing.

    Once AI is turned loose in the weapons arena, how would it be possible to counteract or neutralize it?
  • 4:8 Cain said to Abel his brother, "Let us go out to the field." And when they were in the field, Cain rose up against his brother Abel, and killed him.

    ...Since forever. Since it all began.
  • Exactly. More and more my wife and I are glad that we are as old as we are. "This place" is going downhill really fast.
  • edited December 2022
    @Old_Joe: Are you thinking they, AI, will hold the launch codes? From what I remember it would take two Robos to launch a missile.
  • @Derf- You know, advanced AI wouldn't need to resort to anything as crude as that. Why blow everything up when you can develop a few viruses that would get rid of 95% of humans and leave everything else standing and alive? It's for sure that the few humans left wouldn't have the resources to disable the AI then running the world.

    The world would probably be better off anyway.
  • The movie “I, Robot” (2004), starring Will Smith, is quite enlightening on AI. The robots become conscious of themselves and their surroundings, and they turn on the programmer and the human race.
    https://en.wikipedia.org/wiki/I,_Robot_(film)

    Similar themes have been used in the original Star Trek series.
  • edited January 2023
    Topic has certainly broadened. First experience in a 100% robotic car wash yesterday. Payment is via touch screen with a remarkably human-like voice. Damn good wash, wax & dry for $10. It’s cold here, so the big doors open and close automatically, allowing entry and exit. Spooky inside - especially at night. Possibly the genesis for a Hitchcock-like film…?

    Yes, warfare by AI is a terrifying conjecture. I harbor some “far out” thoughts on the subject:

    - I believe life is prolific in the universe.

    - I also believe there is / or has been a wealth of what we term “intelligent life” out there.

    - The reasons our attempts to confirm the latter have proven futile to date are twofold. The first is the vastness of space and the limited speed at which light / radio waves travel. The second is the inevitability that advanced civilizations self-destruct not long after acquiring advanced technology (about where we are today). If an advanced species can’t totally extinguish its existence through war, the robotic / AI war machines it has in place will finish the job. To wit - the existence of an advanced species may appear as but a brief momentary luminescence viewed through the vastness of time.

    HAPPY NEW YEAR
  • edited January 2023
    I wouldn’t count humanity out just yet: https://en.m.wikipedia.org/wiki/Human_extinction#Risk_estimates
    Simply by discussing the extinction risks with as clear a head as possible—being neither foolishly optimistic nor grotesquely pessimistic—we take one small step towards preventing extinction from happening. The problem is getting a clear head about the issues. But the fact that people are discussing the dangers of AI before anything terrible happens is a good thing. It creates the possibility of safeguards being imposed.

    I feel the futurists and enforced optimists generally gloss over the risks of their envisioned technological utopia because they have something to sell, usually related to capitalist markets they are trying to create and/or corner. The folks behind the original World’s Fair never told us of all the pollution that would be created by their automated and automobile-fetishizing visions of the future. I think such folks should be put in stocks in the public square and have tomatoes thrown at them for long periods of time until they admit what they’re really up to—that they care more about profit than people.

    Of course, fear and pessimism are also for sale and are another means of control, of maintaining the status quo, the goal being to keep the existing market and industry leaders in charge. So maybe the pessimists need to be in stocks too until they admit they were just trying to prevent any changes from happening just so they could stay in power even if some of those changes would benefit humanity.
  • Would quantitative-driven algorithms used in today’s investing be considered a form of AI? If they are, Vanguard, BlackRock, and other firms have already deployed them as a tool to guide their factor-based investing. Perhaps part of this discussion should be reposted in “Other Investing”.
  • I think given the wide ranging nature of this conversation it belongs here. That said, a new discussion started about AI investing might make sense in the Other Investing category. I would say, though, that algorithms written by human programmers in a fixed way are a little different from AI. The difference comes from the concept of “machine learning.” Can the machine or algorithm teach itself to invest and improve upon what the human programmers wrote, adapting to changing markets?

    So, I wouldn’t think of a strict rules-based algorithm used by a shop like Vanguard to buy stocks with low price-to-book values as AI. If the machine could learn by itself that price-to-book is no longer working as a value metric and thus needs to add price-to-cash-flow instead, that would begin to look more like AI—at least to me. The machine must teach itself. (A rough sketch of that distinction follows below.)
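
    A minimal sketch of that contrast in Python, using made-up tickers and numbers (this is not any firm’s actual screen or model): the fixed value screen never changes its rule, while the toy “learning” step adjusts factor weights based on how well each factor ranked last period’s returns.

        # Hypothetical universe (all figures invented):
        # ticker -> (price/book, price/cash-flow, next-period return)
        universe = {
            "AAA": (0.8, 5.0, 0.04),
            "BBB": (1.2, 4.0, 0.07),
            "CCC": (3.5, 15.0, -0.02),
            "DDD": (0.9, 12.0, 0.01),
        }

        # 1) Rules-based screen: the rule is fixed and never changes,
        #    no matter how well or badly it performs.
        def value_screen(stocks, max_pb=1.0):
            return [t for t, (pb, _, _) in stocks.items() if pb <= max_pb]

        # 2) Toy "learning" step: count how often each factor's "cheap" signal
        #    lined up with a positive return, then shift weight toward the
        #    factor that did better. Real machine learning is far more
        #    involved; the point is only that the rule itself adapts.
        def update_weights(stocks, w_pb, w_pcf, step=0.1):
            hits_pb = sum((pb < 1.5) == (r > 0) for pb, _, r in stocks.values())
            hits_pcf = sum((pcf < 10) == (r > 0) for _, pcf, r in stocks.values())
            if hits_pcf > hits_pb:
                w_pb, w_pcf = w_pb - step, w_pcf + step
            elif hits_pb > hits_pcf:
                w_pb, w_pcf = w_pb + step, w_pcf - step
            return w_pb, w_pcf

        print("Fixed screen picks:", value_screen(universe))
        print("Adapted factor weights:", update_weights(universe, w_pb=0.5, w_pcf=0.5))

    In the second function the rule is no longer hard-coded; which factor gets more weight depends on the data it just saw, which is the (very simplified) sense in which the machine “teaches itself.”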
  • Yes, I think that your perspective regarding a difference between machine-learning AI and investing algorithms is a good one.
  • edited January 2023
    AIEQ seems closer to AI and so far results have been meh: https://etfmg.com/funds/aieq/
    Another one is AIVL, also a poor performer so far: https://wisdomtree.com/investments/-/media/us-media-files/documents/resource-library/investment-case/the-case-for-ai-enhanced-value.pdf The question is will they get better? Will they learn faster than we do or at least faster than the human managers who routinely get paid more than we ever have for lagging the market?
  • edited January 2023
    Abstracts written by ChatGPT fool scientists (Article: The Guardian)

    PBS News Hour on Saturday aired a segment in which they interviewed a college professor in the language arts specifically about the problem of receiving fake essays from students. No really good answers. Lots of concerns.

    To reduce it to its simplest form, I ask - ”Will AI someday be capable of duplicating the literary eloquence of Scott Fitzgerald?” Oh horrors! One stems from the soul. The other possibly from silicon computer chips.

    ISTM - The more technologically advanced society becomes the harder it is to differentiate truth from fiction.
  • edited January 2023
    "ISTM - The more technologically advanced society becomes the harder it is to differentiate truth from fiction."

    That is for absolutely certain. And it's also certain that politicians and political parties will use and abuse this situation until we no longer have any idea what is a fact and what is not.
  • Hi @Old_Joe
    That is for absolutely certain. And it's also certain that politicians and political parties will use and abuse this situation until we no longer have any idea what is a fact and what is not.
    Perhaps, too often now, we're misled about fact or fiction by the human desire to bend the facts to suit the purpose.
    No software program required to suit that purpose. It is surely a part of political positioning today, especially on social media. Hell, the GOP candidates for high offices in the State of Michigan during the recent mid-terms didn't have the ability to present a 'why someone should vote for them,' only that the other party is evil and likely belongs to some form of 'communist party'; versus just being regular folks whose beliefs are not fully 'libertarian' or further 'right' in nature.
    A few of the right-sided folks I still have some conversation with fully know that the 'inflation' battle cry presented during the mid-terms, and now, has little to do with the party in power but was needed to present a cause for the problem. AND then there are those who are fully clueless about the causes of inflation upon their lives. Too many of these folks don't have much in the way of critical thinking skills ... sadly. Their journey through life may not be very pleasing.
  • edited January 2023
    Building on @catch22’s observations.

    One distortion (err, perhaps “half truth”) a decade or two ago was the belief somehow conveyed to workers - especially the non-unionized - that foreign workers were taking their jobs away and lowering their standard of living. Likely in that appeal were echoes of racism / nationalism, though not overtly stated.

    As a consequence of those over-hyped claims the U.S. imposed restrictions and higher tariffs on imports and cut back on legal immigration. But now the same manufactured goods cost more money because someone has to pay the import tax and because in many cases it costs more to make something here at home than in Mexico or China. Our reactionary measures contributed to supply chain issues, labor shortages here at home, and scarcity of some products. Generally all this has pushed inflation higher for those who can least afford it - the lower income and those subsisting solely on SS.

    The UK also shot itself in the foot with Brexit - another populist agenda. You’d be hard-pressed to find an economist who’d claim the country is better off now than before. (Larry Summers compares the UK’s financial situation today to that of an undeveloped “emerging markets” economy.)
  • edited January 2023
    Worth a read if you can access the story: https://nytimes.com/2023/01/15/opinion/ai-chatgpt-lobbying-democracy.html
    ChatGPT could automatically compose comments submitted in regulatory processes. It could write letters to the editor for publication in local newspapers. It could comment on news articles, blog entries and social media posts millions of times every day. It could mimic the work that the Russian Internet Research Agency did in its attempt to influence our 2016 elections, but without the agency’s reported multimillion-dollar budget and hundreds of employees....

    ....Platforms have gotten better at removing “coordinated inauthentic behavior.” Facebook, for example, has been removing over a billion fake accounts a year. But such messages are just the beginning. Rather than flooding legislators’ inboxes with supportive emails, or dominating the Capitol switchboard with synthetic voice calls, an A.I. system with the sophistication of ChatGPT but trained on relevant data could selectively target key legislators and influencers to identify the weakest points in the policymaking system and ruthlessly exploit them through direct communication, public relations campaigns, horse trading or other points of leverage.

    When we humans do these things, we call it lobbying. Successful agents in this sphere pair precision message writing with smart targeting strategies. Right now, the only thing stopping a ChatGPT-equipped lobbyist from executing something resembling a rhetorical drone warfare campaign is a lack of precision targeting. A.I. could provide techniques for that as well.

    A system that can understand political networks, if paired with the textual-generation capabilities of ChatGPT, could identify the member of Congress with the most leverage over a particular policy area — say, corporate taxation or military spending. Like human lobbyists, such a system could target undecided representatives sitting on committees controlling the policy of interest and then focus resources on members of the majority party when a bill moves toward a floor vote.

    Once individuals and strategies are identified, an A.I. chatbot like ChatGPT could craft written messages to be used in letters, comments — anywhere text is useful. Human lobbyists could also target those individuals directly. It’s the combination that’s important: Editorial and social media comments only get you so far, and knowing which legislators to target isn’t itself enough.

    This ability to understand and target actors within a network would create a tool for A.I. hacking, exploiting vulnerabilities in social, economic and political systems with incredible speed and scope. Legislative systems would be a particular target, because the motive for attacking policymaking systems is so strong, because the data for training such systems is so widely available and because the use of A.I. may be so hard to detect — particularly if it is being used strategically to guide human actors.
  • edited January 2023
    Purchases of imitation ChatGPTs soar on App stores.

    ”A sketchy app claiming to be the bot ChatGPT has soared up App Store charts, charging users a $7.99 weekly subscription to use a service that is entirely free to use on the web and seemingly has no affiliation to the actual bot.”

    https://www.macrumors.com/2023/01/09/chatgpt-app-store-apps/

    This thing (the real one) is both enticing and scary at the same time. Anyone who earns a living thinking and writing has to be intrigued by its potential to lessen the load.