Watson Health layoffs, IBM's problems with A.I.; Nvidia and healthcare advances

edited June 2018 in Fund Discussions
Sometimes management can't settle on a clear path, or gets caught up in battles over competing directions.
IBM seems to remain stuck, IMHO. The WATSON supercomputer was quite the introduction for such a device in the A.I. world.
Apparently, something in the "human" side of IBM keeps getting in the way.
---I suggest they program WATSON with at least 5 scenarios and let it decide the pathway for the company.
What greater demonstration of the company's insight and future value than that?

https://spectrum.ieee.org/the-human-os/robotics/artificial-intelligence/layoffs-at-watson-health-reveal-ibms-problem-with-ai

Disclosure: no direct investments in IBM at this house

Comments

  • Add:
    Nvidia GPU speeds and advances into healthcare applications. Beyond IBM's difficulties described in the post above, Nvidia has had, and will have, bumps along the path too; but it is currently finding some success that benefits health diagnostics and us humans.
    The entry point to real-time, in-depth diagnostics of the human body is approaching. "Bones" from Star Trek would be impressed.

    https://medcitynews.com/2018/06/meet-the-preeminent-ai-company-on-earth-but-can-it-succeed-in-healthcare/
  • On the other hand, IBM seems to be doing well with their "Summit" operation:

    "the United States has regained the lead thanks to a new supercomputer built for the Oak Ridge National Laboratory in Tennessee by IBM in a partnership with Santa Clara’s Nvidia."

    "The Summit computer, which cost $200 million to build, is not just fast — it is also at the forefront of a new generation of supercomputers that embrace technologies at the center of the friction between the United States and China. The machines are adding artificial intelligence and the ability to handle vast amounts of data to traditional supercomputer technology to tackle the most daunting computing challenges in science, industry and national security."



    Link to article in the SF Chronicle
  • edited June 2018
    Can it simply be the BS around AI is catching up all around? We see too much money spent on Machine Learning, BlockChain and other crap with no real, explicitly stated goal other than to sound "cool" and "investable". The world of IT, such as it is - just like investment management - can tell a good story in a PowerPoint deck to receive funding. Beyond that, they can sometimes declare the effort a "success" as long as the next clueless executive keeps bankrolling it.

    This is nothing new. AI and BlockChain are just the new hype that "most people don't understand". The point is not whether they have potential. The point is the countless dollars that will be wasted in the name of "progress", citing stories about how we wouldn't have been able to land on the moon without such an attitude. The difference is that we are talking "IT" here, not moon exploration. You can't make shit up with the latter.

    I'm still waiting for the geniuses to do something with these new technologies that benefits society at large. No, the iPhone doesn't count. Neither does Alexa. Not when neither can get people out of poverty. No one was seeking profit in landing on the moon. IBM sought profit where none was to be found. Layoffs.
  • "Can it simply be the BS around AI is catching up all around?"

    @VintageFreak - Funny you should ask... I've been wondering exactly the same thing for a while now.
  • Very perceptive observations here. There was a lot of AI and neural net hype in the mid-1980s. (I worked in the field back then.) It's no coincidence that it's been a generation since then, after those disillusioned by the hype have retired.

    The main differences since the 1980s are that computers are faster and we have more data from the internet. There have been some incremental advances in machine learning, but no major breakthroughs on the fundamental problems of actual intelligence. Machine learning algorithms are still 'black boxes'.
  • It's not just that machines have gotten faster; memory has also become dirt cheap.

    A common response back then to the question of how Lisp machines would handle memory management was that you could wheel in another rack of memory. Sure enough, since then the memory issue has been "fixed".

    Remember Lambda machines?
    http://www.computerhistory.org/revolution/artificial-intelligence-robotics/13/290

    With the additional processing power and memory has come a shift from rule-based systems to learning systems, but as @Johann noted, they're not something new.
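
    A minimal sketch of that rule-based vs. learning-system contrast (the data, thresholds, and function names below are made up for illustration, not taken from the thread or the linked articles): a hand-written rule and a tiny perceptron can end up drawing the same kind of decision boundary.

    # Assumed toy example: flag a "reading" as abnormal.
    def rule_based_flag(temp_f, pulse):
        """Rule-based system: a human writes the thresholds explicitly."""
        return temp_f > 100.4 or pulse > 120

    def train_perceptron(samples, labels, epochs=50, lr=0.01):
        """Learning system: estimate a linear boundary from labeled data
        using the classic perceptron update rule (1950s-vintage idea)."""
        w, b = [0.0, 0.0], 0.0
        for _ in range(epochs):
            for (x1, x2), y in zip(samples, labels):
                pred = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
                err = y - pred
                w[0] += lr * err * x1
                w[1] += lr * err * x2
                b += lr * err
        return w, b

    # Four made-up (temperature, pulse) readings; label 1 = abnormal.
    samples = [(98.6, 70), (99.1, 80), (101.2, 130), (102.5, 125)]
    labels = [0, 0, 1, 1]
    w, b = train_perceptron(samples, labels)

    print(rule_based_flag(101.2, 130))            # True, by the written rule
    print(w[0] * 101.2 + w[1] * 130 + b > 0)      # True, by the learned boundary

    Either way, the "knowledge" ends up in a couple of numbers rather than a readable rule, which is the 'black box' complaint raised above.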
  • @msf is correct, IMO. It's because of cheaper memory that so many "stupid" ideas are now "genius". Nothing really changed in the "intelligence" behind the idea. It's not like a car from 1920 was improved upon in the year 2020. The idea behind neural nets has not changed.

    I think we are at a point where we have advances in technology, but we either can't find applications, OR can't find applications that benefit people, OR can't find applications where we can make money - but let's get some investment while we fool everyone around. So, BlockChain anyone?