What Happens When Artificial Intelligence Works Too Well and Costs Profits?

Most corporations today are looking at ways to leverage Artificial Intelligence (AI) technologies to better their bottom line: to bring in more revenue, suggest purchases to new and loyal customers, and help streamline the supply chain. AI, of course, works well for all those things; unfortunately, sometimes it works too well. What happens when AI determines that the customer should be spending less money, or not purchasing the product or services at all? Hmm, you think. Well, now that I've piqued your curiosity, let me give you one of the first big examples of this that I ran across while preparing this article.

You see, there was a pretty interesting piece in Forbes on February 19, 2017, titled "MD Anderson Benches IBM Watson In Setback For Artificial Intelligence In Medicine," by Matthew Herper, which stated:

"The partnership between IBM and one of the world's top cancer research institutions is falling apart. The project is on hold, MD Anderson confirms, and has been since last year. MD Anderson is actively requesting bids from other contractors who may replace IBM in and report from auditors at the University of Texas – project cost MD Anderson more than $ 62 million and yet did not meet its goals. basis or functional capabilities of the system in its current state. "

Hmm, how should we interpret that? Does it work or not? It turns out it really does work well, maybe too well. After all, if the artificially intelligent Watson diagnoses an area as non-cancerous, offers a simple procedure to stop the problem, or deems a tumor benign and unimportant, then the hospital cannot charge for expensive procedures like chemotherapy. Think about that for a second: it works too well and hurts profits.

Further, IBM's Watson costs a lot of money, and now there are other market entrants that can do the same thing for far less; all those other computer vendors need is the data to put in, and it looks as if the medical AI realm is getting competitive. IBM's Watson was right 90% of the time, far better than human doctors, and with human doctors working alongside Watson, the accuracy rate climbs above 95%.

Where else might this happen? Well, what if a company is leasing AI services, management asks the AI system where they can save costs, and the AI system tells the executives to use fewer AI services or switch vendors to save money? Or it recommends a supply chain streamlining strategy for a large transportation company that no longer needs AI, because its operations are already as efficient as possible? If the AI is honest, it may find itself eliminating the need for its own services, and if it avoids suggesting something like that, it may be misrepresenting the best interests of its customers. Think on this.