Do you remember the worldwide banking and real estate crash of 2008? One reason it happened was that an independent firm of economists, mathematicians, and IT people had developed a market predictor tool. It could gather huge amounts of data about buy/sell market patterns and so, supposedly, predict what to expect in the future. It was so amazingly next-generation and intellectually sound that Lehman Bros. or somesuch bought the product. And leaked the word out. “Oy investors, we’ve got this thing that incorporates chaos theory and ■■■■, while those other guys are still relying on a bunch of sweaty, greedy Wall Street human traders listening for bells to chime and such. Who are you going to trust? A mob of human hustlers and their 5 senses, or an insta-reacting algorithm?”
Well, after that, most of the competition jumped onboard and bought the algorithmic market predictor system too. Problem was, the math behind it was so hard that even the salesman couldn’t understand it or properly explain it. Of course the IT departments hearing about it couldn’t understand it either, much less the board members sitting through the presentation. But: “If they have it, no way around it, we have to have it too. Here’s 4mil or 10mil or whatever to keep us up to speed and in the race.”
Well, what happened was, the front-line guys approving mortgages in offices with mums and dads sitting in front of them did not use the tool. That’s a given. They got a commission for each mortgage application received and loan granted. Possibly those applications eventually went through the chaos-theory, human-behavior-trends market predictor tool before being approved. What do you think? Did they? And did the results make their way back down to the unit that decided Approve or Deny? Seems not… thus 2008. And the ten-year aftermath.
So… AI. On a theoretical level, the challenge is so intellectually clean. But can there ever be an AI project that won’t have some human self-interest behind the project and its funding?