I wouldn’t be worried about AI if people would only use it to handle complex things that computers can be very good at: air traffic control, say, or analyzing huge volumes of global weather data and making micro and macro predictions. But I worry about people wanting AI to learn human behavior, emulate it, predict it, and control it. Which is probably the first thing someone will want to do: have AI analyze human behavior and thereby make loads of money through “intelligent” marketing, investing, etc.
The problem with that is that people themselves don’t even understand human behavior. So, thanks to assumptions and prejudices, the AI gets designed on false premises. We learn that the answer is 42, only to find out we never knew what the question was.
Consider this: somebody recently won the Nobel Prize in economics for showing that people make irrational decisions with money. It turns out they make illogical decisions based on instant gratification, value perceptions driven by emotion, etc. Duh-uh! Any roulette wheel operator, car salesman, or carnival barker could have told you that. But here’s an example of one of the experiments that supported the theory:
-Volunteers A and B will get a total of $20 for participating. If they both play fair, each gets $10.
-If A cheats, he gets $12 and B gets $8.
-B can take the $8, or punish A. If he chooses to punish A, they each get $5.
During the experiments, something like 80% of people chose to punish the cheater, even though it meant getting less money. The economics world was dumbfounded. That makes no sense! It does not fit the game theory of Nash and others, in which people always choose to maximize real benefits to themselves. But I’d bet any old grandma anywhere in the world, urban or rural, rich or poor, could have predicted the result: “It’s worth more to me to punish that jerk than to get a few more dollars.” Said another way, “Receiving dollars is not the only benefit to the participant. The satisfaction of punishing the wrongdoer is also a benefit. The participants therefore did maximize the benefits to themselves.” That’s rational, but the new field of behavioral economics calls it “irrational.” And they only just figured out that people behave “irrationally,” awarding a 2017 Nobel Prize for the discovery, after how many millennia of people doing exactly that. (Behavioral economics only went mainstream after the 2008 real estate crash and its cascading effects, by the way. The sellers and consumers did not behave the rational way that game theory and orthodox economics predicted they should.)
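The grandma’s logic can be made concrete. Here is a minimal sketch (not from any actual study; the dollar payoffs are the ones above, and the “satisfaction” value is a hypothetical number I’m introducing for illustration) showing that once you count punishing the cheater as a benefit, punishing becomes the utility-maximizing choice:

```python
def b_utility(dollars: float, satisfaction: float = 0.0) -> float:
    """Total benefit to participant B: money plus any subjective payoff."""
    return dollars + satisfaction


def b_choice(satisfaction_of_punishing: float) -> str:
    """B either accepts the unfair split or punishes A,
    picking whichever maximizes total utility."""
    accept = b_utility(8.0)  # take the $8 and let A keep $12
    punish = b_utility(5.0, satisfaction_of_punishing)  # both drop to $5
    return "punish" if punish > accept else "accept"


# Orthodox game theory assumes satisfaction = 0, so B should accept:
print(b_choice(0.0))  # -> accept
# But if punishing the cheater is worth more than $3 to B,
# punishing is the utility-maximizing (i.e. rational) choice:
print(b_choice(4.0))  # -> punish
```

The “irrationality” only appears if you insist that dollars are the whole utility function; with one extra term, the 80% look perfectly rational.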
So, the point of that tl;dr is: I think humans are just too dumb and unaware of themselves to be trusted to develop any AI that emulates or predicts human behavior. There’s a big risk that they will ask the wrong question. Or, alternatively, that deep-learning AI, through trillions of iterations of tests, will learn human behavior better than humans themselves do.
The other thing is that we live in a capitalist world, and it will be for-profit corporations with deep pockets that develop it. Will the motive be to benefit all of humanity, or to maximize profits for the developers and owners? If the latter, then there are Darwinian “winners and losers” factors involved.