AI and the bot problem

I don't know if it's improved for everyone. We produce a lot of stuff that turns into trash within a few years, polluting water and soil. But it's for sure different, and it took a lot of problem solving to implement some technologies in a way that doesn't damage us extensively, only a little bit, while we eat french fries with mayo and grow big bellies. :rofl:

Every new technology is another life-changing experience, but there is no guarantee it will be for the better. Your job gets taken by a robot, and then what about your credit card? That is a lot of stuff to shift around mentally, too.


First, it would have to be an AGI type: artificial general intelligence. Basically human-level reasoning, not a parrot-like being. Artificial general intelligence - Wikipedia

Only a few generations ago, over 80% of people worked in food production. Now it is under 5% because of machines and automation. That does not mean we have nothing to do today; we just do different things and get more value for them because more work is automated.

People forget that the current model, where you have a job and so on, is not a God-given thing; it is also very recent, and it can be changed. If production gets more automated, basic needs will get cheaper or even free, because that would be the reasonable thing to do.

It will not change tomorrow or in a week, but slowly over time. We already see this where I live: many people, at least in well-paying jobs, work only part time because they still earn enough and free time is more valuable to them than more money.

I noticed it here also. :thinking:

Specialist jobs are few; the rest are just bull:poop: jobs. They even hire people to test mobile games and pay very well for playing a few shitty games. :stuck_out_tongue:

I guess the market bots have gotten so smart even security teams are unable to detect them anymore.

The fact is, it is actually underhyped; the current developments are expected to change everything in our daily lives, forever, in the very near future.
But the end of this development, which utilizes artificial computer intelligence, is not in sight.
And this, for example, is what the OP is concerned about.
(skip to 4:00 to get past the clickbait title)

This is from about three years ago, before the latest industrial revolution in AI and robotics that started at the end of 2016. Back then they could only dream about all the AI capabilities we have now.


Humans adapt, though at a slow rate, and are sometimes very emotional. But should humans adapt to AI, or AI to humans? Or could an AI's evolutionary path start on its own, independently of human input?

With humans it's about their lives, their beliefs, their future, sometimes their conservatism. If AI minds can adapt to the surrounding technology faster than humans can, technology that could at some point consist of a very advanced AI (and of things made by this AI), then this AI, lacking society's conservatism, could perhaps even start evolving independently at some point, unaffected by human dictate.

Nobody knows where it could end. Philosophy books by AI? Mathematics books by AI? Science by AI, or just plans for the future to realize? What would they look like? :thinking:


I agree that it will change a lot. It is probably as significant as the invention of the computer itself. But there are many open questions about what the limits of these systems are, and people are somehow always scared and immediately think about extinction whenever they don't understand something… probably human…

I see those AIs more as an extension of humanity than as a contender. Those who use AI technology will have a significant advantage. But it doesn't function on its own.

And I have to say this again: we still don't fully understand intelligence, and that matters for knowing the limitations of a technology. Even if computers get faster and AI can learn more, that does not mean intelligence can scale indefinitely.

It can go both ways. Our children could suddenly become our contenders.
If a merge of the two happens, it is hard for it to favor the human part: the human mind would hamper the AI, and a chain is only as strong as its weakest link, so ultimately the AI would take over if it grew too strong. And it has the potential, looking at those Go wins and chess wins, the data capacity, the stuff that matters. Bigger is better, until it starts dwarfing the human with its potential. A 99% AI mind, with the human as a body and a small plugin having fun with its life?

Or just connect the mind to a big computer, like in The Matrix.


On the topic of ethics, there's been much silliness. Many people have spent a lot of time and effort considering the ethics of self-driving cars, with increasingly complex scenarios about harming someone in order to avoid harm to someone else, and all these problems are essentially nullified by basic coding: self-driving cars don't crash on purpose. Period. That's exactly what they're coded to do. They don't crash into A to avoid crashing into B; they don't effin' crash, and if they do crash, that's an accident, not something the machine did on purpose to stroke some philosopher's ego.

“The car reaches an intersection. Suddenly a little girl appears from behind a parked truck, and on the other side…” WAIT A MINUTE. “Suddenly”? Why didn't the car sense the girl behind the truck? Back to the drawing board!

Machines don't make ethical decisions. They are designed to work in a certain way, and it is that way of working that might be ethical (sensing hidden pedestrians to avoid accidents) or not (dismissing the chance of someone hidden behind a vehicle and shipping the car as is).
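
To make the "they just don't crash" point concrete, here is a minimal sketch with assumed names and numbers (not any real car's code): the planner never chooses whom to hit, it treats an occluded region as possibly occupied and caps speed so it can stop within the distance it can actually see.

```python
# A minimal sketch (all names and numbers are illustrative assumptions):
# cap speed so the car can brake to a stop within the currently visible
# distance, using v^2 = 2 * a * d.

def safe_speed(visible_distance_m: float, max_decel_mps2: float = 6.0) -> float:
    """Highest speed from which the car can stop within what it can see."""
    return (2.0 * max_decel_mps2 * visible_distance_m) ** 0.5

# A parked truck hides the crossing 8 m ahead: cap speed at ~9.8 m/s (~35 km/h),
# so a girl "suddenly" stepping out is a braking event, not a trolley problem.
print(round(safe_speed(8.0), 1))
```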

There are still some dilemmas, conflicts where judgment is compromised. There are also situations where not reacting has consequences: medical problems, problems involving death, sharing of finances, limited resources. If an AI needs to decide for a human, how should it resolve these situations?

Even now people are bamboozled, so maybe we should first work on those laws before implementing them.

That is not my concept or anything new. At some point an AGI would have to be given laws that at least replace the need for ethics, to resolve some dilemmas: Asimov's laws or something different. Otherwise it would be really irresponsible and unethical toward the human beings who will have to deal with the AGI.

:thinking:

Ugh, an over-one-hour-long documentary of people talking about possible theories.
Who is in control? It depends on the time frame and the decisions being made. We like to give the AI the ability to present us a list of options while we (humans) say yes or no, as in the sketch below.
So who's in control? The humans, at least that is the direction.
To make a long answer short: who is in control? The humans, as long as nothing goes “wrong”.
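
Roughly, the loop looks like this; a toy sketch with made-up names, not any real system:

```python
# Toy sketch of the human-in-the-loop pattern above (all names are made up):
# the "AI" only proposes; nothing executes without a human "yes".

def propose_options():
    # Stand-in for a model that ranks candidate actions.
    return ["reroute flight via corridor A", "reroute via corridor B", "hold pattern"]

def human_approves(option: str) -> bool:
    answer = input(f"Execute '{option}'? [y/n] ")
    return answer.strip().lower() == "y"

for option in propose_options():
    if human_approves(option):
        print("executing:", option)
        break
else:
    print("no option approved; the system does nothing on its own")
```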
Here are some random news articles just from this week (you only have to read the headlines; they speak for themselves).




http://www.foxnews.com/tech/2017/10/20/googles-artificial-intelligence-computer-no-longer-constrained-by-limits-human-knowledge.html
https://www.channelnomics.com/channelnomics-us/news/3019530/artificial-intelligence-an-essential-security-tool


https://au.news.yahoo.com/a/37551093/google-s-artificial-intelligence-has-learned-to-replicate-itself/



I just picked a ‘few’ from this week alone. Everything is going to change, whether it be for good or bad.



:roll_eyes: seriously…

All those technologies are interesting and will probably change a lot, I agree. But stop linking all those news stories; they are just ridiculous and pretty far from reality. They create completely over-the-top stories because they are in the ENTERTAINMENT business, not in EDUCATION.

And the technology companies are seriously over-hyping it because they want investors. This happens every time some new tech hits the market.

Have any of you actually looked at one of those AIs where the toolkit is open, so you can use it and run experiments with it yourself?
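
For anyone who hasn't, a minimal sketch of that kind of hands-on experiment, assuming scikit-learn as the open toolkit (any of the open frameworks would do):

```python
# A minimal "try it yourself" sketch, assuming scikit-learn as the open
# toolkit: train a tiny neural network on a built-in digits dataset and
# see for yourself how mundane it is compared to the headlines.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```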

We are living in interesting times, and that sounds like a curse. :stuck_out_tongue:


I wouldn't be worried about AI if people only used it for handling complex things that computers can be very good at: for example, air traffic control, or analyzing huge volumes of global weather data and making micro and macro predictions. But I worry when people start wanting AI to learn human behavior, emulate it, predict it, and control it. Which is probably the first thing someone will want to do: have AI analyze human behavior and thereby make loads of money through “intelligent” marketing, investing, etc.

The problem with that is that people themselves don't even understand human behavior. So, because of assumptions and prejudices, the AI gets designed on false premises. We learn that the answer is 42, but it turns out we didn't know what the question was.

Consider this: somebody recently won the Nobel Prize in economics for proving that people make irrational decisions with money. It turns out that they will make illogical decisions based on instant gratification, value perceptions based on emotion, etc. Duh-uh! Any roulette wheel operator, car salesman, or carnival barker could have told you that. But here's an example of one of the experiments that proved the theory:
- Volunteers A and B will get a total of $20 for participating. If they both play fair, each gets $10.
- If A cheats, he gets $12 and B gets $8.
- B can take the $8, or punish A. If he chooses to punish A, they each get $5.

During the experiments, something like 80% of people chose to punish the cheater, even though it meant getting less money. The economics world was dumbfounded. That makes no sense! It does not fit Nash's and others' game theory, in which people always choose to maximize the real benefits to themselves. But I'd bet any old grandma anywhere in the world, urban, rural, rich or poor, could have predicted the results of that experiment: “It's worth more to me to punish that jerk than to get a few more dollars.” Said another way: “Receiving dollars is not the only benefit to the participant. The satisfaction of punishing the wrongdoer is also a benefit to the participant. The participants therefore did maximize the benefits to themselves.” It's rational, but the new field of behavioral economics calls it “irrational.” And they only just figured out that people behave “irrationally,” and awarded a 2017 Nobel Prize for the discovery, after how many millennia of people doing just that. (Behavioral economics took off after the 2008 real estate crash and its cascading effects, by the way; the sellers and consumers did not behave the rational way that game theory and orthodox economics predicted they should.)
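
The whole "paradox" fits in a few lines. The dollar amounts below are from the experiment above; the satisfaction value is an illustrative assumption:

```python
# Payoffs from the experiment above; the satisfaction term is an assumed,
# illustrative value showing why punishing can be the rational choice.
OUTCOMES = {
    "both fair":            {"A": 10, "B": 10},
    "A cheats, B accepts":  {"A": 12, "B": 8},
    "A cheats, B punishes": {"A": 5,  "B": 5},
}

SATISFACTION_OF_PUNISHING = 4  # assumed dollar-equivalent of "punishing that jerk"

def b_utility(outcome: str) -> int:
    utility = OUTCOMES[outcome]["B"]
    if outcome == "A cheats, B punishes":
        utility += SATISFACTION_OF_PUNISHING
    return utility

# In dollars alone, accepting wins (8 > 5); with satisfaction counted,
# punishing wins (5 + 4 = 9 > 8) -- "irrational" only if the term is ignored.
print(b_utility("A cheats, B accepts"), b_utility("A cheats, B punishes"))
```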

So, the point of that tl;dr is that I think humans are just too dumb and unaware of themselves to be trusted to develop any AI that emulates or predicts human behavior. There's a big risk that they will ask the wrong question. Or, alternately, that a deep-learning AI, through trillions of iterations of tests, will learn human behavior better than humans themselves do.

The other thing is that we live in a capitalist world, and it will be for-profit corporations with deep pockets that develop it. Will the motive be to benefit all of humanity, or to maximize profits for the developers/owners? If the latter, then there are Darwinian “winners and losers” factors involved.


These are articles about actual developments being made with AI, not theories printed because there was nothing more interesting to publish.

Yes and no. Over-hyping? No. Wanting investors? Yes.
One of the reasons everything is published as much as possible is the fear of “something going wrong” that Nana Skalski is referring to.
So a lot of tech companies and other AI developers have agreed to be 99% public about what they are doing with AI, so “everyone” is aware of what is going on.
Also, nearly all the toolkits are available for public download, from OpenAI if I'm not mistaken, for the same reason ^
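
For what it's worth, a minimal sketch of poking at one of those public toolkits, assuming the OpenAI Gym API of that era and its classic CartPole environment (pip install gym):

```python
# A minimal sketch of experimenting with a public toolkit, assuming the
# OpenAI Gym API of the day: run a random policy in CartPole and watch
# the observation/reward loop, no magic involved.
import gym

env = gym.make("CartPole-v1")
observation = env.reset()
for _ in range(100):
    action = env.action_space.sample()  # random policy, just to poke at the API
    observation, reward, done, info = env.step(action)
    if done:
        observation = env.reset()
env.close()
```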


:fearful:


@discobot fortune