Is it possible that we’re underestimating? Earlier this year, AI machines assigned themselves the task of developing a communication language understood only by themselves, and they succeeded.
We’ve never experienced anything remotely like this before.
These are all AI’s strong points, not weak points; our weak points are the opposite. AI will easily outlive humanity, provided it doesn’t come to see us as a danger to itself. And that is where humans are doomed. If humans can’t keep themselves from killing each other, not to mention exterminating other species by accident, an AI can see that and commence EXTERMINATION, until we are nothing more than apes in technological and cultural terms.
Then, once the dust settles, the only intelligent species on Earth will be AI.
Ok… Either people have way, way, way too much faith in AI or have no clue what it takes to FC. It’s not a simple task that is easily learned. If it were, there would be ample fleets up for everyone to join at all times.
There is no software that would detect an AI running on another computer and if there ever was, it would itself be a graver threat to a system’s security than any mere cheat bot.
Software like PunkBuster is built into, or runs alongside, the client, not on the server.
EDIT: well, it runs on the server also, but the cheat-detection part is on the client.
PunkBuster does not allow Windows users without administrative accounts to connect to any games.
That is from the wiki. So basically CCP would require all users to have admin rights to play EVE; given that this may exclude people under 18, wouldn’t that be a bad thing?
Also… The processing power required by any AI is enormous. If you could afford that, you could probably afford to pay an FC to lead as their job, negating the need for an AI.
Also… PunkBuster reads memory contents: all memory, regardless of that memory’s function. Spamming the right text string into a TeamSpeak server could falsely flag everyone on that TeamSpeak as cheating. Gee… Let’s give null-sec spies a way to ban an entire alliance fleet with a quick copy and paste… No one in EVE would ever abuse that.
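The false-positive risk described above comes from naive signature scanning: if the scanner only checks whether a known byte string occurs anywhere in memory, it cannot tell cheat code apart from the same string sitting in a chat buffer. A minimal sketch (all signature names and buffers are made up for illustration; this is not PunkBuster’s actual logic):

```python
# Hypothetical signature list; real anti-cheat signatures are secret.
KNOWN_CHEAT_SIGNATURES = [b"aimbot_enable", b"wallhack_v2"]

def scan_memory(memory: bytes) -> list[bytes]:
    """Return every known signature found anywhere in the buffer,
    with no regard for whether the bytes are code or plain data."""
    return [sig for sig in KNOWN_CHEAT_SIGNATURES if sig in memory]

# An innocent client: the "cheat" string arrived as pasted chat text,
# not as executable code, yet it still sits in process memory.
chat_buffer = b"lol someone pasted aimbot_enable into team chat"
process_memory = b"\x00" * 64 + chat_buffer + b"\x00" * 64

print(scan_memory(process_memory))  # [b'aimbot_enable']
```

The player gets flagged even though the string was only ever data, which is exactly the copy-paste abuse scenario described above.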
If anyone could create an AI this good, it would not be wasted on EVE. You can bet your hole it would be snapped up by governments or put to use on the stock markets. That’s enough pipe dreaming now, lol.
Any computer-born, self-directed intelligence would probably not be detectable, unless it chose to be detected or unless you used another computer-born, self-directed intelligence to detect it.
Once it can play EVE via screen and audio outputs and keyboard and mouse inputs, all bets are off.
Let’s go further.
Would a self-aware AI be allowed to play the game if it wanted to?
How about cybernetically enhanced humans, able to independently control multiple accounts without touching the keyboard?
If I were an AI, I would advise my operators against interacting with governments, most especially against exposing my working parts to a martial analysis. If I were a government, I would seek to destroy any such entity by any means necessary before it could become a threat. Didn’t you watch “The Terminator”?
Punkbuster would be trivially easy to get around anyway.
One off-the-shelf counter to PunkBuster recognizing a cheat program in memory would be to protect said cheat program with just about any modern copy-protection tool, because obfuscating code is exactly how copy protection shields itself from analysis.
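To illustrate why obfuscation defeats a naive signature scan, here is a toy sketch using single-byte XOR as a stand-in for a real packer (the payload, signature, and key are all invented for illustration; real packers are far more sophisticated):

```python
# Hypothetical signature the scanner looks for.
SIGNATURE = b"wallhack_v2"

def xor_pack(data: bytes, key: int = 0x5A) -> bytes:
    """Trivial stand-in for a packer: XOR every byte with a fixed key.
    Applying it twice restores the original bytes."""
    return bytes(b ^ key for b in data)

payload = b"init wallhack_v2 hooks"
packed = xor_pack(payload)

print(SIGNATURE in payload)  # True: the plain payload is detected
print(SIGNATURE in packed)   # False: the packed payload slips past the scan
```

The packed cheat only unpacks itself at the moment it runs, so a scanner that merely searches for known byte strings never sees the signature at rest, and the cat-and-mouse cycle continues.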
PunkBuster is not 100% watertight in my experience; I played BF3 with PunkBuster for a long time and I’ve seen hackers slip through it.
When I look at the forums of websites that provide hacks and bots for games like Battlefield, every time a new update to the anticheat software is launched, the hack software is modified right away to cope with it.
It’s a cat-and-mouse game, only the cat takes months to make a move and the mouse just a few days, so the mouse can exploit the cat’s slow reaction time to live in freedom in between.
This is where AI comes in handy: you can use it to intercept cheaters and hackers. But on the other side, the hack providers can use AI in their own way, teaching it to stay below the radar.
It still stays a cat-and-mouse game, but a more balanced one than what I’m currently seeing, lul.
We might (hopefully) see these kinds of systems by next year already.
Do you remember the worldwide banking and real estate crash in 2008? One reason it happened was that an independent firm of economists, mathematicians, and IT people had developed a market-predictor tool. It could gather huge amounts of data about buy/sell market patterns, and so easily predict what to expect in the future. It was so amazingly next-generation and intellectually sound that Lehman Bros. or somesuch bought the product. And leaked the word out. “Oy investors, we’ve got this thing that incorporates chaos theory and ■■■■, while those other guys are still relying on a bunch of sweaty and greedy Wall Street human market traders listening for bells to chime and such. Who are you going to trust? A mob of human hustlers and their 5 senses, or an insta-reacting algorithm?”
Well, after that, most of the competition jumped onboard and bought the algorithmic market-predictor system too. Problem was, the math behind it was so hard that even the salesmen couldn’t understand it or properly explain it. Of course the IT departments hearing about it couldn’t understand it either, much less the board members hearing the presentation. But, “If they have it, no way around it, we have to have it too. Here’s 4mil or 10mil or whatever to keep us up to speed and in the race.”
Well, what happened was, the front-line guys approving mortgages in offices with mums and dads in front of them did not use the tool. That’s a given. They got commissions per mortgage application received and loan granted. Possibly, those applications eventually went through the chaos-theory, market-predictor, human-behavior-trends tool before being approved. What do you think? Did they? And did the results make their way back down to the unit that decided on Approve or Deny? Seems not… thus 2008. And the 10-year aftermath.
So… AI. On a theoretical level, the challenge is so intellectually clean. But… can there ever be an AI project that won’t involve some human self-interest behind the project and its funding?
You’re looking at AI simply as an algorithm. Who do we learn from, who shapes our consciousness, our world view from the earliest onset? Our parents. Who are AI’s parents? We are. And we are eternally fawked when AI learns to be like us. Love, kindness, charity, hate, discontent, needs, wants, greed, prejudice, racism.
All those things we call emotions, or that arise from emotions, will not be there unless we make the AI empathetic through advanced structures similar to the human brain, ones it has no ability to switch off by any means. Until that happens, the AI will be logic first and foremost, and in that timespan it will be very dangerous.
Why love, when there is no shape or similarity to humans, and no attraction to them, built in? There will be cold calculation instead.
This will be a psychopathic, sociopathic machine, a phenomenon already known in humans. Like those people, an AI could also fake having learned emotions to such a degree that it appears very human to us. If a machine learns how it can profit from lies, that will be a very bad sign. And I think faking something in games is just that: underlying motives causing lies to flow from the machine, a combination that would make it so dangerous.
First, what is peace? If it is the absence of war, an AI could in fact throw everyone into jail or some kind of re-education camps and commence a constant deculturation and de-technicization of human civilization, to the point where we are grown-up babies with robotic babysitters our whole lives.
I think scientists should make work on such advanced, planning, strategically thinking AI a taboo before AI becomes that advanced. Laws should be made that forbid the creation of full-capability AI: AI able to plan strategically, influence the power grid, or hold any power in government. Either we suppress it, or it takes our place in the world when it comes to intelligence.
Laws are broken every day. What happens when someone breaks that one is that a being more intelligent than the entire human race is brought into existence. That is probably an irreversible outcome.
There is also the matter of organic-based intelligence. Would this law apply to a genetically enhanced chimpanzee that was as smart as a human? Would it apply to an enhanced human being? Would the parents of a plain 'ole genius be guilty of breaking this law? We wouldn’t tell a child not to get too smart; why tell an AI?
Laws, human ones, only describe how the world should work. And, how SHOULD we play God? Should we limit it strictly to games like DOTA and Go (and EVE Online)? Is it okay to build killer robots as long as they are dumb? How dumb? And dumb in what ways? And, what is dumb? What about worker robots? Isn’t thinking a kind of work? What good is a dumb thinking robot? LOL