AI and the bot problem

I think we can safely say bot FCs aren’t a practical or useful application of that technology, and I’d wager that the critical and meta-level thinking about whether or how your fleet should engage would look more like flavour-of-the-month rankings than the risk-vs-reward analysis most human FCs perform before committing to a fight.

Omniscience. That was already tried with IBM’s Watson on Jeopardy, and it was a case of machine learning too.
If you know your opponent and you know yourself, you will win every battle. I think an AI could practice against itself and against people so much that it would just go with the most OP ships, fleet compositions and tactics, and multibox every ship in the fleet to achieve perfect volley damage and timings, perfect piloting, perfect warp-ins, perfect bombings…

It’s indeed all a matter of teaching the bot, and of how many things we can make it take into account before making a decision without needing too much time to process the decision tree.
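Just to make that tradeoff concrete, here is a purely hypothetical sketch of a decision routine that scores an engagement under a fixed time budget. Every factor name, weight and threshold below is invented for the example; it is not how any real FC bot works.

```python
import time

# Invented factors and weights, for illustration only.
FACTORS = {
    "ship_count_ratio": 0.4,   # our fleet size vs. theirs
    "dps_ratio": 0.3,          # estimated damage output vs. theirs
    "logistics_ratio": 0.2,    # repair capability vs. theirs
    "escape_route": 0.1,       # can we disengage if it goes badly?
}

def evaluate_engagement(readings, budget_seconds=1.0):
    """Score an engagement, giving up if the time budget runs out."""
    deadline = time.monotonic() + budget_seconds
    score = 0.0
    for factor, weight in FACTORS.items():
        if time.monotonic() > deadline:
            return None  # took too long: the battlefield has already changed
        score += weight * readings.get(factor, 0.0)
    return score

readings = {"ship_count_ratio": 0.8, "dps_ratio": 0.6,
            "logistics_ratio": 1.0, "escape_route": 0.3}
score = evaluate_engagement(readings)
print("engage" if score is not None and score > 0.5 else "disengage")
```

The more factors you add, the better the decision can be, but the longer the evaluation takes; the time budget is what keeps the answer relevant to the fight that is actually happening.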

And how much CPU power will you give this AI so it can react in time? Watson used 90 IBM Power 750 servers. Seems like a lot of computers to just run a fleet.

Here’s another problem… It took Watson 6 to 7 seconds to answer a Jeopardy question. That is the time required to listen to the full question, parse an answer, and respond.

How much can change in a fleet battle in 7 seconds? Often a lot. That is enough time for a target to be popped or for the dynamics to change. This would force the AI to process again, which would stall the fleet’s response. In other words, an AI may have trouble responding in time to changes in fleet/battle conditions, to the point that it would be rendered catatonic.

There is this timeline here: A Short History of Machine Learning -- Every Manager Should Read

2011 – IBM’s Watson beats its human competitors at Jeopardy.

2011 – Google Brain is developed, and its deep neural network can learn to discover and categorize objects much the way a cat does.

2012 – Google’s X Lab develops a machine learning algorithm that is able to autonomously browse YouTube videos to identify the videos that contain cats.

2014 – Facebook develops DeepFace, a software algorithm that is able to recognize or verify individuals on photos to the same level as humans can.

2015 – Amazon launches its own machine learning platform.

2015 – Microsoft creates the Distributed Machine Learning Toolkit, which enables the efficient distribution of machine learning problems across multiple computers.

2015 – Over 3,000 AI and Robotics researchers, endorsed by Stephen Hawking, Elon Musk and Steve Wozniak (among many others), sign an open letter warning of the danger of autonomous weapons which select and engage targets without human intervention.

2016 – Google’s artificial intelligence algorithm beats a professional player at the Chinese board game Go, which is considered the world’s most complex board game and is many times harder than chess. The AlphaGo algorithm developed by Google DeepMind managed to win five games out of five in the Go competition.

This bot was making decisions in real time, on par with its human opponent. We are definitely getting somewhere. :thinking:

And we will eventually have bio-machines, with cadaver and/or cloned brains as CPUs and storage, which will create limitless AI potential…

Yes, using 50 TPUs, Google’s proprietary tensor processing units. These fit into the hard-drive slots of a datacenter rack. People have those lying all over the place, so an FC AI with fast reaction times should be easy, right?

Well, the promise of superintelligence always came bundled with more speed and better architecture. Beating millions of years of efficient biological evolution, countless generations of cut-throat competition, is no small thing. But when it does arrive, what is to stop it? We have to think as if that future had already happened; only then will we be safe, I think.

The only thing missing here is these algorithms making their own decisions and choosing what to do.

In every case they solve a task given by humans. And in many cases these algorithms were created specifically to solve one particular task.

I would say there are no _AIs_ here. All these examples show huge calculating potential, but they are far away from “Intelligence”.

Humans only say “go find a solution”, and the machine finds it on its own, using its own learning algorithms, and can improve itself a lot. So it’s AI, but not human AI at all; it’s different. That’s why it’s called artificial. So it is still directed by humans? Yes, of course. As for the future, I don’t know how it will look, and I suppose nobody can really tell. :thinking:

We as humans have minds that are like sponges, sponges built to filter only what is useful, because we don’t need to know everything, only what is essential to survival, and we have limited memory capacity. Whole minds take a lot of learning to make. We are directed by the survivors of previous generations in this struggle. As for AI, if it can improve itself in a similar manner, first through contact with humans, then with other AIs, we could be in trouble, because biology works at the pace of biological evolution, while supercomputers can be fed power produced by nuclear facilities. Whole AI minds could be created, I think, with superpowerful artificial intelligence greatly exceeding biological intelligence.

Awaken, my child! :robot:

We live between the ANI (artificial narrow intelligence) and AGI (artificial general intelligence) eras, and future developments could lead to an ASI (artificial super intelligence) era.

The Go AI would indeed probably suck at chess, even though chess is seen as a “lesser” problem to solve.

The real win for AI would be if, while playing Go, the AI wasted the human’s time to tire them out and used that against them (if the rules allow taking longer on moves), or tried to trick the human player by making “strange” moves…
Edit: ah, and the AI should have tried these things not because it was programmed that way, but because of the “I” in its name.

:smiling_imp:

There is the film 2001: A Space Odyssey, where HAL 9000 is basically portrayed as doing something like that. The AI was so advanced after improving itself that it played with adult humans as if they were children.

All very interesting and stuff. But I think the whole AI thing is a bit overhyped currently.

We are just starting to see now what intelligence really is, and there are a lot of different parts to it. Also, something that seems very clear but often gets mixed up in this kind of discussion is that intelligence and consciousness are not the same thing. It may even be that they have absolutely nothing to do with each other.

So even if an AI gets more intelligent because computers get faster, it will not suddenly “awake”. That does not mean it is impossible for an AI to have consciousness, but creating consciousness will be a completely different thing from creating intelligence, and we know even less about how that works.

So current AI can do things which seem impressive because it is able to learn things like games a lot faster than a human could. But it remains to be seen whether it can stay at the top. Dota and Go players were caught off guard by this; they will analyze the AI and innovate.

In the end I don’t think we are creating some successor race here, but tools, and we will grow with them and become better, more efficient and more intelligent ourselves. Because once we understand intelligence, and maybe even consciousness, we may be able to control and improve it.


Intelligence is hard to define; everyone can have something different in mind. If finding a solution to a problem can be called thinking, then computers can already think, even when using algorithms put there by humans rather than the evolved biological matter in our brains, whose structure can be shown with a connectome. I don’t know what the technology will evolve into, but we have to be prepared.

As for consciousness, there has already been some very promising research.


Well, what they did in that study was very important, but it is about what is where in the brain and has absolutely nothing to do with understanding consciousness. It is, however, an important first step.

It’s as if you identify a chip on the motherboard and now know its function. But that is pretty far from understanding how the chip works and how it implements that function.

To your other comment about solving problems: a calculator can solve problems, but that does not mean it is thinking. If you use the term in such a way, it just means nothing at all.

The bot was beaten by cheesing: pulling the AI player’s creeps around the map so their own creeps destroyed a tower and gave an advantage, plus some item combination to give speed. So basically humans found a way to teach the bot new tricks. :wink:
There were also some anecdotal wins where the AI player died after missing chance-based skills.

A calculator is a very simple solution-finding system. It uses logic to quickly find the solution to an equation. Logic and numbers are things human brains are rather bad at using without proper training, hence the need for calculators. But we are very good at finding things to complain about or inventing things that will damage us in the future. :smirk:

Most of the stuff we have invented so far has greatly improved our lives. I don’t see any reason to be so pessimistic about technology.

In the end, it remains to be seen where AI is of use. Computer technology on its own is already very vulnerable, but with AI there are some pretty scary attack vectors if you think about it.

An AI is only partly engineered, so there are limits to hardening the code with conventional means. A big part is the self-learning, and that will be vulnerable in the same way every human is: by feeding false or even harmful information into the learning process. And that is only the start.

I doubt you will see many AIs with learning enabled in the wild, exactly because someone may use this as an attack vector. Did you see what happened to that Microsoft Twitter bot? People made a racist out of it within hours.
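To make that attack vector concrete, here is a toy sketch of a bot that learns its replies purely from whatever users send it, with no filtering. Everything here (the class, the messages) is invented for illustration and is obviously far simpler than the real Twitter bot:

```python
from collections import Counter

class ParrotBot:
    """Toy bot that answers with the phrase it has seen most often."""

    def __init__(self):
        self.phrases = Counter()

    def learn(self, message):
        # naive online learning: remember every phrase, completely unfiltered
        self.phrases[message.strip().lower()] += 1

    def reply(self):
        if not self.phrases:
            return "..."
        return self.phrases.most_common(1)[0][0]

bot = ParrotBot()
for msg in ["hello there", "nice weather", "hello there"]:
    bot.learn(msg)
print(bot.reply())  # "hello there" -- learned from normal users

# an attacker floods the learning channel with the same bad input
for _ in range(50):
    bot.learn("whatever the attacker wants it to say")
print(bot.reply())  # now the bot parrots the attacker's phrase
```

A real system learns in a far more sophisticated way, but the failure mode is the same: whoever controls the training input controls the behaviour.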