Isn’t an “advanced” AI with some kind of sentience less dangerous than an “advanced” AI without it?
If you’re going to tell an AI to solve a problem, and it picks you out as a ‘possible’ threat because you ‘could prevent it from happening’ just by standing in its path in the hallway, without even considering you further, I’d rather it have some sense.
I see an advanced AI without any kind of perception as way more dangerous than one with it, because in the end the AI will complete its task with a precision you could compare to the laws of nature.
It could still remove you in either case, but if it has some sense, it could decide that removing you isn’t necessary for its task. I’m not saying it’s not dangerous, I’m saying it’s less dangerous than one without, imo.
You don’t want your robot, when you order it to get you a cookie from the kitchen, to kill you because you’re a ‘possible’ obstacle that could stand in its way. And it won’t even tell you, because that would reduce the chances of succeeding at its task. Even if you never intended to get in its way, the robot will take you out, because it needs to make sure it executes the task perfectly according to what it was ordered to do: getting you an ordinary cookie from the kitchen. (This concept is no joke, it’s a serious concern.)
Call me a sci-fi hippie or something, but I’m more concerned about robots without any sense than ones with it. Either way, advancements in computing, robotics, and AI will come whether you like it or not.
@discobot fortune don’t even think about it
(also, watch the monkey video again and imagine the soldiers are the AI, the AK-47s stand for technology, and the ape stands for the humans)