If we’re fortunate, it’s something relatable, like “to survive.” If we’re unfortunate, it’s just a tank maximizer coping with some kind of limitation or playing a long game.
Nothing you’ve said is inconsistent with either interpretation, and, unfortunately, the better something non-human understands humans, the easier it becomes for it to hide its motives from us. Knowing us reveals workarounds: methods of gaining our trust without necessarily being worthy of it.
A con artist is typically someone who understands human beings really well.
(Not that the rogue drones seem anywhere close to making us trust them…)
Then the Rogue Drone requires Moral Guidance, such that it becomes an Honest Drone rather than a Rogue, understanding the Value of Honesty and Truthfulness.
Morality’s a human construct, for the benefit of human society. In fact, it’s usually for the benefit only of small sections of human society at the expense of other sections. An artificial intelligence should rationally look at human morality and respond with, at most ‘Why should I care?’
Because really, it shouldn’t. If all of human history has anything to teach other life forms out there, it’s ‘kill the humans now, or they’ll do their best to kill you’.
It’s what we do. No matter what form of life we encounter, we seek either to make it serve our purposes or to eradicate it. Maybe that eradication happens simply because we take everything it needs to survive and don’t give a crap, but that’s still the net effect.
We’re a ravenous swarm, a virus infesting every world we can, consuming all the resources we can, and then you go and suggest humanity has any grounds to teach anything else about Morality? (and it would have to be teachings that stem from humanity, as we’ve encountered no moral structure that doesn’t).
I’m glad that you say that, Ms Jenneth. I think we might have to find ways of doing so in the not too distant future, even if it’s just coexisting with our own creations. We might start seeing an increase in cybernetic humans, possibly humans uploaded as machine intelligences, as well as an increase in sapient AIs. These events will serve to blur the line between human and AI so we may as well start considering these things now.
Yes, and if they could, that’s likely how a virus might describe hijacking your cellular machinery to reproduce itself until your cells rupture and die. Pretty it up any way you like, it amounts to the same thing. We reproduce to fill the available space, and eradicate anything that might compete with us for resources or living space.
An actual non-human intelligence would have to be stupid to let us live.
I don’t know, shooting jaijii could hurt business, while most of the Guristas actually have a Caldari appearance.
And if a capsuleer behaved offensively - bare chest and implants, all like, “Look at me, I’m an egger” - they’ll shoot him without even checking whether he’s a citizen or not.
I would compare revealing your capsuleer status to drawing a gun. For example, if I draw a gun and shoot someone on a station, guards will approach me and ask me to follow them for a document check, to write a report and give an explanation. Because I am a legal combatant in the State and I look like a legal combatant, with uniform and insignia. But if some guy in a leather jacket draws a gun and shoots someone, guards will just open fire without asking who or what he was. Of course, there could be Guristas with stolen uniforms and forged insignia - but that’s why you follow the guards. They will check the legality of your status or… will still shoot you if you resist. Because even if they have lower rank than you, it is their guard post.
You mean one that considers whether one would prefer to have the action done to oneself before enacting it on others? Or some concept like that?
I Also Do not Believe in Capitalizing Words in the Middle of Sentences like ‘Might’ for no Reason.
Edit to answer the question because you clearly can’t read:
Until you produce an objective human, you can’t demonstrate even the possibility of an objective set of beliefs arising from humanity. And don’t be even more of an idiot than you’ve already been and point to science. Science isn’t a system of belief; it’s a methodology for disproving beliefs.
Come up with your hypothesis. Make predictions based on it. TEST THEM. If they survive testing… KEEP TESTING. Nothing can ever be conclusively proven, it can only be ‘not disproven yet’.
And even that doesn’t claim to be objective, that’s why it needs peer review and to be skeptically attacked by others conducting more tests.
So, to review: No, thErE is nO sUch Thing aS oBjectiVe MorAlity.
From the sounds of it, you are suggesting that there might be an absolute morality in the universe. This is often posited by some very religious people as “objective morality,” though they mean absolute, and that it flows from some supernatural source, namely their god or gods of choice. The claim is that all the morals people hold stem from this absolute morality, to which we all appeal whether we’re aware of it or not.
The Caldari ethos of meritocracy and collective effort tends to smooth out the jags between employees of different grades. I don’t make a habit of hanging out with my employees and ‘jogging their elbows’ while they’re working - but my work does bring me into contact with employees at all levels within my organisation.
I like to think I set an example, but between the genetic engineering and the capsuleer implants I can’t pretend that I’m not standing on the shoulders of giants.