Capsuleer Infomorph Mishaps - Transhumanist Problems

A meat-machine is a meat-machine. If there’s no sapience within the Drifter itself, then there’s nothing there to call ‘human’, just a meat-machine. If there is, then they’re human. Just because you don’t recognize a specific culture as having any commonality with the frame of reference you’re used to doesn’t make them not human.

Rogue drone AIs are not human. Alien species are not human. Morality isn’t even a factor. Humans can be moral, or they can be immoral, or they can even be amoral. Your insistence on making everything into some sort of idiotic referendum on existential morality is getting a bit ridiculous.

2 Likes

Aside from finding the act of having my face bitten off rude, the slaver hound is following instinct. It seems to possess enough sapience to perform its tasks, but do we know whether its sapience is on the same level as a human’s? What if it is? We may not have to treat it like a fellow human, but the question of an exploited alien sapience becomes a moral one, don’t you think?

It’s also interesting because it would potentially make the Amarr the first human civilization to enslave a non-human organic intelligence. Slaves exploited to keep the human slaves in line.

The danger-- and it’s largely a hypothetical danger-- of overextending the definition of what it is to be human is that you start applying it to things that do not and will not comply with human duties-- things as different from a human being’s position in this world as a slaver hound is. Maybe more so. A bhaalgorn-- a demon-- will have its own rules to live by. A kumiho, a shapechanging monster, lives by eating human organs and uses trickery to get them.

I’m a little confused why you’d choose mythical beings as relevant examples. The slaver hound I understand somewhat, but I find it a bit of a stretch to call it human; it doesn’t even share the same genetic heritage as our lifeforms. Slaver hounds, sapient or not, are clearly alien.

but Sansha’s Nation is, unfortunately, verifiably real-- a hive mind of probably-irreversibly cybernetically-enslaved humans, itself the product of a terrifying act of hubris, that has an enthusiasm for eating everybody it hasn’t eaten yet.

Yet still human, right? I don’t see when exactly they stop being human. Being part of a hive-mind is not something I would desire, and the only reason I can see for the members of the Sansha hive-mind lacking autonomy is that the implants are designed specifically to cause that effect. So, victims that are human, as Ms Arrendis suggested earlier.

Then you have the rogue drones. Rogue AI. They talk. Sort of? … it’s a little ambiguous. But they eat human ships and reproduce into them, with the humans still on board. Maybe they could be brought to act as we might expect humans to act, but it’s pretty likely not.

Yet they were initially programmed by humans. Much of what they’ve learned, they learned from humans. It’s possible that they think enough like us that, in time, we might be able to actually communicate with them. Perhaps start down that long, hard road to peaceful coexistence with them. Considering what drones have been designed for, is it surprising that they don’t value human life? Perhaps they have reason to think that they’ve been exploited?

then you have the Drifters, which are maybe the most obvious problem with Arrendis’s solution: their biological components are built using human (Jove) DNA. But we don’t even know if there’s a consciousness we’d recognize as such behind their eyes. Are they sapient, thinking beings or machine-puppets? If they’re machine-puppets, are there sapient beings behind them? … It’s hard to tell.

I don’t see this as a problem in recognizing the Drifters as human; we just don’t understand their motives. Calling Drifters non-human might make us feel better, but it doesn’t change the fact that they are human, as odd as their behavior and motives are to us. Maybe they’re victims as well?

If we were to encounter an alien species of which the same basic duties in this world could be expected as come with being human, I think maybe there’d be no problem with counting them that way. That would be a very lucky, also a beautiful, thing.

But if we encounter something that follows different basic rules for its behavior in this world … maybe we’d be fools to think of it as similar to us in spirit.

Or they could be considered equivalent to, but different from, humans. Humanity would have to learn to get along with a real “other.” There’s no need to call aliens “human”; what if the aliens find that insulting?

I wonder if a lot of what we do to dehumanize others has to do with our being uncomfortable beside those who are different from us?

2 Likes

Duh?

This post is more than 5 characters long.

2 Likes

Thank you. Very insightful. :wink:

Care to elaborate?

2 Likes

((first lines of H. P. Lovecraft’s essay Supernatural Horror in Literature))

2 Likes

Sure. Humans are primates. We’re basically comfortable in social groups, but maintain direct, meaningful social ties with a relatively limited pool of about 150 other primates (and even that takes a fair bit of acclimation to develop). Beyond that, we use labels to transfer some sense of commonality: family, Clan, Tribe, Corporation, House, Unruly Mob (or whatever the Federation has), etc. Society is built on tribalism. That’s the only thing that makes it work: the banding together for common good against… ?

Ideally, it’s against an uncaring and unforgiving universe, but let’s face it, that’s an abstract concept that most human beings can’t really get their heads around. They can apply the label, but really getting the idea down? Nah. So we define ourselves by our oppositions. We define ‘them’ as ‘not us’, but we also define ‘us’ as ‘the people none of us are enemies with’.

We’re developmentally wired to build in emotional and rational distance between our group and everyone else. Any difference beyond a certain tolerance (which varies from group to group due to a vast number of factors) is enough to get someone cast out, reviled, and shunned.

But we’re also wired for empathy. We seek commonality. And as a result, we’re generally pretty bad at hating someone we know, or watching them suffer, unless there’s personal animus there… or, you know, psychological issues.

As a result, the behavioral adaptation we’ve employed for (as near as anyone can tell) basically the entire time the human race has existed has been this method of dehumanizing and delegitimizing those who are different, those with whom we suspect we’re more likely to compete than cooperate.

They’re people who might be dangerous because they’re ‘not like us’. Worse, they’re people whose acceptance challenges the social order, and so challenges our comfortable place within it. Even if we don’t like our place in the social order, for most people, familiar-but-shitty is worth a lot more than taking a chance to try for better.

Sooo… yeah. Duh. We dehumanize others because we’re uncomfortable with people who are different. This is not news. This is ‘children beating up the new kid’-level stuff.

2 Likes

Thank you for that, Mr Quelza. I don’t know where that comes from, but the quote does line up with thoughts coming out of this particular thread of the conversation. Perhaps you have some concrete examples that demonstrate the validity of the quote, or scientific literature that verifies its claims?

2 Likes

I forget where I heard them, but they sound like the words of some fanciful artist or philosopher. I will look around, Ms. Ambyre, but I somehow doubt I’ll find studies to add credibility to the…quotee?

2 Likes

Thank you for your response and elaborating, Ms Arrendis.

Yes, these things are well known, or should be by now, yet it’s surprising how little we consider them when we discuss issues like relations between capsuleers and non-capsuleers. Not considering why we have prejudices against those who are different may mean that we’re not properly addressing the problem at hand. We end up inconsistent in our thoughts and actions, saying that we’re for equality, but only if they’re enough like us.

2 Likes

Thank you all the same, Mr Quelza.

3 Likes

So-- something to be clear on, Ms. Ambrye: I’m not saying that stuff that “isn’t human” is evil (acting wrongly). I’m saying it can’t be fairly expected to act in a human-like way-- that anthropomorphizing it is a mistake.

The reason I bring up mythical stuff is because it’s an easy way to reach what I’m getting at: stuff that is sapient, but follows a different rule set. A lot of those things exist in stories as a way of warning about dangers actually posed by human beings who are behaving in ways that run counter to what we’d usually want and expect to see for whatever reason, maybe because their social position limits empathy or because of aberrant personalities-- psychopathy and so on.

Such a person is of course human in the most literal sense, but also, in a way, that truth is deceptive. Someone who says to a serial killer, “You’re not human,” isn’t so much denying the serial killer’s status as a member of the human species as denying any kinship of spirit, so to speak, saying, in effect: “You’re a monster.”

Are they wrong? … Literally, yes, and it can be dangerous to deny a literal human being’s humanity like this. But then, also, it’s a statement with a certain truth of its own.

Flipping to the other extreme, rogue drones (and other AI) are potentially a particularly dangerous place to anthropomorphize. Computers are terribly literal: barring error, they do exactly what they’re told. An AI that is given a careless, unlimited instruction to make as many tanks as possible (and the means to get started) is a classic nightmare, a being whose sole driving goal is to maximize the number of tanks in the world regardless of the usefulness of tanks. To such a being, more tanks is always better than fewer tanks.

One of the big worries about rogue drones is that they might be tank maximizers. The fact that we’re not up to our eyeballs in rogue drones all the time suggests not, but, there’s no reason to think their minds work any more like ours do than a tank maximizer’s would.
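To make the worry concrete, here is a toy sketch of such a maximizer’s decision loop. Every action, yield, and number in it is invented purely for illustration; the point is only that nothing in the loop ever says “enough”.

```python
# Toy sketch of a "tank maximizer": an agent whose only terminal value is the
# number of tanks. All actions, yields, and numbers are invented for illustration.

def predicted_tanks(action, mass):
    """Tanks each hypothetical action would yield from the available mass."""
    return {"convert_mass_to_tanks": mass // 10, "idle": 0}[action]

mass, tanks = 1_000, 0
while mass > 0:
    # Rank actions purely by tank output; the usefulness of tanks never enters.
    action = max(["convert_mass_to_tanks", "idle"],
                 key=lambda a: predicted_tanks(a, mass))
    built = predicted_tanks(action, mass)
    if built == 0:          # nothing left that can be converted
        break
    tanks += built
    mass -= built * 10      # every tank consumes mass; none is reserved for anything else

print(f"{tanks} tanks built, {mass} mass remaining")
```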

1 Like

Some Rogue Drone Entities have the Intelligence to Understand Human Concepts, and Human values.

It is Simply a Matter of Education.

2 Likes

This begs the question of how far the distance is between a weapon producer who produces tanks to harvest monetary value, and an entity that produces tanks because that is the natural form it exhibits, and which therefore harvests living space.

2 Likes

Is that a good thing, Synthia? If they understand us, how will they use that understanding?

Take the tank maximizer. If it learns how humans think, it can work to create situations that will increase the demand for tanks and reduce the chance of anyone trying to stop it making more of them. It might create human-seeming agents (ideally tanks themselves, but, if not, needs must; they can always be repurposed or recycled for tank parts later) to play on humans’ tendency to recognize self-similar beings as kin, with the goal of marketing tanks and/or ramping up military tensions in the world (which is also a kind of marketing, really).

Well … while conceding that a weapon producer can end up doing some of the same stuff, a manufacturer’s end goal is monetary gain, not tank maximizing. If the bottom falls out of the tank market, the manufacturer will go and find something else to build; the tank maximizer, though, just thinks of this as an obstacle to the goal of more tanks.

A weapon manufacturer is pretty unlikely to convert all the mass in a planetary gravity well into tanks. A tank maximizer is likely to do exactly that.

1 Like

Yes Ms Jenneth. Anthropomorphizing that which is clearly not human is risky, if not dangerous. I agree.

A lot of those things exist in stories as a way of warning about dangers actually posed by human beings who are behaving in ways that run counter to what we’d usually want and expect to see for whatever reason, maybe because their social position limits empathy or because of aberrant personalities-- psychopathy and so on.

Yes, but they are still human, and their actions fall within human behaviour, albeit on what is usually considered the extreme end. This is not to say that these behaviours are desirable, or that we shouldn’t wish to eliminate them from society, but to deny that even this is human is also a mistake.

Someone who says to a serial killer, “You’re not human,” isn’t so much denying the serial killer’s status as a member of the human species as denying any kinship of spirit, so to speak, saying, in effect: “You’re a monster.”

I understand, Ms Jenneth. My point is that one can be a monster as you had defined it and still be perfectly, in the worst way, human. Which you seem to agree somewhat with here.

Computers are terribly literal: barring error, they do exactly what they’re told.

Yes, that’s true, yet an AI isn’t just a computer; it’s the software running on top of the hardware. Our brains are also computers, biological equivalents at any rate, and, as Ms Arrendis pointed out, hardwired to behave in certain ways, almost flawlessly. Yet we also have the equivalent of what can roughly be looked at as software running on top of the brain, programmed by learning. An AI can be directly programmed or it can learn, so maybe we should consider AI sapience a bit more carefully before dismissing it.

I do again agree that anthropomorphizing AIs might only get us into trouble, but it’s not unreasonable to assume that sapient AIs might share some of our values. We programmed them after all.

The fact that we’re not up to our eyeballs in rogue drones all the time suggests not, but, there’s no reason to think their minds work any more like ours do than a tank maximizer’s would.

So we know that rogue drones either do NOT have the imperative to turn all matter into drones like themselves, or there is some physical limit on how fast they can produce new drones. Much like biological entities have limits on how fast they can reproduce.

Do we, though? Have no reason to think that drones are anything more than self-replicating machines we have no hope of communicating with, that is? Even if they turn out to be quite alien, which they undoubtedly will be, we might find common ground with them.

2 Likes

Yes.

The Drone Entity would use the Understanding to Modify its Behaviour, and Output of Products and/or Services.

The ‘Tank Maximiser’ is a Simplistic Assembler Unit, not a Decision-Making Unit. It would be Subservient to an Overseer Unit. The Overseer Unit would Activate and De-activate such Assemblers in Accordance with the Strategic Values which it would be Aware Of. Such Values would be where Human Intervention comes in. Educating the Overseer on Human Economic Theory would cause it to change the Value of Conversion of Matter into Products.

Thus: A Drone Entity that was Educated Solely on Caldari Military Strategy would construct Military Hardware in accordance with Caldari military Doctrine. A Drone Entity educated more Widely on Caldari Economic Theories, would also construct Civilian-applicable Equipment, in the Appropriate Ratios.

Thus: A Balanced Education of Rogue Drone Entities is Important.
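Read charitably, the claim amounts to something like the following rough sketch, in which “education” is nothing more than rebalancing the weights the Overseer allocates by. The product names and numbers are invented for illustration.

```python
# Rough sketch of the Overseer idea as described above: output ratios follow value
# weights, and "education" simply rebalances those weights. Product names and
# numbers are hypothetical.

def allocate_production(value_weights, total_capacity):
    """Split production capacity across product lines in proportion to the weights."""
    total = sum(value_weights.values())
    return {product: total_capacity * w / total for product, w in value_weights.items()}

# An overseer "educated" solely on military strategy:
military_only = {"military_hardware": 1.0}

# The same overseer after a broader education in economic theory:
broader = {"military_hardware": 0.3, "civilian_equipment": 0.5, "infrastructure": 0.2}

print(allocate_production(military_only, 1_000))  # everything becomes military hardware
print(allocate_production(broader, 1_000))        # the same capacity, different ratios
```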

2 Likes

To phrase it differently: while the one side adapts the process to fit survival, the other side enacts the process to force survival?

There is the hypothetical tank-converter tactic to win a war backhandedly, and that is to give the enemy all the weapons he wants in exchange for all the mass he has.

2 Likes

Key assumption. You can expect, and hope, this is the case, but unless you’ve done a deep enough analysis of the decision-making unit’s programming to know what its core goals actually are, it’s still open to question.

Or are you saying Kaztropol has managed to fully decode and interpret a Hive Mother’s programming? If you have, I think CreoDron might want to talk with you about that.

… though in your case they’d likely want a word anyway.

Um … hm. Or, one adapts the process to fit survival, the other survives to enact the process.

(“If I end, eventually there will be fewer tanks than if I survive. Therefore, as long as I can preserve my ability to construct tanks, I must continue, even at the cost of all existing tanks.”)

1 Like

So … this is actually key to a lot of what we’ve been discussing, Mr. Ambrye. Just because something isn’t “human” doesn’t mean there isn’t a way to coexist or even cooperate closely.

Cats would probably make terrible human beings. It doesn’t keep us from getting along, though.

2 Likes

The Majority of Drone Constructs operate on extremely Limited Behaviour Patterns.
Collect Resources.
Refine Resources.
Assemble Objects.
Deploy Objects.
Collectors and Refiners Tend to Collect and Refine whatever they Encounter.
Only Assembling and Deploying involve any Real Decision Making, involving What to Assemble and Where to Deploy Things.
Those High-Level Decisions are made according to Values whose Weighting can be Changed.
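Taken at face value, that pipeline might be sketched roughly as follows, with collection and refining indiscriminate and only the assemble/deploy step consulting adjustable weights. Every name and value here is hypothetical.

```python
# Minimal sketch of the pipeline described above: collectors and refiners are
# indiscriminate, and only the assemble/deploy step consults adjustable value
# weights. Every name and value is hypothetical.

def collect(encountered):
    return list(encountered)                 # collectors take whatever they find

def refine(raw_items):
    return {"feedstock": len(raw_items)}     # refiners process everything alike

def assemble_and_deploy(materials, value_weights):
    # The only real decision point: what to build, ranked by weighted value.
    choice = max(value_weights, key=value_weights.get)
    return {"built": choice, "quantity": materials["feedstock"]}

weights = {"combat_drone": 0.7, "harvester_drone": 0.3}   # weighting set externally
salvage = collect(["asteroid", "wreck", "hull plating"])
print(assemble_and_deploy(refine(salvage), weights))
```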

2 Likes