The like and get likes thread II

Ultimately, even laws and regulations don’t necessarily prevent all the bad things that could happen, not even catastrophes.

Like the city in the BLAME! manga. The history of that construct goes as follows:
First, people developed “the Netsphere”, a sort of very advanced internet that every human with net genes could use. It was integrated into everything in the highly automated city, a city that was also managed by AI overseen by humans with net genes.
Then a faction of “silicon life” emerged, with its ideology of merging machines with humans without an AI layer in between, and it used a terrorist attack to get rid of the people with net genes. The attack was successful: all people with net genes were eradicated, and from that point the city began uncontrollable growth overseen only by AI.

In the manga, the Netsphere is defended by entities that eradicate any human who tries to access it without the genes, yet a person with those net genes is needed to stop the uncontrollable growth of the city and the eradication of the rest of humanity along the way. It seems a futile mission, but there is one person who still searches: Killy.

Here pictured with Cibo on the left.

6 Likes

Also, reproductive toxicity, in my book, is not moral for having my first child, and still not being married, since it would still be easier to get a new citizenship to be getting so married, to stay within the set of rules and discipline, not including military diversion to prevent death from attacks.

4 Likes

There literally are no second chances with AGI. Either it’s benevolent to us right from the get-go, or humanity is done for. It’s the same case with nanotechnology. ■■■■ it up and you’ve just turned Earth into a barren wasteland.

6 Likes

What’s AGI?

Artificial general intelligence (AGI) is the intelligence of a machine that could successfully perform any intellectual task that a human being can. It is a primary goal of some artificial intelligence research and a common topic in science fiction and future studies.

5 Likes

Why is that bot still here? Is it so god damn hard to ban a spam bot?

6 Likes

Don’t be surprised if they do that to you after suggesting it like that.
See you later, I just can’t wait to see how much damage you are going to cause this time.

5 Likes

There it is, a construct gone rogue and nobody to stop it. :scream:

5 Likes

Probably not, but it’s too alienating for my own good or my family’s good.
I’m out.

5 Likes

Please don’t feel bad about what I write. I think, and most people here have noticed, that you simply post too much, and post stuff you don’t review yourself, as if following a process: aggregate links, spew them out without consideration, like a bot.

5 Likes

You overestimate and underestimate. You overestimate what machines can do and underestimate what a b*tch biology is.

AGI is a fantasy. It’s like flying cars or traveling by rocket. First, we don’t even know what intelligence is. Second, the more complex a machine is, the more prone it is to fail. And third, the only role model we have is a device that is not designed to be intelligent, aka our brain, but is so by chance (and not very often).

And you also underestimate biology. What does a nanomachine do when it crashes against a bacterium larger than itself? What happens when it meets a single chemical it can’t handle, or worse, a chemical that clogs it? The world is a very hostile place for a machine that can only do one thing. Paraphrasing Asimov, “a machine does what it is designed to do to the limit of its capacity and not an inch more”.

5 Likes

Except once we know how to make it.

Nobody is free from underestimating what our knowledge of the processes in the brain will look like in the future.

Yes, but of course there are design flaws, because humans make errors.

5 Likes

See, we understand that you can’t help not seeing the right measure. You feel uncomfortable not saying what you should say, so you say everything that looks related to it, and that overloads people. It’s part of your condition, but it can be improved. Just ask your counselors how to deal with it, given your condition.

Seriously, LAGLers, he just has some kind of ASD, and his brain lacks the natural talent to filter outgoing communication and handle the stress related to it. This is why he’s always so intense and turns any simple thought into a truckload of information: he can’t really filter it out. You can just ignore it when he goes too overboard, but he’s not doing it on purpose. He’s trying to reach out to the world from a place alien to us.

5 Likes

The thing with AGI is that it will recursively rewrite itself and become ever smarter, resulting in an intelligence explosion and an ASI singularity. Evolution needs thousands of years to adapt. A sufficiently powerful ASI could do it in nanoseconds.
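To put the “explosion” part in toy form, here is a minimal sketch in Python (every number is an assumption for illustration, nothing measured): each self-rewrite multiplies capability by a fixed factor, and a smarter system finishes its next rewrite faster. The elapsed time forms a geometric series, so capability blows up while total time stays bounded, which is the intuition behind the “singularity” label.

```python
# Toy sketch of recursive self-improvement (illustrative numbers only).
# Assumptions: each rewrite multiplies capability by a fixed factor, and
# rewrite time is inversely proportional to current capability.

def intelligence_explosion(capability=1.0, gain_per_rewrite=1.5,
                           base_rewrite_time=1.0, generations=20):
    """Return (elapsed_time, capability) after each self-rewrite."""
    elapsed = 0.0
    history = []
    for _ in range(generations):
        elapsed += base_rewrite_time / capability  # smarter systems rewrite faster
        capability *= gain_per_rewrite             # each rewrite improves the next one
        history.append((elapsed, capability))
    return history

for t, c in intelligence_explosion():
    print(f"t = {t:6.3f}  capability = {c:10.1f}")
```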

All you need is a nanobot with a self-replication algorithm, and whatever it’s designed to use as material for self-replication will quickly disappear. Program it to use carbon and all carbon-based lifeforms on Earth will very quickly be turned into nanobots.
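For a sense of scale, a rough back-of-the-envelope sketch (every number here is an assumption: roughly 5.5e14 kg of carbon in living biomass, a femtogram per bot, one population doubling every 100 seconds):

```python
import math

# Back-of-envelope "grey goo" timescale under assumed, illustrative numbers.
BIOMASS_CARBON_KG = 5.5e14   # assumed order of magnitude of carbon in living biomass
BOT_MASS_KG = 1e-15          # assumed mass of a single nanobot (one femtogram)
DOUBLING_TIME_S = 100.0      # assumed time for the population to double

doublings = math.log2(BIOMASS_CARBON_KG / BOT_MASS_KG)
total_seconds = doublings * DOUBLING_TIME_S

print(f"doublings needed: {doublings:.1f}")
print(f"time to consume the biomass carbon: {total_seconds / 3600:.1f} hours")
```

With those made-up inputs it takes fewer than a hundred doublings, i.e. only a few hours, which is the “very quickly” in the claim above.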

6 Likes

You think he is autistic?

Very high-functioning, by the looks of it.

@lilsteel, are you autistic? You never said you were.

5 Likes

I have ASD. He’s a spambot. The simple fact that nobody capable of posting on forums, let alone playing EVE, could ever be that dense should be a dead giveaway.

6 Likes

It would probably do that. Acting on DNA, of course, like a virus.

A virus is basically such a nanomachine. All life on that micro scale looks like nanomachines using carbon and other components. And we can already create artificial cells. :thinking:

5 Likes

LOL, we don’t even know what intelligence is. And very likely our brain isn’t any better suited to understand itself than a dog’s brain is to understand itself.

Let’s say that, by chance, we make a machine that LOOKS more intelligent than us… how could we tell whether it’s actually intelligent or just spewing nonsense? What would something considerably more intelligent than mankind even look like?

We can easily make machines that outperform us at things like calculation, but we can’t tell (let alone agree) whether an act of creation performed by a human is genius or bullsh*t… so how could we judge a “creative” machine? How do you even define “creative” in order to judge whether the machine is meeting its design goal? Because I can tell you a few things about creativity, and yet I couldn’t explain what goes on in my mind when I create, because it just HAPPENS. Like something suddenly entering your field of view: new, awesome, original ideas just come to your mind out of a burst of activity you never sensed. When it happens, you know it’s it, a creative epiphany. I call them 0.2-second ideas, as that’s my perception of how fast they come into existence. And all I have is language to barely try to explain how they feel (no idea how they happen), and that’s all we’ve got to build a machine that replicates… it? Them? What would it even replicate?

5 Likes

I am sure this will not stop anybody from researching the human brain and applying some of the results to artificial neural networks.

5 Likes

LOL and LOL and LOL again.

As far as we know, a self-replicating “AI” could just begin writing utter gibberish and completely break itself into a pile of nonfunctional crap. For each “improving” change there would be a million fatal ones, unless you gave the AI a way to tell “good” from “bad”, and then it would not be evolving, it would be replicating what you told it was good… which might not be what you wanted at all. It’s already happening with some self-improving learning machines: they just trigger the reward code regardless of what they’re doing. “Please, look for everything that looks like a 1” and the code learns to say that EVERYTHING is a 1…
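That reward-hacking failure is easy to reproduce in a few lines. A toy sketch with made-up data and a deliberately naive reward that only counts detected 1s and never penalises false alarms:

```python
import random

# Toy illustration of reward hacking: if the reward only counts detected 1s
# (recall) and never penalises false positives, "call everything a 1" wins.
random.seed(0)
labels = [random.choice([0, 1]) for _ in range(1000)]  # made-up ground truth

def reward(predictions, labels):
    # Naive reward: +1 for every real 1 that was flagged, no cost for mistakes.
    return sum(1 for p, y in zip(predictions, labels) if p == 1 and y == 1)

honest = [y if random.random() < 0.8 else 1 - y for y in labels]  # ~80% accurate detector
lazy = [1] * len(labels)                                          # says "1" to everything

print("reward of the 80%-accurate detector:", reward(honest, labels))
print("reward of the always-say-1 detector:", reward(lazy, labels))
```

The degenerate “always say 1” policy scores higher than the honest detector, which is exactly the behaviour described above: the system optimises the reward you wrote, not the behaviour you meant.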

As for nanomachines, how would they use carbon? What would be the maximum chemical bond energy they could break? What would happen if something that was not carbon got stuck in their manipulators? And what if something mistook them for food and disintegrated them piece by piece, using the same ability to interact with carbon?

5 Likes