Oumuamua - artificially made object! 🥳

No, to simply say “we don’t know” is always an option. It just seems most people are not comfortable with that and would rather go with wild speculation than uncertainty.


Someone should make an “I Want to Believe” X-Files poster with Oumuamua.


Neil deGrasse Tyson: Is This Thing A Spaceship?

The Late Show with Stephen Colbert

Published on Jan 6, 2018

James Ferguson, Physicist, 1757:
“Of all the sciences cultivated by mankind, astronomy is acknowledged to be, and undoubtedly is, the most sublime, the most interesting, and the most useful; for by knowledge derived from this science, not only the bulk of the earth is discovered, but our very faculties are enlarged with the grandeur of the ideas it conveys, our minds exalted above their low contracted prejudices.”

Interestingly enough, we humans are locked into this need for contact, yet nearly every contact we have had with other cultures, races, animals, bugs, or microbes has been bad for the other side. Humans are terrible neighbors who often spell destruction for those who are contacted.

Even the biosphere of our own planet has suffered because of us. Now, before I get hanged for posting such a terrible thing: any advanced civilization would have survived without destroying itself and found better ways to deal with problems, not by force but by approaching things in an open manner, without the baggage humans still cling to. We are about as interesting to them as baboons are to us.
When a civilization reaches a high enough level, it may even choose not to have any contact whatsoever, so as not to disturb its tranquility while exploring things we can’t imagine, because we are fixed at this level at this time.


I disagree. Give baboons choice, technology, and nuclear weapons, and I bet they would be a hell of a show to watch.

Stop worrying. Aliens will never invade like in some silly movie; they will simply wait 100 years for all the humans to die off from incompetence and corruption. Sure, humans can live underground for a while, but we know incompetence, and those people will just eventually die too of some other corruption.

Humans can’t even agree there is a problem, and this at the exact same time that a nuclear reactor is spewing death into one ocean and a barely reported oil leak that is going to be far worse than the Gulf oil disaster is doing the same to another.

The social order that keeps everyone quiet will be the death of everyone. What a positive, realistic future.

Artificial intelligence will save us

Heheh, just like humans saved those Neanderthals? :joy:

Artificial intelligence will go both ways, good and bad, even at the same time, creating a lot of chaos. Even if humanity survives that part of its history, it will be forever changed.

Why do you think that, because Elon Musk said so? If you want to talk to someone like me about topics like this, you need to persuade me better. On what basis are you saying this? On what basis is it probable?

I think AI will help us because it would be a tool to perfect the rationale and accuracy of our decision-making and critical thinking. It will lead to actions that are more perfectly aligned with what is reasonable. For example, an AI doctor would have a god-level ability to reason and infer conclusions or new tests based on a patient’s answers to questions or past test results. No human doctor would be able to compete.

It would be like a human chess player playing a move from a position, then comparing it to what the chess computer says is the best move, and then using that as a judgment of whether their move was optimal.
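As a concrete sketch of that comparison, here is roughly how it could be done with the python-chess library and a local Stockfish binary (both are my own assumptions for illustration, not mentioned above; any UCI engine would work the same way):

```python
import chess
import chess.engine

def compare_to_engine(fen, human_move_uci, engine_path="stockfish", think_time=0.5):
    """Ask the engine for its preferred move and check whether the human move matches it."""
    board = chess.Board(fen)
    with chess.engine.SimpleEngine.popen_uci(engine_path) as engine:
        result = engine.play(board, chess.engine.Limit(time=think_time))
        best_move = result.move
    human_move = chess.Move.from_uci(human_move_uci)
    return best_move, human_move == best_move

if __name__ == "__main__":
    # Starting position; the "human" plays 1.e4 and we compare it to the engine's choice.
    best, matched = compare_to_engine(chess.STARTING_FEN, "e2e4")
    print(f"Engine prefers {best}, human move was optimal: {matched}")
```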

I don’t want to persuade you.

I’m just presenting my own opinion on the matter. That opinion was also built on the basis of many articles, not because Musk said so.
Just a quick course to understand how it actually works: https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html

Yeah, maybe in a remote future. But the trash they sell today looks more like an investor-bait scam. IBM’s Watson, for example, produced pretty dangerous results when tested by actual medical doctors as an aid for finding a good treatment.

All those predictions are based on the assumption that there is almost infinite exponential growth in computing power, which is an immensely stupid thing to assume. And that growth has already been slowing down since that article was released.

It’s not only about computing power but also about how new advancements in technology will make it look and behave, and you can’t predict that, just like the people who built computers with vacuum tubes could not predict how the transistor would revolutionize the industry. There is also the case of S-curves (a rough numeric sketch follows after the list below):

The curve goes through three phases:

  1. Slow growth (the early phase of exponential growth)
  2. Rapid growth (the late, explosive phase of exponential growth)
  3. A leveling off as the particular paradigm matures

(Image: the S-curve of a technology paradigm, showing the three phases.)
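To make the three phases concrete, here is a rough numeric sketch (my own illustration, not from the article) using a simple logistic function, which is the standard textbook shape of such an S-curve; the thresholds and numbers are arbitrary.

```python
import math

def logistic(t, ceiling=100.0, rate=1.0, midpoint=0.0):
    """Logistic (S-curve) value at time t: slow start, rapid middle, flat top."""
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

for t in range(-6, 7, 2):
    value = logistic(t)
    if value < 10:
        phase = "1. slow growth"
    elif value < 90:
        phase = "2. rapid growth"
    else:
        phase = "3. leveling off"
    print(f"t={t:+d}  value={value:6.2f}  {phase}")
```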

Just read the article; it’s very good and written in language everybody can understand.

We as humans are stupid, and you said it yourself: we assume stupid things. Given how people actually behave, AI being here and becoming more common is at risk of being downplayed, until the rise of more advanced AI, the AGI:
https://forums.eveonline.com/uploads/short-url/mBZfKOTWMqxN1LfC1MlWhY7vsop.jpeg
From “The Psychology of Security” by Bruce Schneier. This guy knows what he is talking about.

One may think AGI is far in the future, but when it comes, what are you going to do with it?

So the thread has gone from an asteroid to AI, quite the leap, I say :nerd_face:

The whole Singularity voodoo and Kurzweil’s predictions are all about computing power.

Then why are you hyping articles that try exactly that? I read all about this stuff years back, and while they make certain observations that are interesting, the conclusions they draw are absolutely ridiculous.

We may differ on how soon that future will arrive, OK, but what about the actual dangers? You know that AI already KILLS PEOPLE, shoddily designed and without a human keeping an eye on the road all the time. You know that?

The AI we currently have is extremely limited. I find it very interesting to follow those developments because those systems resemble small parts of human/animal intelligence, and they are actually helping us understand parts of our own brains better.

And it also becomes increasingly clear that intelligence is completely separate from self-awareness and consciousness. There are some things we do because we emerged from an evolutionary process where survival was key, and that is a big part of what motivates us to do things and how we do them. AI is different; it is in most cases a pure problem-solving machine with an artificial purpose.

Those systems will, in my opinion, become useful in a lot of situations in the future. They will be far better at some tasks, much as a normal computer already is. But I really doubt they will become “self-aware” or “awake” in any way, as that is simply not a property of intelligence but of biological life.

But it is actually hard to say how fast AI will revolutionize the world. I’m not sure about cars; it seems a super difficult problem which may still be decades away. In general I think the current hype is investor bait: they seriously oversell those systems to get money, and I would be extremely surprised if we saw something in the near future that is actually game-changing.

You are of course writing about AI, not AGI?

Anyway, it has become apparent that human + technology messes greatly with the human. Take World War I, when people were using advanced weapons but the tactics were still so backwards that millions died, until of course people used tanks, the cavalry of the modern battlefield. But millions were hurt, incapacitated, shell-shocked.

The capacity to change society and culture is very much underestimated, in my opinion.

If humans are dying today while AI can’t keep up with the road, and humans don’t keep up with how AI has to work, then I can only predict it will get worse and worse with time. And what will the human reaction be?

Thousands killed by technology every day now - zero reaction. :pensive:

Oh yes, it will change society and culture. Technology always does, and it already has many times. I’m just very skeptical about the timeframe they claim.

And I don’t see that there is any reason to fear a terminator doomsday scenario. For the reasons I already explained.

I’m not even sure what you mean by that. Thousands died when there was no technology.

My guess is you are referencing the Uber self-driving car accident in Arizona? So a single death becomes “AI kills people”?

Yeah, the systems need more development and were probably introduced onto real roads a bit too early. But it’s nowhere near something where the AIs have developed sentience and are making a specific decision to kill anything. Maybe review the footage of the accident. The woman came out of nowhere; very likely she would have been hit by a human driver as well.

This thread is getting quite tinfoily, though. I’d suggest you learn how machine learning actually works. While the technology definitely can be used for malicious purposes, the threat it poses is mostly a matter of how people decide to use these technologies. So it’s just people killing people, not the technology itself. The systems are nowhere near advanced enough to be the singularity.
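For what it’s worth, the core of most current machine learning is just this: minimize a loss function by following its gradient. A toy sketch (entirely my own illustration, with made-up numbers) of such a “learner” fitting a single weight, with no awareness or intent anywhere:

```python
# Fits y = w*x to a handful of points by gradient descent on squared error.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]   # roughly y = 2x

w = 0.0                      # the single "weight" the machine learns
learning_rate = 0.01

for step in range(200):
    # Gradient of the mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= learning_rate * grad

print(f"learned w = {w:.3f}")  # ends up close to 2.0
```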

This thread was made out of pure tinfoil from the very start.