Scott Manley
Published on May 4, 2019
As for SpaceX, as you know, that was only a test, and it could have been part of a series of tests whose results they planned to integrate into the final result.
To say that all hope for this and that is lost because of an “explosion” is false, when in fact uncovering such failures is exactly what a test is for.
No, it doesn’t mean that Soyuz will stop being used, and yes, it may mean that both systems can be used.
Why limit yourself to one system when two can provide more security and, well, redundancy, among other things?
In fact, even if a subsequent test goes worse, it still wouldn’t mean the end of the testing or of SpaceX.
Of course, SpaceX certainly had a more complicated start at the origin of its space program; however, its success didn’t come without testing, and without actually using those test data by integrating them into the cause and reason of its success.
Additionally, the parties to a military treaty may be mostly military, since the level of understanding needed for peace may not come to most civilians (meaning non-military civilians, as military personnel are civilians too); those civilians may require some external alliance, even if it won’t be as rigorous as a military treaty for peace, allowing them to use another launch system to go to space; without that alliance, they would not be using a ballistic system for launch.
No, they don’t need to have Down’s syndrome to be allowed to work on those projects, and yes, other systems would need more testing, which the military does as part of its work, to arrive at the same result in an alternative way.
Sunandan Verma
Published on Apr 16, 2018
MDx media
Published on Oct 31, 2018
CNBC
Published on Mar 21, 2019
In fact, if an AI-controlled drone goes rogue and starts to kill innocent people, it can be interpreted as an act of war originating from the party responsible for that AI.
That is why the parties responsible for good AI must take care with the work they do, in case it falls into the hands of parties or legal systems that try to turn it against innocent people as a weapon, which is what I am now in court over, and I cannot let them get away with it.
For instance, suppose an AI malfunctions and starts to destroy equipment or food in North Korea by accident.
Whoever owns the AI, or whoever or whatever is responsible for the code that causes the AI to do this damage, will be liable for acts of war, for example if they maliciously try to hack into a database of systems that control military equipment, or commit fraud, such as covering up documents related to a student loan to design information systems for business, and so on.