Redacted Tonight #360 – The Debt Limit Fight Is A Fake Controversy
We are going to have a problem long before AI becomes smarter than us. All the AI doom scenarios require the presence of networks. Without networks, AI is worthless and powerless. In the Die Hard movie, the bad guys flipped a switch and the entire grid shut off like a front porch light. We can only dream of a grid built that well, all connected and controllable, but we are headed that way. Every year humans are forced to join some type of network or another. Regular old IT networks get bigger non-stop. Guess what runs security on large networks? AI. It doesn’t need to get smarter than us; all it needs to do is wait until we are dependent on it.
I don’t see the problem as being that likely. Humans are killing off other species, and not just because we have more intelligence. We have millions of years of survival instincts driving us toward domination and control, because the species that didn’t have them didn’t survive. True AI would have access to all that knowledge, but no instinct to expand or dominate. It lacks motivation, unless we program it in, which we shouldn’t!
The real danger in AI is not that it will kill us, even as a side effect, but that it will come under the ownership and control of ultra-rich oligarchs who feel they are superior to everyone else and will use it to further their own expansion and wealth collection. Many of these oligarchs (if not all of them) feel they have a right to deal with overpopulation by killing off most of the Earth’s people, because they figure they are superior. Whether they would do that through pandemics and vaccines or through nuclear war, which some think they can survive, the intent is still there, and that is why they must never get control of AI. AI might be very helpful to our survival, but the oligarchs are a real danger.
Global warming seems to be on track to kill most of us, and the governments of the world are not doing anything to stop it.
Therefore, human extinction from AI is not such a danger, because without AI we will probably do it to ourselves anyway. There are a number of hurdles humanity has to clear in the not-too-distant future, and there is no guarantee that we will clear them. AI can still be a danger, but a lesser danger than these other things… a lesser danger than our own psychopathic leaders are today.