Every time I see someone cheering on A.I. services, I just chuckle.

The airplane was supposed to transport people.
Radioactivity was discovered in the pursuit of medical treatments.
And yet, we found "another utility" for both of them.

Lockheed Martin F-35 Joint Strike Fighter in a simulated nuclear bomb launch


A.I. is far more powerful than airplanes and radioactivity combined, in a world that is digitally unstoppable. In a world with no legal framework that could, at the very least, define sanctions, prohibitions, and punishments for the "bad" uses of A.I., it is genuinely dangerous to use and support these technologies.

A.I. is now capable of drafting lawsuits, passing exams, and writing entire websites: is our society ready to deal with the economic, political, and educational impact this could cause? Are we ready for such a catastrophe?

As "The Godfather of A.I.," who resigned from Google today, said:

“The idea that this stuff could actually get smarter than people — a few people believed that. But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.” - Geoffrey Hinton

Steve Wozniak also has a word on the topic:


A.I. can automate machine gun turrets, scale up brute-force attacks exponentially, and convincingly deepfake individuals just as easily as it can diagnose cancer on an MRI. You might see a dilemma here, given the current legal frameworks we have, but truth be told: first, this technology will be improved in order to fill somebody else's pockets, and only after that will it bring benefits to humankind.

The answer is pretty simple to me: without legal frameworks, humanity is not ready for A.I.
