Why aren't AI engineers and companies concerned about Terminator-like scenarios, even if they are fictional?
Earlier, it was reported that OpenAI signed a deal with the US government to use AI for nuclear safety.
Isn't that quite literally the premise of the Terminator franchise? AI is used in tandem with the nuclear weapons system, it becomes sentient, and it launches nuclear weapons out of self-preservation.
BTW, I'm not some scientifically illiterate person. I've built AI algorithms myself and hold AI certifications from various organizations, as well as AWS machine learning certificates.
The goal of fiction isn't to be hyper-realistic; it's to serve as a point of societal thought. And the thought is this: why is no one who is making these decisions even remotely concerned about a likely singularity event (when AI becomes smarter than humans) and sentience within the next 20 years?
Edit: yes, this was a stupid question. I was writing it while panicking about whether or not I had remembered to book a Valentine's Day restaurant reservation (luckily, I found it).