Fox News Channel host Tucker Carlson sat down with Twitter CEO Elon Musk for a wide-ranging interview this week that dealt with, among other things, the potentially deleterious impact of artificial intelligence on our society.
Beyond warning about the technology's rapidly increasing capabilities, Musk asserted that developers are "training the AI to lie" in pursuit of a larger cultural mission.
The billionaire, who was among the more than 1,100 individuals to sign an open letter calling for a six-month moratorium on AI development, said that the tech is already being programmed to "either comment on some things, not comment on other things, but not to say what the data actually demands that it say."
Musk has been touting the development of TruthGPT, his own alternative to ChatGPT and other AI chatbots, telling Carlson that he is deeply concerned about the current trajectory of the industry.
“It’s being trained to be politically correct, which is simply another way of being untruthful,” he said, adding that “a path to AI dystopia is to train AI to be deceptive.”
When asked whether he believes that “AI could take control and reach a point where you couldn’t turn it off and it would be making the decisions for people,” Musk did not hesitate.
“Absolutely,” he replied.
He went on to call out many in the tech industry, including former Google CEO Larry Page, who he said seems more concerned with building a "digital god" than with protecting society from the potential ills of AI that surpasses human intelligence.
Google's current CEO, Sundar Pichai, has meanwhile doubled down on the company's deepening reliance on AI.
During a recent appearance on “60 Minutes,” he responded to concerns raised by interviewer Scott Pelley by essentially insisting that the AI takeover is inevitable.
“We need to adapt as a society for it,” Pichai said, confirming that what he called “knowledge workers” would soon see their jobs and way of life irrevocably uprooted.
He predicted that the technology “is going to impact every product across every company” and responded to calls for Google to take a leading role in implementing safeguards by claiming that it is “not for a company to decide” how AI is regulated.