Well, that didn’t happen, obviously.
I sat down with MIT professor Max Tegmark, the founder and president of FLI, to take stock of what has happened since. Here are highlights of our conversation.
On shifting the Overton window on AI risk: Tegmark told me that in conversations with AI researchers and tech CEOs, it had become clear that there was an enormous amount of anxiety about the existential risk AI poses, but nobody felt they could talk about it openly “for fear of being ridiculed as Luddite scaremongers.” “The key goal of the letter was to mainstream the conversation, to move the Overton window so that people felt safe expressing these concerns,” he says. “Six months later, it’s clear that part was a success.”
But that’s about it: “What’s not great is that all the companies are still going full steam ahead and we still don’t have any meaningful regulation in America. It looks like US policymakers, for all their talk, aren’t going to pass any laws this year that meaningfully rein in the most dangerous stuff.”
Why the government should step in: Tegmark is lobbying for an FDA-style agency that would enforce rules around AI, and for the government to force tech companies to pause AI development. “It’s also clear that [AI leaders like Sam Altman, Demis Hassabis, and Dario Amodei] are very concerned themselves. But they all know they can’t pause alone,” Tegmark says. Pausing alone would be “a disaster for their company, right?” he adds. “They just get outcompeted, and then that CEO will be replaced with somebody who doesn’t want to pause. The only way the pause comes about is if the governments of the world step in and put in place safety standards that force everyone to pause.”
So how about Elon … ? Musk signed the letter calling for a pause, only to set up a new AI company called X.AI to build AI systems that will “understand the true nature of the universe.” (Musk is an advisor to the FLI.) “Obviously, he wants a pause just like a lot of other AI leaders. But as long as there isn’t one, he feels he has to also stay in the game.”
Why he thinks tech CEOs have the goodness of humanity in their hearts: “What makes me think that they really want a good future with AI, not a bad one? I’ve known them for many years. I talk with them regularly. And I can tell even in private conversations—I can sense it.”
Response to critics who say focusing on existential risk distracts from current harms: “It’s crucial that those who care a lot about current problems and those who care about imminent, upcoming harms work together rather than infighting. I have zero criticism of people who focus on current harms. I think it’s great that they’re doing it. I care about these issues very much. If people engage in this kind of infighting, it’s just helping Big Tech divide and conquer all those who want to really rein in Big Tech.”