Published , by Donovan Erskine
As AI technology continues to grow, so do concerns about what happens when it falls into the wrong hands. Those fears were realized in disturbing fashion this past week when sexually explicit AI deepfakes of Taylor Swift went viral, prompting Twitter/X to temporarily suspend searches of the pop star. In a new interview, Microsoft CEO Satya Nadella acknowledged the incident and the greater threat of AI deepfakes, stating that his company needs to act quickly to address the issue.
Satya Nadella was interviewed by CNBC last Friday, where he spoke about the potential harm of artificial intelligence. He called the viral deepfakes of Taylor Swift “alarming and terrible,” and stated that the onus is on Microsoft to make the tech safer. “I think we all benefit when the online world is a safe world. And so I don’t think anyone would want an online world that is completely not safe for both content creators and content consumers. So therefore, I think it behooves us to move fast on this.”
Many have speculated about whether Taylor Swift will take legal action following the release of the deepfake images. If she does, it could set a massive precedent for AI regulation moving forward. The account that spread these images (and presumably created them) managed to do so without moderation from Twitter/X until it received a swarm of reports from Swift’s fans.
With Microsoft’s own CEO admitting the company needs to take steps to combat the harmful use of AI, we’re curious to see what changes are implemented. Shacknews will continue to cover the most relevant stories in technology, including artificial intelligence.