Published by Donovan Erskine
It’s not a controversial stance to say that toxicity is rampant in online gaming. Anyone who’s spent enough time on Call of Duty or any other competitive online game will likely echo that sentiment. This issue is precisely what Intel is looking to tackle with Bleep, a program that will automatically censor offensive language over voice communications, allowing players to adjust what they do and don’t hear. The program’s sliders and categorization of different forms of offensive language became a laughingstock on social media, and now Intel has responded.
Following the uproar that new screenshots of Bleep caused on social media yesterday, Marcus Kennedy, general manager of Intel’s gaming division, spoke with Polygon about the team’s response to the backlash. “The intent of this has always been to put that nuanced control in the hands of the users,” Kennedy said.
That “nuanced” control led to some harsh internet memes, in which users mocked the idea of someone choosing to hear or not hear different forms of offensive language on different days. Our own Chris Jarrard likened it to “constructing a Chipotle burrito.”
Intel did state that what users saw in the presentation is not final and could be altered before release. We’d have to imagine Intel will take a hard second look at the sliders and options present in the program, so as not to provoke another, much larger round of backlash from users.
Toxicity in gaming and harmful behavior online are serious issues, and they shouldn’t be completely forgotten in the name of slam-dunking on Intel. That said, it’s likely that the company will make the necessary adjustments to Bleep before the program releases.