Metaverse: another cesspool of toxic content?

The metaverse is filled with bigotry, harmful content and even reports of rape, according to a new report by the corporate accountability group SumOfUs.

The report suggests Meta’s VR platforms (e.g. Horizon Worlds) suffer from the same problems as existing social media sites, now compounded by virtual assault. “Given the failure of Meta to moderate content on its other platforms, it is unsurprising that it is already seriously lagging behind with content moderation on its metaverse platforms,” states the report. “With just 300,000 users, it is remarkable how quickly Horizon Worlds has become a breeding ground for harmful content.”

As noted by the researchers: “In a recent blog post, Meta’s policy chief Nick Clegg said Meta is viewing interactions in the Metaverse to be more ‘ephemeral’, and that decisions around moderating content will be more akin to whether or not to intervene in a heated back and forth in a bar, rather than like the active policing of harmful content on Facebook.” While Meta did introduce a four-foot Personal Boundary around its avatars earlier this year (a feature users can turn off), that does little to shield users, including children, from the barrage of hate speech and conspiracy theories reported on the platform.

The solution? The researchers call for barring Meta from anticompetitive practices, strengthening data privacy laws, regulating Big Tech’s targeted advertising, and demanding more transparency from companies about how their algorithms are designed, operated and refined.