With well over 40 million users in the United States alone in 2020, Twitch was bound to see its share of trolls and abuse. Keeping bad actors away from a platform that large is virtually impossible, but moderating them is not, and streamers have been fed up with the lack of meaningful moderation for some time now.

Hence the recently announced Suspicious User Detection tools, which Twitch believes will help channels deal with accounts that attempt to circumvent a ban. The intent is to make it easier for streamers and channel moderators to identify and deal with abusive users who won't stay gone, but is it enough? Well, no. Not even close.

"For the life of me, I cannot understand why they would build a feature to flag problematic accounts and stop there," said Twitch streamer TheNoirEnigma in an email to Lifewire. "It's kinda like a guy dying of thirst and getting muddy water."
Shifting Responsibility
A big problem with Suspicious User Detection is that, as Noir points out, detection is all it really offers. Using machine learning to help identify problem accounts isn't a bad idea, but once those accounts are identified, the onus is still on the streamer and their mod team. These are people who are already extremely busy simply running and managing the stream, and they likely don't have the time to constantly micromanage yet another list.

"Being a streamer is already a job that requires so much of our time—building a community, setting and sticking to a schedule, and making sure the audiences we build aren't filled with toxic people," said Noir. "It would be nice for Twitch to lend us a hand and take direct action on these problematic accounts, since they've already shown us they can ID them."

Another issue with tools that only identify potential problem accounts is that they don't do anything meaningful to protect streamers from abuse. Flagged accounts judged 'likely' to be evading a ban will be muted from public chat, and accounts flagged as 'possible' evaders can also be muted—but so what? While this prevents the possible or likely abuse from being seen by the general chat, it doesn't hide it from the streamer or moderators. It merely tags the (potentially) abusive messages ahead of time.

"Muting the messages, but still showing them to the streamer and the mods, is effectively doing nothing," Noir explained. "The purpose of safeguards is to prevent harm, and that's not what these features Twitch is implementing will do."
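To make the distinction concrete, here is a minimal sketch of the routing behavior the article describes: messages from flagged accounts are withheld from public chat but still reach the streamer and moderator view. This is purely illustrative; Twitch has not published an API for the feature, and the names, labels, and `mute_possible` option below are all assumptions.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical labels mirroring the 'likely' / 'possible' evader
# categories described in the article.
LIKELY = "likely"
POSSIBLE = "possible"

@dataclass
class ChatMessage:
    author: str
    text: str
    evasion_flag: Optional[str] = None  # set by the detection system, if flagged

def route_message(msg: ChatMessage, mute_possible: bool = False) -> dict:
    """Decide where a message is shown, per the behavior the article describes:
    'likely' evaders are always muted from public chat, 'possible' evaders can
    optionally be muted, but everything remains visible to the streamer and mods."""
    hidden_from_public = (
        msg.evasion_flag == LIKELY
        or (msg.evasion_flag == POSSIBLE and mute_possible)
    )
    return {
        "public_chat": not hidden_from_public,
        "streamer_and_mods": True,  # always visible, which is Noir's complaint
        "flag": msg.evasion_flag,
    }

# Example: a message from an account flagged as a 'likely' ban evader
print(route_message(ChatMessage("banned_again_42", "abusive text", LIKELY)))
# -> {'public_chat': False, 'streamer_and_mods': True, 'flag': 'likely'}
```

The sketch makes the criticism plain: the detection label only changes who sees the message, not whether the streamer and their moderators are exposed to it.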
It’s Just Not Enough
The Suspicious User Detection tools also fail to account for the sheer scope of the problem—especially for streamers who have been targeted by hate raids. These organized attacks, in which a mob of users (and sometimes bot accounts) hurls abuse en masse at a target channel, have been an ongoing problem.

"I do not know why we must update a list of banned words for each individual channel. I can't imagine what reason anyone at Twitch could give me for not having certain words, like the 'N' word, banned in all its variations," Noir stated. "The people at Twitch are brilliant; they've put together a platform that has given us all a chance to have our voices heard—I just can't believe this is the best they can do."

Streamers are why Twitch exists, so it would make sense to look out for them. As sensible as that might seem, many streamers—particularly marginalized streamers—are feeling ignored.

"Twitch is a capable company that's well funded and has some of the most brilliant minds at their disposal," said Noir. "Figuring this out shouldn't be something we streamers have to think about."

Still, Noir has some ideas for what Twitch could do to address its abuse and harassment issues more effectively. "I would love to see IP bans—banning accounts simply isn't effective. I would also love to see [dealing with harassment] remain a priority for Twitch, as I don't think that it has been for some time."