Twitch has pledged to do more to help protect marginalized content creators from harassment and abuse.
The company has come under pressure after streamers from marginalized backgrounds reported being hit by 'hate raids': coordinated attacks in which large groups of people flood a streamer's chat with targeted abuse using dummy accounts.
In the wake of such widespread and flagrant abuse, those affected, along with other Twitch users, have demanded the company do more to keep streamers safe, with many rallying behind the #TwitchDoBetter hashtag.
Responding to those demands on social media, Twitch said it wants to create "an open and ongoing dialogue about creator safety" and conceded it needs to do more to address hate raids, botting, and other forms of harassment targeting marginalized creators.
"Thank you to everyone who shared these difficult experiences. We were able to identify a vulnerability in our proactive filters, and have rolled out an update to close this gap and better detect hate speech in chat. We'll keep updating this to address emerging issues," it wrote.
"We're launching channel-level ban evasion detection and account verification improvements later this year. We're working hard to launch these tools as soon as possible, and we hope they will have a big impact. Our work is never done, and your input is essential as we try to build a safer Twitch."
The streaming company also suggested users learn more about its existing safety tools, and encouraged them to share any feedback they might have via its UserVoice platform.