Instagram puts AI to work burying offensive comments
Earlier this month, Facebook announced that it had begun using a language-understanding AI called DeepText on its platform. Language is complex, and to flag something like hate speech a program must infer intent, which means understanding how humans actually use language. Now Instagram, which is owned by Facebook, has announced that it has begun using DeepText to remove comments that violate its Community Guidelines.
DeepText is currently in limited deployment on Facebook, but immediately after learning about the AI, Instagram's top people wanted to test it on their own platform. They focused first on spam rather than on mean or spiteful comments, asking human workers to wade through a giant set of comments and flag spam by hand. They then fed most of this labeled data into DeepText, which learned to recognize the patterns that distinguish spam comments. Finally, the team tested the resulting model on the portion of the human-labeled data they had held in reserve.
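The workflow described above is a standard holdout evaluation: train on most of the hand-labeled data, then measure accuracy on the reserved portion. Here is a minimal sketch of that pattern in Python. The comments, the tiny word-counting "model," and all names are hypothetical stand-ins; DeepText itself is a deep neural network and is not reproduced here.

```python
import random

# Hypothetical hand-labeled comments: (text, is_spam).
labeled = [
    ("buy followers now cheap", True),
    ("click this link for free stuff", True),
    ("win a free iphone click here", True),
    ("limited offer buy now", True),
    ("love this photo", False),
    ("great shot where was this taken", False),
    ("happy birthday", False),
    ("this made my day", False),
] * 10  # duplicated to simulate a larger labeled set

random.seed(0)
random.shuffle(labeled)

# Hold 25% of the labeled data in reserve for evaluation.
split = int(len(labeled) * 0.75)
train, test = labeled[:split], labeled[split:]

# "Training": count how often each word appears in spam vs. non-spam.
spam_counts, ham_counts = {}, {}
for text, is_spam in train:
    bucket = spam_counts if is_spam else ham_counts
    for word in text.split():
        bucket[word] = bucket.get(word, 0) + 1

def predict(text):
    # Score a comment by whether its words are seen more often in spam.
    score = sum(spam_counts.get(w, 0) - ham_counts.get(w, 0)
                for w in text.split())
    return score > 0

# Evaluate only on the held-out portion, never on the training data.
correct = sum(predict(text) == is_spam for text, is_spam in test)
accuracy = correct / len(test)
print(f"holdout accuracy: {accuracy:.2f}")
```

Testing on data the model never saw is what makes the reported accuracy meaningful; scoring the training set itself would only reward memorization.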
While it's unclear exactly how well DeepText performed in this trial run, Wired reports that Instagram CEO Kevin Systrom was "delighted." Now Instagram has rolled out DeepText-powered comment moderation to the public, and its purpose is twofold: the system will tackle spam comments, and Instagram has also put a filter in place to hide mean or offensive ones. The filter is currently available only in English, but Instagram plans to roll it out to more languages soon.
Time will tell how well DeepText works on Instagram; some hate speech and spam will likely slip through the filters, but will legitimate comments be caught as well? It's possible. Still, hate speech has become so common across social media platforms that if DeepText has a chance of making the situation any better, we'll call that a win.