Sentropy launches tool for people to protect themselves from social media abuse, starting with Twitter

by Ingrid Lunden

Last year, in the midst of a particularly spiky U.S. presidential election campaign, a startup called Sentropy emerged from stealth with an AI-based platform aimed at social media companies and others that bring people together for online conversation.

Sentropy had built a set of algorithms, using natural language processing and machine learning, to help these platforms detect when abusive language, harassing tendencies and other harmful content were coming around the bend, and to act on those situations before they became an issue.

Today, the startup is unveiling a new product, now aimed at consumers.

Using the same technology that it originally built for its enterprise platform, Sentropy Protect is a free consumer product that detects harmful content in a person's social media feed and, by way of a dashboard, gives that person better control over how the content and the people producing it are handled.

The product starts with Twitter, and the plan is to add more social feeds over time, based on which services provide APIs that let Sentropy integrate with them (not all do).

Sentropy CEO John Redgrave said the consumer product launch is not a pivot but an expansion of what the company is building.

The idea is that Sentropy will continue to work with enterprise customers - its two products in that department are called Sentropy Detect, which provides API-based access to its abuse detection technologies; and Sentropy Defend, a browser-based interface that enables end-to-end moderation workflows for moderators.
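For context, API-based moderation products of the kind the article describes are typically called with a piece of user-generated text and return abuse classifications the caller can act on. The sketch below is purely illustrative: the endpoint URL, request fields and response shape are hypothetical assumptions, not Sentropy's documented API.

```python
import requests

# Hypothetical endpoint and credential - placeholders for illustration only,
# not Sentropy's actual API.
API_URL = "https://api.example.com/v1/classify"
API_KEY = "YOUR_API_KEY"


def classify_message(text: str) -> dict:
    """Send a message to a hypothetical abuse-detection endpoint and return its labels."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"text": text},
        timeout=10,
    )
    response.raise_for_status()
    # Assumed response shape for this sketch:
    # {"labels": {"physical_violence": 0.92, "identity_attack": 0.10, ...}}
    return response.json()


if __name__ == "__main__":
    print(classify_message("example message to check"))
```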

But at the same time, the Protect consumer product will give people an added option - whether or not Sentropy is being used by a particular platform - to take the reins and have more hands-on control over their harassment graph, as it were.

"We always had deep conviction of going after the enterprise as a start, but Sentropy is about more than that," he said. "Cyber safety has to have both enterprise and consumer components."

It's refreshing to hear about startups that build services potentially affecting millions of people while also being cognizant that individuals themselves want to keep an element of self-determination in the equation.

It's not just, "Well, it's your choice if you use service X or not," but a grasp of the concept that when someone chooses to use a service, especially a popular one, there should be, and can be, more than just the hope that the platform will always look out for that user's best interests - there should also be tools to help the user do that themselves.

And it's not a problem that is going away, and that goes not just for the hottest platforms today, which are continuing to look for ways to handle complex content, but also for emerging platforms.

The recent popularity of Clubhouse, for example, highlights not just new frontiers in social platforms, but also how those new models - with Clubhouse built around "rooms" for conversations and a reliance on audio rather than written text - are handling issues of harassment and abuse. Some striking examples so far point to a problem that definitely needs addressing before it grows any bigger.

Protect is free to use today, and Redgrave said that Sentropy is still working out how, and if, it will charge for it. One likely scenario is that Protect might come in freemium tiers: a free, limited product for individuals, "pro" services with enhanced tools, and perhaps a tier for companies that manage accounts on behalf of one or several high-profile individuals.

Of course, services like Twitter, Reddit, Facebook, YouTube and many others have made a big point over the years - and especially recently - of putting in place more rules, moderators and automated algorithms to help identify and stop abusive content in its tracks, and to help users report and stop content before it gets to them.

But if you are one of the people who gets targeted regularly, or even occasionally, you know that this is often not enough. Sentropy Protect seems to be built with those users in mind, too.

Indeed, Redgrave said that even though the company had consumers on its roadmap all along, its strategy was accelerated after the launch of its enterprise product last year in June.

"We started getting pinged by people saying, 'I get abused online. How can I get access to your technology?'" He recalled that the company realized that the problem was at once bigger and more granular than Sentropy could fix simply by working its way through a list of companies, hoping to win them over as customers, and then successfully integrating its product.

"We had a hard decision to make then," he recalled. "Do we spend 100% of our time focused on enterprises, or do we take a portion of our team and start to build out something for consumers, too?" The company decided to take the latter route.

On the enterprise side, Sentropy is continuing to work with social networks and other kinds of companies that host interactions between people - for example, message boards connected to gaming experiences or dating apps. It's not publicly disclosing any customer names at the moment, but Redgrave describes them as primarily smaller, fast-growing businesses, as opposed to larger and more legacy platforms.

Sentropy's VP of product, Dev Bala - who has previously been an academic, and also worked at Facebook, Google and Microsoft - explained that bigger, legacy platforms are not outside of Sentropy's remit. But more often than not, they are working on bigger trust and safety strategies and have small armies of engineers in-house working on building products.

While larger social networks do bring in third-party technology for certain aspects of their services, those deals typically take longer to close, even in urgent areas such as tackling online abuse.

"I think abuse and harassment are rapidly evolving to be an existential challenge for the likes of Facebook, Reddit, YouTube and the rest," Bala said. "These companies will have a 10,000-person organization thinking just about trust and safety, and the world is seeing the ills of not doing that. What's not as obvious to people on the outside is that they are also taking a portfolio approach, with armies of moderators and a portfolio of technology. Not all is built in-house.

"We believe there is value from Sentropy for these bigger guys but also know there are a lot of optics around companies using products like ours. So we see the opportunities of going earlier, in cases where the company in question is not a Facebook, and having a less sophisticated approach."

As a sign of the changing tides and sentiment in the market, tackling abuse and harmful content is starting to be taken seriously as a business concept - and Sentropy is not the only company pursuing the opportunity.

Two other startups - one called Spectrum Labs, and another called L1ght - have also built AI-based tools aimed at the various platforms where conversations happen, to help those platforms detect and better moderate instances of toxicity, harassment and abuse.

Spectrum Labs raises $10M for its AI-based platform to combat online toxicity

Another, Block Party, is also looking to work across different social platforms to give users more control over how toxicity touches them, and has, like Sentropy, focused first on Twitter.

With Protect, after content is detected and flagged, users can set up wider, permanent blocks against specific users (who can also be muted through Protect) or themes, manage filtered words, and monitor content that gets automatically flagged as potentially abusive, in case they want to override the flags and create "trusted" users. Tweets snagged by Sentropy get labelled by the type of abuse (for example, threats of physical violence, sexual aggression or identity attacks).

Since it's based on a machine-learning platform, Sentropy then takes all of those signals, including the tweets that have been flagged, and uses them to teach Protect to identify future content along the same lines. The platform also continuously monitors chatter on other platforms, and that too feeds into what it looks for and moderates.
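To make that workflow concrete, here is a minimal sketch of the kind of client-side filtering logic the article describes - blocked and muted users, "trusted" overrides, filtered words, and per-category flags. The data structures, category names and threshold are assumptions for illustration, not Sentropy's implementation.

```python
from dataclasses import dataclass, field

# Abuse categories named in the article; the threshold and shapes are illustrative assumptions.
CATEGORIES = ("physical_violence", "sexual_aggression", "identity_attack")
FLAG_THRESHOLD = 0.8


@dataclass
class Preferences:
    blocked: set = field(default_factory=set)         # users whose tweets are always hidden
    muted: set = field(default_factory=set)           # users hidden without a hard block
    trusted: set = field(default_factory=set)         # users whose tweets are never flagged
    filtered_words: set = field(default_factory=set)  # user-managed word filters


def moderate(tweet: dict, scores: dict, prefs: Preferences) -> str:
    """Return an action for a tweet given abuse scores and the user's preferences."""
    author = tweet["author"]
    if author in prefs.blocked or author in prefs.muted:
        return "hide"
    if author in prefs.trusted:
        return "show"  # trusted users override automatic flags
    if any(word in tweet["text"].lower() for word in prefs.filtered_words):
        return "hide"
    flagged = [c for c in CATEGORIES if scores.get(c, 0.0) >= FLAG_THRESHOLD]
    return f"flag:{','.join(flagged)}" if flagged else "show"


# Example usage with made-up data.
prefs = Preferences(trusted={"friend42"}, filtered_words={"slur"})
tweet = {"author": "stranger1", "text": "You should watch your back."}
print(moderate(tweet, {"physical_violence": 0.91}, prefs))  # -> "flag:physical_violence"
```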

If you're familiar with Twitter's own abuse protection, you'll know that all this takes the situation several steps further than the controls Twitter itself provides.

This is still a version one, though. Right now, you don't see your full timeline through Protect, so in practice you toggle between Protect and whatever Twitter client you are using. Some might find that onerous, although Bala noted that a sign of Sentropy's success will be that people let it work in the background and don't feel the need to constantly check in.

Redgrave also noted that the service is still exploring how to add in other features, such as the ability to also filter direct messages.

Tracy Chou launches Block Party to combat online harassment and abuse
