Slack users horrified to discover messages used for “AI” training

by Thom Holwerda from OSnews

After launching Slack AI in February, Slack appears to be digging its heels in, defending its vague policy that by default sucks up customers' data - including messages, content, and files - to train Slack's global AI models.

Ashley Belanger at Ars Technica

I've never used Slack and don't intend to ever start, but the outcry about this reached far beyond Slack and its own communities. It's been all over various forums and social media, and I'm glad Ars dove into it to collect all the various conflicting statements, policies, and blog posts Slack has made about its “AI” policies. However, even after reading Ars' article and the various articles about this at other outlets, I still have no idea what, exactly, Slack is or is not using to train its “AI” models.

I know a lot of people here think I am by definition against all forms of what companies are currently calling “AI”, but this is really not the case. I think there are countless areas where these technologies can make meaningful contributions, and a great example I encountered recently is the 4X strategy game Stellaris, one of my favourite games. The game recently got a big update called The Machine Age, which focuses on changing and improving the gameplay when you opt to play as cybernetically enhanced or outright robotic races.

As per Steam's new rules regarding the use of AI in games, the Steam page included the following clarification about the use of “AI”:

We employ generative AI technologies during the creation of some assets. Typically this involves the ideation of content and visual reference material. These elements represent a minor component of the overall development. AI has been used to generate voices for an AI antagonist and a player advisor.

The Machine Age Steam page

The game's director explained that during the very early ideation phase, when someone like him, who isn't a creative person, gets an idea, they might generate a piece of “AI” art and put it up on an ideation wall with tons of other assets, just to get the point across. Several rounds of artists and developers then mould and shape some of those ideas into a final product. None of the early “AI” content makes it into the game. Similarly, while the game includes AI-generated voices for an AI antagonist and a player advisor, the voice actors whose work was willingly used to generate the lines in the game are receiving royalties for each of those lines.

I have no issues whatsoever with this, because here it's clear everyone involved is doing so in an informed manner and entirely willingly. Everything is above board, consent is freely given, and everybody knows what's going on. This is a great example of ethical “AI” use: tools that help people make a product more easily - without stealing other people's work or violating various licenses in the process.

What Slack is doing here - and what Copilot, OpenAI, and the various other tools do - is the exact opposite of this. Consent is only sought when the parties involved are big and powerful enough to cause problems, and while they claim “AI” is not ripping anyone off, they also claim “AI” can't work without taking other people's work. Instead of being open and transparent about what they do, they hide behind magical algorithms and shroud the origins of their “AI” training data in mystery.

If you're using Slack - and odds are you are - I would strongly suggest urging your boss to opt your organisation out of Slack's “AI” data theft operation. You have no idea how much private information and corporate data is being exposed by these Salesforce clowns.
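For what it's worth, opting out isn't a simple toggle in the admin settings: according to Ars' reporting, a workspace owner has to email feedback@slack.com with the subject line “Slack Global model opt-out request” to have their organisation's data excluded, and the exact procedure may well have changed since, so check Slack's current privacy principles page.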
