Slack is using your chat conversations to train its AI models, unless you take a complicated path to opt out.

Slack criticized for using customer data to train AI models

An extract from the company’s privacy principles page reads:

To develop non-generative AI/ML models for features such as emoji and channel recommendations, our systems analyze Customer Data (e.g. messages, content, and files) submitted to Slack as well as Other Information (including usage information) as defined in our Privacy Policy and in your customer agreement.

Another passage reads: “To opt out, please have your org or workspace owners or primary owner contact our Customer Experience team at [email protected].”

The revelation that potentially sensitive information is being used to train Slack’s AI highlights the technology’s darker side: generative AI has already come under fire for failing to cite sources correctly and for producing content that may infringe copyright.

In a controversial move, Slack has been training the models behind its generative AI features on user messages, files, and more by default, without users’ explicit consent.

Instead (per Engadget), those wishing to opt out must go through their organization’s Slack admin, who must email the company to stop the use of their data.

The company does not provide a timeframe for processing such requests.

In response to the uproar, the company published a separate blog post addressing the concerns, stating: “We do not build or train these models in such a way that they could learn, memorize, or be able to reproduce any customer data of any kind.”

Slack confirmed that user data is not shared with third-party LLM providers for training purposes.

More from us

The use of customer data for AI training illustrates the ethical dilemmas of the advanced-AI era. AI can improve products and user experiences, but repurposing personal data without explicit consent raises legitimate privacy and security concerns, and Slack’s decision to train its models on user messages without a straightforward opt-out underscores how much transparent data practices matter in the tech industry.

By forcing users through a convoluted, administrator-mediated process to opt out of AI training, Slack has drawn criticism for undermining users’ privacy rights. Companies that build AI features on customer data owe their users clear communication, robust data protection, and user-friendly opt-out mechanisms; transparency, accountability, and genuine consent are what sustain trust in data-driven AI development.
