
Data Privacy in Slack AI

Sponsored Project

Team Project (8) | 6 weeks | UX Designer/Researcher

Educational environments are increasingly integrating AI tools, yet there is a significant gap in understanding and safeguarding data privacy, especially for students.

About
Introduction

As generative AI becomes increasingly embedded in everyday tools, Salesforce AI remains committed to delivering secure and trusted solutions that prioritize data privacy. One key area of focus is Slack, a widely used platform in educational settings, particularly among students. While Slack’s AI features aim to enhance communication by summarizing conversations and streamlining interactions, students often face unique vulnerabilities when it comes to data privacy. Many educational institutions lack clear guidelines for AI use, leaving students exposed to potential misuse of personal and sensitive data. This gap highlights the need for AI designs that not only enhance usability but also protect user data and ensure compliance with privacy standards.


Guided by Salesforce’s principles of safety, transparency, and relevance, this project focuses on enhancing Slack AI for educational use, addressing privacy concerns specific to students. Using speculative design, we identified potential vulnerabilities and developed solutions that empower students with more control over their data.  This project aims to create a safer, more reliable AI experience within Slack, tailored to the needs of students and their educational environments.



Integrating AI into the workflow ensures speed and efficiency. How can Salesforce go about this while prioritizing trust, privacy, and control over data for their users?

Why Slack?

While Salesforce offers a wide range of products designed to boost productivity and collaboration, most are tailored to the needs of working professionals. Tools across sales, marketing, and service functions focus on organizational workflows, leaving certain user groups, like students, largely underserved.


Slack stands out as an exception, being actively used by students and educational institutions. This highlights an opportunity to address a vulnerable group: students often interact with AI-powered features without fully understanding how their personal data is collected, stored, or used. Their exposure makes data privacy a critical concern.


We aimed to explore ways to improve AI data privacy specifically for students. By designing transparent, ethical systems with clear privacy defaults and intuitive controls, Salesforce can protect sensitive student information, build trust, and empower young users to manage their own data responsibly. This approach positions Salesforce as a proactive leader in safeguarding the digital experiences of a demographic often overlooked in enterprise-focused strategies.

Process
Design Process

Our design process followed four key phases: Explore, Speculate, Ideate, and Prototype. We began by analyzing trends in AI and data privacy within educational tools to identify key concerns affecting students. Using “Black Mirror” brainstorming, we imagined extreme future scenarios Slack’s AI might create, helping us anticipate potential ethical and privacy challenges. These scenarios were then turned into speculative concepts and paper prototypes, making possible issues and solutions tangible.


Next, we validated these concepts through Wizard of Oz testing and student feedback, creating relatable narratives to see how real users responded. Finally, we refined our ideas into actionable design concepts that directly addressed student concerns, ensuring the resulting solutions were both practical and sensitive to privacy needs in educational settings.

Research

As AI integrates into educational tools, it’s reshaping how teachers and students interact, collaborate, and share feedback, unlocking new possibilities for engagement, personalization, and smarter learning.

We looked into how AI is integrated into different communication tools used in education systems, and this is what users had to say:

01

The most common implementation of AI is in meetings, where it summarizes important conversations and generates transcriptions.


02

Many educational institutions are implementing AI-powered chatbots to assist students with routine queries.

03

Some AI tools are being used for automated grading and providing faster feedback to students.


04

Some platforms are exploring AI for monitoring student engagement.

05

All of the platforms follow GDPR and CCPA privacy regulations.

Then we looked into how Salesforce goes about integrating AI into its products.

Black Mirror brainstorming lets us play out worst-case scenarios to spot risks and ethical pitfalls early on

At the time, we didn’t have direct access to Slack AI yet. To navigate this limitation, we turned to a “Black Mirror” style brainstorming approach, imagining extreme, worst-case scenarios to explore how AI could potentially affect students in educational settings. By projecting into these hypothetical situations, we were able to surface risks and ethical concerns that might not be immediately obvious. This method helped us anticipate challenges such as biased feedback, over-reliance on AI for learning, or privacy issues, and gave us the opportunity to address them proactively. Ultimately, it allowed us to approach the design process with foresight, ensuring that the tools we imagined would be both impactful and responsible.

After brainstorming, we organized the potential scenarios into a futures cone diagram, mapping out possible, plausible, and probable outcomes. This visualization allowed us to see different paths AI might take in educational settings, helping us identify areas of risk and plan thoughtful design interventions to address potential issues before they arise.

Validation

Wizard of Oz testing lets us simulate a system’s behavior before it exists, helping us test ideas, gather insights, and refine designs early on

After identifying potential issues, we wanted to validate whether these concerns actually resonated with users. Since we didn’t have access to Slack’s AI tools, we relied on Wizard of Oz testing, using paper prototypes to simulate scenarios around the risks we had identified. Testing these with students who regularly use Slack for coursework discussions helped us understand whether these potential negative outcomes were meaningful enough to design for.


We then analyzed the feedback by clustering responses through affinity mapping. This allowed us to identify recurring themes and pinpoint the fears that surfaced most frequently and felt most impactful. By focusing on these high-priority concerns, we were able to design solutions that addressed what truly mattered to our users. Based on this, we arrived at the design concepts presented below.

Design
Final Concepts

Concept 1: in-line privacy alerts

The In-line Privacy Alert gently flags messages that may contain personal or sensitive information, such as health details or personal identifiers, before they are shared. By prompting students at the right moment, these alerts encourage greater awareness around what is being communicated and with whom. This subtle intervention supports Salesforce’s Responsible AI principles by helping students make more informed choices and better protect their privacy, without interrupting their natural communication flow.

Students can create custom alerts specific to their projects and discussions, setting up user-initiated triggers for keywords that relate to sensitive topics.

Since they’re setting these alerts themselves, they’re more likely to interact with them and stay engaged.
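To make the trigger mechanism concrete, here is a minimal sketch in Python of how user-defined keywords could flag a draft message before it is shared. The function names, default patterns, and alert wording are our own assumptions for illustration, not Slack’s actual API.

```python
import re
from dataclasses import dataclass, field

# Hypothetical in-line privacy alert check; names and defaults are illustrative only.
DEFAULT_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone number": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

@dataclass
class PrivacyAlertSettings:
    # Keywords the student adds themselves (e.g. "diagnosis", "visa status").
    custom_keywords: set[str] = field(default_factory=set)

def check_message(draft: str, settings: PrivacyAlertSettings) -> list[str]:
    """Return human-readable reasons a draft may contain sensitive content."""
    reasons = []
    for label, pattern in DEFAULT_PATTERNS.items():
        if pattern.search(draft):
            reasons.append(f"Looks like this message contains a {label}.")
    lowered = draft.lower()
    for keyword in settings.custom_keywords:
        if keyword.lower() in lowered:
            reasons.append(f'Mentions your flagged topic "{keyword}".')
    return reasons

# Example: a student who flagged "medical leave" as a sensitive topic.
settings = PrivacyAlertSettings(custom_keywords={"medical leave"})
print(check_message("I'll be on medical leave, call me at 555-123-4567.", settings))
```

Because the reasons are returned as plain-language strings, the same structure can power both the gentle in-line nudge and the "why was this flagged?" explanation.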

Concept 2: transparency in AI’s bias moderation

This concept helps users recognize that AI systems don’t always fully understand context, cultural nuances, or variations in language, which can sometimes result in content being flagged even when it isn’t truly sensitive. By clearly communicating these limitations, the design sets realistic expectations around AI accuracy and reduces confusion or frustration for users. It aligns with Salesforce’s Transparent AI principle by offering insight into how decisions are made, fostering trust through greater clarity, openness, and accountability in the system’s behavior.

For example, if two students discuss the USA’s colonial history, the AI might flag it as “sensitive” without grasping the full context. Users can easily see why their comment was flagged, which creates better transparency.

Concept 3: misinterpretation

Here’s a real example from one of our Slack channels. A student and professor were lightheartedly teasing each other during a class discussion, but this is how ChatGPT summarized the exchange:

 

“In this university class group chat, the professor expresses feeling hurt by De’s actions and suggests points should be deducted. De responds dismissively, citing the professor’s age as a reason for forgetfulness.”

 

The mismatch highlights how easily AI can misread tone and intent. By allowing users to set an engagement style, we give them greater control over how AI interprets and summarizes conversations, helping ensure that nuance, context, and humor aren’t lost in translation.

Here’s a comparison of two summaries: one generated without a defined channel tone, and another where the engagement style is set to relaxed. Adding this context significantly changes how the AI interprets the conversation, allowing the summary to better reflect the casual, playful nature of the exchange.
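As a rough illustration of how an engagement style could steer the summary, here is a sketch assuming an OpenAI-style chat client; the style descriptions, prompt wording, and model name are our own assumptions, not Slack’s implementation.

```python
from openai import OpenAI  # assumes an OpenAI-style chat client, not Slack's actual stack

# Hypothetical engagement styles a channel owner could choose from.
ENGAGEMENT_STYLES = {
    "default": "Summarize the conversation factually and neutrally.",
    "relaxed": (
        "This is an informal channel where participants joke and tease each other. "
        "Treat sarcasm and playful banter as friendly, not as conflict, when summarizing."
    ),
}

def summarize_channel(messages: list[str], style: str = "default") -> str:
    """Summarize channel messages, steering the model with the chosen engagement style."""
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    transcript = "\n".join(messages)
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": ENGAGEMENT_STYLES[style]},
            {"role": "user", "content": f"Summarize this class channel:\n{transcript}"},
        ],
    )
    return response.choices[0].message.content

# With style="relaxed", the teasing exchange reads as banter rather than a grievance.
```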


We proposed a user alert that highlights AI’s limitations in understanding sarcasm or context, helping set realistic expectations and reduce misunderstandings when casual or nuanced language is misinterpreted.

Future Scope

01

Currently, the channel tone settings may feel too rigid. For instance, users might hesitate to set a “professional” tone because it could limit flexibility, and might avoid the feature altogether. Preset engagement styles might also overlook subtle nuances in group communication entirely.

02

At the moment, this project focuses on the student perspective. It might be interesting to look at the educator’s perspective too. For example, instructors monitoring student progress with AI could impact grades, so transparency with students is essential here.

Project impact

01

The project was selected to be shared with Salesforce as a reference for potential future exploration. Both our project leads and Salesforce’s design leadership responded positively to the direction, particularly the forward-looking ideas developed within a highly speculative brief. They highlighted the team’s ability to navigate constraints and viewed the concepts as meaningful contributions toward rethinking how AI could be designed more responsibly in educational contexts.

02

While these ideas have not yet been incorporated into Slack’s AI product roadmap, we later noticed a comparable approach in Figma’s recent release. With the launch of Figma Slides, Figma introduced a feature that allows users to define the tone of AI-generated content. Instead of preset options, their solution uses a dial to adjust tone dynamically, reinforcing the idea that providing contextual input can meaningfully improve AI outputs. Seeing a parallel concept appear in a live product validated our direction and underscored the importance of context in shaping more accurate and nuanced AI behavior.

Reflection
Reflecting on the project

Designing for trust means designing for uncertainty


Through this project, I realized that building trust in AI isn’t about making it perfectly accurate; it’s about helping users understand how and why it works. Our designs shifted from trying to make AI invisible to making its reasoning transparent and explainable.

Balancing transparency and cognitive load


A big challenge was surfacing privacy and bias information without overwhelming users. I learned to apply progressive disclosure: give users clarity when they need it, but keep the interface calm and focused during normal interactions.

The value of contextual awareness in AI design


We learned that context, such as tone, intent, or conversation style, deeply influences how AI is perceived. Adding “engagement styles” taught me that small contextual cues can make AI outputs feel more human and trustworthy.
