Textio co-founder discusses bias in workplace communications, and how some AI propagates it

Share This Post

Understanding Bias in Workplace Communication: The Journey of Kieran Snyder and Textio

Kieran Snyder, co-founder of Textio, a Seattle-based augmented writing startup, has dedicated her career to understanding and addressing bias in workplace communication. Textio was founded with the goal of identifying and mitigating bias in areas such as job descriptions and performance feedback. Snyder stepped down as CEO of the company after 11 years but continues to explore the impact of bias, particularly in the context of the rise of large language models and generative AI. Her work has taken on a new dimension as she delves into the intersection of AI and workplace communication.

Kieran Snyder’s New Venture: Nerd Processor

After leaving her role as CEO of Textio, Snyder launched a new project called Nerd Processor, a website where she shares her insights, revisits previous research, and explores new studies. This platform serves as a space for her to engage with data and share her findings with a broader audience. Through Nerd Processor, Snyder continues to analyze the evolving landscape of workplace communication, with a particular focus on the role of AI in shaping these interactions. Her work on this platform highlights her commitment to understanding the complexities of bias and its implications in the age of technology.

The Experiment: Bias in AI-Generated Feedback

In a recent episode of the “Shift AI” podcast, Snyder discussed her latest research, which reveals the potential for bias in AI-generated performance feedback. She conducted an experiment where she asked ChatGPT to write sample performance reviews for two digital marketers who had a tough first year on the job. The only difference between the two scenarios was the alma mater of the employees: one attended Harvard University, while the other attended Howard University, a prominent historically Black college and university. The results were striking. The feedback generated for the Harvard alum focused on developmental areas such as “stepping up to lead more,” while the feedback for the Howard alum highlighted issues like “lack of attention to detail” and “missing technical skills.”

The Implications of Bias in AI Systems

While Snyder acknowledges that the feedback generated by ChatGPT could, in isolation, be valid, she points out that the aggregate data tells a different story. The AI system consistently associated the Harvard alum with leadership potential and the Howard alum with more fundamental shortcomings. This disparity underscores the inherent bias in AI systems, which can perpetuate stereotypes and discriminatory attitudes. Snyder’s experiment highlights the need for a critical examination of how AI systems are trained and the data they are based on. She argues that building datasets with bias in mind from the start inevitably leads to biased outcomes, reinforcing existing inequities in the workplace.

The Impact on Workplace Fairness

The implications of Snyder’s findings are profound, particularly in the context of workplace fairness and equity. If AI systems are used to generate performance feedback, hiring decisions, or other employment-related communications, the embedded bias could have far-reaching consequences. Employees from underrepresented groups, such as those who attended historically Black colleges and universities, may be disproportionately impacted by these biases, facing more critical feedback and fewer opportunities for growth. Snyder’s research serves as a warning about the potential for AI to exacerbate existing inequalities in the workplace, rather than mitigating them.

The Way Forward: Addressing Bias in AI

To address the issue of bias in AI, Snyder emphasizes the importance of critically examining the data used to train these systems. She suggests that developing AI systems with abias in mind from the start is essential to ensuring fairness and equity in workplace communications. Additionally, ongoing research and transparency in AI development processes can help identify and mitigate bias. As the use of AI in workplace communications continues to grow, it is crucial to prioritize ethical considerations and work towards creating systems that promote inclusivity and fairness for all employees, regardless of their background.

By tuning into the full episode of the “Shift AI” podcast, listeners can gain deeper insights into Snyder’s research and the broader implications of AI in workplace communication. The episode offers a thought-provoking discussion on the challenges and opportunities presented by AI, as well as the steps needed to ensure that these technologies are used responsibly and equitably. As AI becomes an increasingly integral part of our work lives, the lessons from Snyder’s research remind us of the importance of vigilance and proactive measures to combat bias in all its forms.

Related Posts

Samsung Display Wowed Us With Its Concept Devices at MWC 2025

Introduction: Samsung's Vision for the Future of Displays Samsung has...

Is It Time to Transfer Frozen Russian Assets to Ukraine? Calls Grow Louder.

A New Path for Ukraine: Unleashing Frozen Russian Assets The...

‘Connections’ March 4: Hints and Answers for Puzzle #632

Welcome to the World of Connections: Your Daily Brain...

Global Okanagan’s Adopt A Pet

Global Okanagan’s Weekly Pet Adoption Segment: A Beacon of...

Philippine Air Force fighter goes missing during ‘tactical’ operation

Incident Involving Philippine FA-50 Fighter Jet Raises Concerns and...