AI is not neutral. Neither are we.
Artificial intelligence learns from us. From the texts we have written, from the images we have produced, from the decisions we have made. The problem is that it also learns from our prejudices and returns them to us amplified, with the apparent authority of an objective system.
In this workshop we will explore, through concrete examples, practical demonstrations, and interactive exercises, how gender biases hide in the algorithms we use every day: in search results, in generated images, in personnel selection processes... Not to frighten us, but to teach us to recognize them and to ask better questions.
Because the way we talk to AI says a lot about us.
And we can choose to make it count.
Laura Nacci is a linguistic communicator, teacher of gender equality in professional settings, and training director at SheTech. She writes a column in "Donna Moderna" exploring the power of words, and is the author of "Words that Hurt. The Adjectives at the Root of Violence Against Women" (Prospero Editore, 2025); "Words and Power at Work. The Gender Gap in Ten Linguistic Stories" (TAB edizioni, 2025); and "What a Pain These Stereotypes. 25 Sayings That Have Messed Up Our Lives", written with Marta Pettolino Valfrè (Fabbri, 2023).
Jacopo Sabba Capetta is an Innovation Manager and designer of collaborative processes. He works with organizations that want to navigate change without leaving people behind, integrating artificial intelligence with human intelligence, convinced that the most useful technology is the one that amplifies what you already know how to do, not the one that replaces it. He is the author of "Hot Coffee" (TAB edizioni, 2026).