Join us to explore how generative AI can reproduce and amplify deceptive design patterns and how we can prevent them.
We invite researchers and practitioners to share ideas, demos, and visions for more transparent and user-friendly AI-generated experiences.
Have a perspective on AI-enabled deception? A prototype that exposes manipulative patterns? A concept for stronger user protection? Submit a short paper or demonstration and join us for an active, hands-on workshop developing concrete strategies for more transparent AI-driven experiences. We are looking for the following submission types:
Position Paper
Position Papers should present ideas, challenges, or perspectives on how generative AI can enable or prevent deceptive design patterns. They can include early insights, conceptual arguments, or open research questions to spark discussion during the workshop. Submissions should be in the ACM two-column format with a length between two and four pages, excluding references.
Research Statement
Research Statements should outline ongoing or planned work that investigates deceptive design in generative AI. They allow authors to share preliminary findings or research directions and receive feedback from the community. Submissions should be in the ACM two-column format with a length between two and four pages, excluding references.
Demonstration
Demonstrations can showcase tools, prototypes, or practical examples that highlight deceptive designs in AI-generated content, or alternatively propose ways to counter them. They will be explored hands-on during the workshop to inspire discussion and new ideas. Submissions should be in the ACM two-column format with a length between two and four pages, excluding references.