Responsible AI Special Day at KDD 2025
Join Us at KDD 2025 for a Special Day on Responsible AI
We’re excited to announce the Responsible AI Special Day at KDD 2025, one of the world’s leading conferences on knowledge discovery and data mining. This special event will take place on August 5, 2025, in Toronto, Canada, and will bring together researchers, practitioners, and policymakers to explore the critical challenges and opportunities in the development of responsible, ethical, and socially conscious AI systems.
Why Responsible AI?
As artificial intelligence becomes increasingly embedded in everyday life, from healthcare to finance, from education to public policy, the need for responsible approaches to its development and deployment is more urgent than ever. This special day aims to foster meaningful discussions and share innovative research and practices that can help build fairer, more transparent, and trustworthy AI systems.
Call for Papers
We invite submissions on a wide range of topics related to Responsible AI, including but not limited to:
Fairness, Bias & Equity: Techniques for assessing and mitigating algorithmic bias
Adversarial AI & Red Teaming: Methods to test and improve system robustness
Transparency & Explainability: Strategies to make AI systems more understandable and interpretable
Accountability & Recourse: Approaches to ensure responsibility and redress in AI decisions
AI Ethics & Social Impact: Studies on justice, labor, equity, and societal risks
Human-Centered AI: Participatory, interdisciplinary, and values-sensitive system design
Auditing & Evaluation: Tools and methods for responsible assessment of AI systems
Sociotechnical Perspectives: Research on the cultural and societal contexts of AI adoption
Regulatory & Policy Dimensions: Legal frameworks, data governance, and compliance
Sustainability & Environmental Impact: Ecological implications of AI technologies
AI Ethics Education: Innovations in training and curriculum development
Trust & Reliability: Techniques for providing assurance and supporting appropriate reliance
Risk Management & Safety: Technical and policy-based mitigation strategies
We encourage interdisciplinary submissions that bridge technical, social, and humanistic perspectives. Research papers, theoretical work, case studies, empirical evaluations, and position papers are all welcome.
Submission Guidelines
We accept two types of submissions:
Original Papers (up to 6 pages): These can include new research, practical applications, or theoretical contributions.
Previously Published Papers: Share impactful work already published in leading venues. Submit a 1-page abstract summarizing the published work, along with a full citation and a copy of the original paper.
Submission site: OpenReview – Responsible AI Special Day
Format: LNCS (Lecture Notes in Computer Science), using the LNCS Template
Review process: Single-blind peer review
🗓️ Key Dates
Submission Deadline: June 28, 2025 (tentative)
Notification of Acceptance: July 21, 2025
Responsible AI Special Day: August 5, 2025
👥 Organizing Committee
Ebrahim Bagheri (University of Toronto)
Robin Cohen (University of Waterloo)
Faezeh Ensan (Toronto Metropolitan University)
Benjamin C. M. Fung (McGill University)
Sébastien Gambs (Université du Québec à Montréal)
Reihaneh Rabbany (McGill University)
Calvin Hillis (Toronto Metropolitan University)
✉️ Contact Us
Questions? Reach out at responsibleai@torontomu.ca.
We look forward to your submissions and to sparking vibrant conversations about the future of responsible AI. See you in Toronto this August!