Trust to Impact: Trustworthy and Responsible AI for Citizenship, Migration, and Social Good (TIAI)
Key Details
Workshop Date: May 25, 2026
Location: Toronto Metropolitan University, Toronto, Canada
Abstract Submission Deadline: April 17, 2026
Notification of Acceptance: May 1, 2026
Camera-ready Submission: May 8, 2026
Workshop Overview and Motivations
AI systems and AI agents are increasingly deployed in high-stakes social domains, including healthcare, mental health support, public services, and migration- and citizenship-related decision-making. While these technologies promise efficiency, scalability, and broader access, they also raise fundamental concerns around trust, safety, reliability, fairness, accountability, and transparency. These challenges are especially acute in contexts where errors, bias, or misuse can have serious human, social, and legal consequences.
Aligned with the Bridging Divides research program, this workshop focuses on responsible and trustworthy AI at the intersection of technology, society, and policy. The goal is to foster a constructive, interdisciplinary dialogue between researchers, practitioners, policymakers, and community stakeholders on how AI systems can be designed, evaluated, governed, and deployed in ways that promote public trust and social good—particularly in domains related to citizenship, global migration, and other safety-critical social systems.
The Trust to Impact: Trustworthy and Responsible AI for Citizenship, Migration, and Social Good (TIAI) workshop will feature invited keynote talks, technical and non-technical paper presentations, poster sessions, and panel discussions. It is explicitly designed to bridge technical perspectives (e.g., robustness, uncertainty, safety, verification) with societal, ethical, legal, and policy perspectives (e.g., governance, accountability, equity, lived experience, and community impact).
Topics of Interest
We invite submissions of abstracts on topics including, but not limited to, the following:
Technical and Methodological Aspects
- Trustworthy and responsible AI agents
- Confidence, uncertainty quantification, and reliability assessment in AI systems
- Robustness, reliability, explainability, and safety of AI in high-stakes applications
- Privacy-preserving, fair, and interpretable machine learning methods
- Safeguarding AI systems against misuse, distribution shift, and adversarial manipulation
- Hallucination detection, mitigation, and prevention in generative AI and LLM-based systems
- Verification, validation, and auditing of AI systems
- Human–AI collaboration and decision support in safety-critical settings
- Data quality, dataset shift, and responsible data practices
Societal, Ethical, Legal, and Policy Aspects
- AI governance, regulation, and accountability frameworks
- Ethical, legal, and social aspects of AI in citizenship, migration, work, healthcare, and mental health
- Bias, discrimination, and equity in automated and AI-assisted decision-making
- Transparency, explainability, and public trust in AI systems
- Participatory, community-centered, and co-design approaches to responsible AI
- Case studies of AI deployment in public services, health, or migration contexts
- Risk communication, oversight, and institutional responsibility
- Socio-technical perspectives on AI adoption and impact
Submission Format and Review
We invite short abstracts for consideration for oral and/or poster presentation at the workshop. Submissions may represent original research results, work-in-progress, system descriptions, position papers, or policy analyses. All submitted abstracts will be peer-reviewed by the workshop organizing committee based on relevance, clarity, and potential to stimulate discussion.
Accepted abstracts will be published on the workshop website. This workshop does not require full papers, and presentation at the workshop does not preclude later publication of extended versions elsewhere.
Submission Instructions
All submissions should be in PDF format and follow the ACM Primary Article Template. Abstracts may be up to 2 pages in length, excluding references and appendices. At least one author of each accepted abstract must register for and attend the workshop to present the work.
Please submit your work via the EasyChair Submission Site by April 17, 2026.
Audience and Participation
The workshop welcomes participation from researchers, students, practitioners, policymakers, clinicians, community advocates, and industry professionals interested in responsible AI and its role in shaping equitable, trustworthy, and socially grounded digital systems.
Registration: Details and online registration will be made available on the Bridging Divides website in the coming weeks.
Organized by: Reza Samavi and Naimul Khan