INNOVATION
Issue 43: Fall 2025
Beyond artificial intelligence: How perspective-aware AI is making technology more collaborative
Meet the Expert
Artificial intelligence (AI) is often seen as a force that replaces human effort. But what if it could instead amplify our knowledge, creativity and ability to learn from one another? That question drives the work of Hossein Rahnama, the Edward S. Rogers Chair in Artificial Intelligence and Human Creativity and a professor at The Creative School at Toronto Metropolitan University (TMU). Throughout his career, professor Rahnama has explored how people and machines could collaborate rather than compete.
His interest in human-centred AI began with early research focused on personalizing digital content using large amounts of data. But as technology evolved, so did his perspective. “The old model was about algorithms pushing information to users,” he said. “Now people want to collaborate with technology, not just receive from it.”
Making immersive technologies more collaborative and transparent
That shift in thinking led Rahnama to explore how AI might become more collaborative, not just responding to data, but learning from people’s experiences. His latest research focuses on Perspective-Aware AI (PAi), which asks how machines can better understand the way people think, feel and interpret the world.
“Perspective-aware AI challenges us to see technology as a partner in understanding,” professor Rahnama said. “It’s about creating systems that don’t just predict what we’ll do next but recognize who we are and what matters to us.”
Building on this idea, professor Rahnama worked with researchers at TMU and the MIT Media Lab, where he is a visiting professor, to apply Perspective-Aware AI to extended reality systems, such as virtual and augmented reality. Together, the team developed a new framework called Perspective-Aware AI in Extended Reality (PAiR).
Extended reality technologies use AI to blend digital and physical worlds, creating lifelike, immersive experiences that are transforming how we live, work and connect. Yet, despite their sophistication, today’s immersive technologies lack a true understanding of the people using them. “Current systems are good at responding to what users do in the moment,” professor Rahnama explained. “But they rarely consider how someone’s experiences, beliefs or goals shape their decisions.”
At PAiR’s core are Chronicles, dynamic records that represent a user’s knowledge, choices and experiences over time. These secure, privacy-protected identity models are built from a person’s or group’s digital footprint, including text, images and online interactions. “Think of a Chronicle as a digital extension of yourself,” said professor Rahnama. “It can learn from you, represent your knowledge and even collaborate with other people’s Chronicles.”
Unlike typical AI models that rely on vast public datasets, Chronicles draw only from an individual’s data and can be shared selectively, allowing for transparency. “When we share our Chronicles transparently, we can see our biases and understand where others are coming from,” he explained.
Building empathy through technology
This richer, more personal picture allows AI to create experiences that build empathy and feel more meaningful, potentially transforming collaboration in many aspects of life. Professor Rahnama’s early prototypes demonstrate the concept in action. His Perspective-Aware Desk Environment created a virtual workspace that adapted based on the user’s cognitive state. The Perspective-Aware Financial Helper offered guidance on request, and its advice evolved as a person’s attitudes toward risk and responsibility changed.
In health care, similar systems could help doctors consult a colleague’s Chronicle for a second opinion or tailor therapy to a patient’s emotional state. In classrooms, students might learn by exploring subjects through their peers’ or mentors’ perspectives. And, in public life, perspective-aware systems could reduce polarization by showing how and why people think differently.
In one recent project, professor Rahnama collaborated with two York University professors and writer Steven Jenkinson, a former palliative care director at Mount Sinai Hospital. To explore how end-of-life reflections could inform AI design, he asked Jenkinson what people most often say in their final days. The experience reminded professor Rahnama that what truly matters when building technology is embedding human literacy, empathy and emotional insight into AI systems. By integrating these lessons into his lab’s training models, he hopes to ensure that future technologies reflect deeper human values, not just technical intelligence.
A more personal future for AI
This research aims to redefine our relationship with AI technologies. Instead of the traditional concept of human-computer interaction, which focuses on how people interact with machines, professor Rahnama envisions a new approach based on humane, calm and intelligent interfaces. This approach emphasizes human-to-human communication, with AI quietly supporting in the background. “AI shouldn’t replace us,” he said. “If we design it well, it will give us more time to learn from one another and focus on creative, human pursuits.”
Read the paper “Perspective-Aware AI in Extended Reality,” published by Springer Nature, to learn more about this research.
