
How to program a better tomorrow: Harnessing disruptive technologies
Innovation Issue 38: Summer 2023

A collaborative approach to improving cybersecurity



Rogers Cybersecure Catalyst Fellowship members Burcu Bulgurcu, Rasha Kashef and Reza Samavi (top row) and Monika Freunek, A.J. Khan and Jeff Schwartzentruber (bottom row).

From remote work, to app-controlled appliances, to online shopping reviews, modern life has never been more connected. But this increased reliance on the internet comes with a related increase in threats to system security, data privacy and physical safety.

To tackle cybersecurity challenges, six researchers and cybersecurity experts from Toronto Metropolitan University (TMU) and the private sector are bridging the gap between academia and industry as the inaugural cohort of the Catalyst Fellowship program at Rogers Cybersecure Catalyst.

The fellows’ research will carve a path for more trustworthy algorithms and their by-products, including more reliable online reviews and novel systems for locating and monitoring cyber threats. Their research will provide a better understanding of current security systems and their vulnerability to harmful and disruptive attacks.

The fellowship program fosters a strong network of researchers and collaborators from diverse backgrounds and areas of expertise for cross-disciplinary collaboration, innovation and mentorship. While each fellow conducts an individual research project, the group works together to brainstorm how to overcome research obstacles, to offer each other helpful insights and to discuss opportunities and issues in cybersecurity. In addition to their research projects, the fellows co-create and lead three webinars on key cybersecurity issues and policies. They also engage with the overall Catalyst community, including entrepreneurs, directors and other TMU faculty and students.

“The collaboration between the fellows, whether academic or industry, is a great enriched experience,” said Rasha Kashef, a research-stream fellow and electrical, computer, and biomedical engineering professor at TMU. “How to package the research, how to commercialize it, how to align training with industry standards and not only the academic standards – this is where the communication with the industry fellows comes in.”

Using machine learning to enhance resilience

For her fellowship project, professor Kashef is building a novel, robust recommendation system that uses machine learning to detect and then ignore or remove malicious recommendations injected by shilling attacks, such as fake online shopping reviews. Shilling attacks occur when bad actors try to fool recommendation systems into promoting products for their own gain by adding fake reviews, ratings or both. “If we don’t have trustworthy e-communication, our digital economy can be affected,” said professor Kashef.

Her project uses generative modelling and a proposed new architecture to create a system that keeps online recommendations as accurate and trustworthy as possible. The next steps in the research will be to conduct a pilot project and work toward creating a product that can be commercialized and sent to market.
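The detection side of such a system can be illustrated with a much simpler, classical heuristic (a sketch only, not professor Kashef's generative architecture): score each user by how far their ratings deviate from the per-item consensus, since coordinated shills must push ratings away from what honest users report. All names and data below are illustrative.

```python
from collections import defaultdict

def shilling_scores(ratings):
    """Score each user by average deviation from item consensus.

    ratings: list of (user, item, rating) tuples.
    Returns {user: mean absolute deviation from per-item averages}.
    A shill pushing one item far above the crowd's rating stands
    out with a high score.
    """
    by_item = defaultdict(list)
    for _, item, r in ratings:
        by_item[item].append(r)
    item_avg = {i: sum(rs) / len(rs) for i, rs in by_item.items()}

    dev = defaultdict(list)
    for user, item, r in ratings:
        dev[user].append(abs(r - item_avg[item]))
    return {u: sum(ds) / len(ds) for u, ds in dev.items()}

# Honest users rate item "A" around 2; a shill pushes it to 5
# while blending in with a normal rating on item "B".
data = [("u1", "A", 2), ("u2", "A", 2), ("u3", "A", 2),
        ("shill", "A", 5),
        ("u1", "B", 4), ("u2", "B", 4), ("shill", "B", 4)]
scores = shilling_scores(data)
suspect = max(scores, key=scores.get)  # "shill"
```

Real detectors must also handle users with legitimately unusual taste, which is one reason research like professor Kashef's moves beyond simple deviation scores.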

Industry-stream fellow Jeff Schwartzentruber, a senior machine learning scientist at eSentire and TMU alumnus, is developing a machine learning system to more efficiently extract and analyze data for enhanced security analytics, which could lead to a more limber method for catching threats. 

Typically, security analysts take log data from computers and process it to identify specific signatures that correlate with known threats. This method requires a lot of manual effort, since no industry standard for log data formats exists and computers create far more data than engineers can process. It also has limited ability to detect unusual activity that falls outside identifiable signatures.

“For every log source a data engineer incorporates, they have to write a configuration file and ensure it is compatible with every part of the data pipeline. It’s a pretty rigid system,” said Schwartzentruber.

To create more flexibility and resilience in the operations pipeline for when new threats emerge, Schwartzentruber is creating machine learning models that can ingest any data type and look for unusual patterns instead of specific signatures. Using natural language processing, which allows computers to understand human language, engineers can process far more data and improve resilience to new threats.

“It won't be able to confirm specific malware, but it can flag something anomalous is occurring and flag security experts to investigate,” said Schwartzentruber.
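A toy version of signature-free log monitoring looks like this (a minimal sketch of the general idea, not eSentire's pipeline): treat log lines as text, learn how common each token is in normal traffic, and give a high anomaly score to lines full of tokens the model has never seen. The log lines and smoothing scheme are illustrative assumptions.

```python
import math
from collections import Counter

def train_baseline(log_lines):
    """Count token frequencies over normal log traffic."""
    counts = Counter()
    for line in log_lines:
        counts.update(line.lower().split())
    return counts, sum(counts.values())

def anomaly_score(line, counts, total):
    """Mean negative log-probability of the line's tokens.

    Tokens never seen in training get a small smoothed probability,
    so lines built from unfamiliar vocabulary score high.
    """
    tokens = line.lower().split()
    score = 0.0
    for t in tokens:
        p = (counts.get(t, 0) + 1) / (total + len(counts) + 1)
        score += -math.log(p)
    return score / max(len(tokens), 1)

baseline = [
    "user alice login success from 10.0.0.5",
    "user bob login success from 10.0.0.7",
    "user alice logout",
]
counts, total = train_baseline(baseline)

normal = anomaly_score("user bob logout", counts, total)
weird = anomaly_score("powershell encodedcommand exfil beacon",
                      counts, total)  # weird > normal
```

As the quote above notes, a score like this cannot name the malware; it can only flag that a line looks unlike anything in the baseline and hand it to a human analyst.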

Finding security solutions for a connected world

Attacks by malicious actors can also affect physical safety and security when it comes to the Internet of Things (IoT), a catchall term for many modern-day connected devices like personal health-care monitors, smart home appliances and app-controlled security systems.

“Making devices smart and being connected has been the default decision, but security was not really the priority,” said Monika Freunek, founder of Lighthouse Science Consulting and Technologies Inc. and a leader in the critical infrastructure and cybersecurity fields.

As an industry-stream fellow, Freunek is aiming to address the gap in IoT cybersecurity by working with industry and academic experts to determine what solutions are currently available and how they can be implemented in ways that are practical and efficient for today’s society.

One of the challenges in IoT security is that, although IoT devices are omnipresent, there is no clear definition of the IoT or of what constitutes an IoT security breach. According to Freunek, because IoT devices are not clearly thought of as computers, regulations and standards rarely cover them. As a result, much of the risk has been left to individuals, who are not equipped with an efficient means to manage it.

Freunek’s research aims to develop recommendations for technical and regulatory approaches and to determine how solutions can successfully account for the complex global environment of the IoT. Freunek is also working to identify which sectors and actions should be prioritized from a Canadian perspective to most effectively improve security in the short and long term. 

Industry-stream fellow A.J. Khan, CEO and founder of Vehiqilla Inc., specializes in automotive security. Through his fellowship, Khan is developing a Vehicle Security Operations Centre to monitor cyber threats aimed at connected and autonomous vehicles. Connected-vehicle security is a new area of cybersecurity, and one that most governments have not yet regulated. However, as connected vehicles become more ubiquitous, the potential severity of threats makes it an area of great importance.

“Imagine a person is riding in a driverless vehicle, the doors are locked, and they are on the highway. The person gets a message from a hacker to pay so much in bitcoin or the hacker will take control of the vehicle,” said Khan, who is working to prevent such occurrences.

Khan uses ethical hacking, in which he probes a vehicle's systems with permission, to understand how connected vehicles can be compromised. For example, hackers might be able to gain entry through Wi-Fi, cell phone signals or apps used to start vehicles. Khan is also examining how vehicles respond to hacking techniques and what factors may prevent them from reacting safely, since vehicles operate in uncontrolled, dynamic environments.

Learning from the past

The pandemic-driven mass transition to remote work meant many employees were working outside firewalls for the first time, and reports of cybercrimes spiked significantly.

“We know that this is related to the rapid digitalization of the workforce that happened on an unprecedented scale,” said Burcu Bulgurcu, an information technology management professor who focuses on behavioural and strategic cybersecurity and privacy issues for technology users and organizations.

As a research-stream fellow, she is examining the impact of remote work on the management of cybersecurity during and after the COVID-19 pandemic, which forced organizations to transform and digitalize their work environments without sufficient planning and preparation.

“My goal is to pinpoint the problematic or insufficient work practices that lead to these issues and propose solutions and guidelines,” said professor Bulgurcu. “Was it the adoption of new technologies? Was it the need to update existing policies and best practices? Was it the allocation of resources and expanding capabilities of the workplaces?”

Through a series of interviews with technology and cybersecurity professionals, as well as business managers at both traditional and more dynamic workplaces, professor Bulgurcu is discovering what and how organizations learned about cybersecurity as they adapted to the pandemic-driven changes. Her research will help identify the gaps in cybersecurity measures, policies and training programs related to remote work and provide guidelines and recommendations to improve organizations’ levels of preparedness. According to professor Bulgurcu, learning from the pandemic-driven mass digitalization is overdue now that hybrid work has become the norm.

Looking to the future

What if the very systems used to protect people and data from cyberattacks are themselves untrustworthy? In his research-stream fellowship, TMU electrical, computer, and biomedical engineering professor Reza Samavi aims to create discussion around the trustworthiness of algorithms used in the current cybersecurity realm. He is using a “security for machine learning” approach to cybersecurity to explore the intended or unintended corruption of algorithms and raise awareness of potential issues. 

Machine learning algorithms used in safety-critical domains could be fooled into making incorrect predictions, intentionally or unintentionally, by active adversaries or by environmental conditions. For example, a machine-learning model trained for an autonomous vehicle could be fooled by an altered road sign that causes the vehicle to speed up instead of stopping at an intersection, putting the passengers at risk.
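The road-sign example can be made concrete with a tiny linear classifier. Nudging each input feature against the model's weights, the intuition behind the well-known fast gradient sign method, flips the prediction even though no single feature changes by much. The weights, inputs and step size below are arbitrary toy values, not a real sign-detection model.

```python
def predict(w, b, x):
    """Linear classifier: a positive score means 'stop sign'."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def fgsm_perturb(w, x, eps):
    """Shift every feature by eps against the 'stop' decision,
    following the sign of its weight (the fast gradient sign
    method specialized to a linear model)."""
    sign = lambda v: 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)
    return [xi - eps * sign(wi) for wi, xi in zip(w, x)]

w = [0.9, -0.4, 0.7, 0.2]     # toy detector weights
b = -0.1
x = [0.5, 0.2, 0.4, 0.2]      # clean input, classified as 'stop'

clean_score = predict(w, b, x)          # 0.59 -> 'stop'
x_adv = fgsm_perturb(w, x, eps=0.3)
adv_score = predict(w, b, x_adv)        # -0.07 -> sign missed
```

Deep networks are far less transparent than this linear toy, but the same mechanism, many small coordinated input changes accumulating into one large change in the output, is what makes adversarial examples dangerous in safety-critical systems.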

“To be aware of these issues and develop robust algorithms that can withstand intended or unintended corruption, we should not only analyze the algorithms from the security perspective but also understand how certain we are about the outcome of the algorithms,” said professor Samavi.

Professor Samavi is also developing training recommendations to help security professionals recognize both existing and emerging algorithmic cybersecurity threats, preparing them to develop solutions. Beyond the fellowship, professor Samavi, a faculty affiliate with the Vector Institute for Artificial Intelligence, aims to make security an integral component of the trustworthy machine learning pipeline and, to a larger extent, develop trustworthy artificial intelligence.

“This fellowship is a good opportunity for me to learn from industry about their needs and the training happening there,” said professor Samavi. “I’m learning how I can contribute to training a new generation of cybersecurity practitioners in not only traditional security methods but also for new security threats.”


Learn more about Rogers Cybersecure Catalyst and the Catalyst Fellowship Program.

Funding for the Catalyst Fellowship Program is provided by Rogers Communications Inc., through its partnership with Rogers Cybersecure Catalyst.