Research Group Human and Societal Factors

The Research Group “Human and Societal Factors” (HSF) focuses on answering research questions in the area of society-centred security and privacy by design. Society-centred security and privacy by design entails security and privacy solutions that are developed from the outset with security, usability, and legal aspects in mind. The research efforts currently cover four areas and one cross-cutting topic:

(1) Security awareness measures to help users understand important attacks on IT systems and their role in cyber defense

(2) Security interventions to effectively and understandably communicate the information security and privacy status of systems to users

(3) User authentication to help people cope with the plethora of passwords they have to manage and to explore viable alternatives

(4) Legal guidelines for the GDPR to enable users and decision-makers without GDPR expertise to operate their IT infrastructure in a legally compliant way

(5) Securing democracies to investigate how democracies can be protected in the face of nation-state adversaries using potentially AI-assisted misinformation and cyberwarfare campaigns


 

Research Area 1 – Security & Privacy Awareness

Involved PIs: Emilia Grass, Peter Mayer, Melanie Volkamer (Head), Marcus Wiens, Frederike Zufall

Active Researchers: Benjamin Berens, Gustavo Gil Gasiola, Anne Hennig, Ashima Khurana, Mattia Mossano, Leonie Schmidt-Enke

Within the Research Area “Security & Privacy Awareness,” our research efforts are directed towards the development and enhancement of awareness measures, with a particular emphasis on their effectiveness in mitigating phishing threats and in meeting security incident reporting requirements, while also addressing legal considerations. These investigations are conducted across diverse contexts, including a partnership with an entity from the energy sector. The research further investigates how long cybersecurity awareness measures bolster the security resilience of both individuals and organizations. The findings reveal that awareness needs to be renewed approximately every six months, with minimal measures proving sufficient for this renewal. This contrasts with the prevalent annual renewal cycle of traditional awareness campaigns.

 

Research Area 2 – Security Interventions

Involved PIs: Jürgen Beyerer, Peter Mayer, Melanie Volkamer, Christian Wressnegger

Active Researchers: Benjamin Berens, Pascal Birnstill, Anne Hennig, Mattia Mossano, Maximilian Noppel, Maxime Veit

The Research Area “Security Interventions” explores potential security interventions that help users make informed decisions and adopt secure behaviors. Currently, design principles for such user interfaces and their alignment with psychological theories are not well understood. We therefore investigate how interventions should be designed to support users effectively. Building on these findings, we intend to research how developers can be supported by intervention design patterns. Another overarching objective of the research area is to develop explainable AI (XAI) outputs. However, current XAI methodologies were found to be vulnerable to attacks, which necessitated addressing these vulnerabilities first. We conduct comprehensive reviews and detailed analyses of such attacks and develop measures to raise awareness of their potential.

Research Area 3 – Usable Secure Authentication

Involved PIs: Patricia Arias Cabarcos (Head), Thorsten Strufe, Melanie Volkamer

Active Researchers: Matin Fallahi, Tobias Länge, Philipp Matheis

The Research Area “Usable Secure Authentication” is dedicated to the human-centric design and assessment of secure, knowledge-based authentication systems as well as continuous biometric authentication. Among the investigated topics are users’ perceptions, usage strategies, and the usability of password managers. Results indicate that password manager use might be more prevalent than previously thought and that ease of use is a pivotal factor in adoption. Additionally, research in this area explores shoulder-surfing resistant authentication within VR and AR environments, in collaboration with the production lab where these technologies are increasingly utilized. In the context of implicit authentication, the research concentrates on leveraging brainwave and eye movement data, assessing the viability and constraints of EEG as a means of authentication.

Research Area 4 – Legal Design Patterns

Involved PIs: Jürgen Beyerer, Indra Spiecker gen. Döhmann (Head)

Active Researchers: Maximilian Becker, Pascal Birnstill, Paul Dieler, Julian Hunter, Mona Winau

The Research Area “Legal Design Patterns” focuses on legal frameworks pertinent to IT security research, aiming to provide actionable advice and guidance for specific scenarios and broader challenges through succinct recommendations. The focus is on crafting technically informed legal directives under the GDPR, enabling users and decision-makers to maintain legally compliant IT operations. The group produced the first comprehensive article-by-article commentary on the GDPR in English, offering detailed insights into its interpretation and addressing pertinent issues for an international audience with a strict emphasis on privacy preservation. In addition, the group conducts extensive analyses of evolving AI regulations across various domains. The interrelationship between the requirements of the AI Act and the GDPR in the context of automated decision-making systems is another focus of this Research Area.

Research Area 5 – Cross-Cutting Topic: Securing Democracies

Involved PIs: Jürgen Beyerer, Indra Spiecker, Melanie Volkamer (Head), Christian Wressnegger

Active Researchers: Amina Gutjahr, Tobias Hilt

The world has changed dramatically since we submitted the proposal for the Topic ‘Engineering Secure Systems’:

(1) Due to the COVID-19 pandemic, remote electronic voting became very popular. As a consequence, we decided to research how the threats associated with remote electronic voting can be understood and mitigated in order to protect our democracies.

(2) Particularly at the beginning of the Russian invasion of Ukraine, fake information emerged as a pressing topic. We therefore decided to conduct research on understanding and mitigating threats in the context of fake information.

(3) Cambridge Analytica demonstrated the power of political microtargeting in election campaigns on social media platforms. We therefore decided to research political microtargeting.