Anna Katariina Wisakanto

Anna Katariina Wisakanto is an AI safety researcher and strategist dedicated to addressing risks from global systems and AI. As a researcher at the Center for AI Risk Management & Alignment (CARMA), she develops methodologies for the comprehensive risk assessment of advanced AI systems and their capabilities.

Her research contributes to the field of AI risk management through work on evaluations, risk assessment, and a holistic understanding of the actual risks and limitations of AI systems. Her ongoing project, Comprehensive Risk Assessment, approaches risk assessment from first principles, using novel analytical methods to model the pathways connecting AI capabilities to potential catastrophic outcomes and to identify those that pose the greatest risk.

With a background in philosophy, engineering physics, and complex adaptive systems, Anna combines theoretical and empirical work. She holds an engineering physics degree from Chalmers University of Technology, where her thesis focused on quantum error correction.

As an occasional visiting scholar at the Leverhulme Centre for the Future of Intelligence, University of Cambridge, Anna explores the intersection of the philosophy of AI, complex adaptive systems, and cognitive science, particularly the impact of global AI systems on human cognitive and moral autonomy. She advises industry leaders on emerging technologies and contributes to high-level foresight working groups.

Anna Katariina Wisakanto is a member of the European Leadership Network (ELN), the AI safety community at the Future of Life Institute (FLI), the New European Voices on Existential Risk initiative (NEVER), and the European Network for AI Safety (ENAIS).

Based in Helsinki, Finland, Anna embraces a research-first digital-nomad lifestyle, frequently traveling between London, Warsaw, and the Bay Area. She subscribes to Crocker's rules and actively welcomes unsolicited constructive feedback.