We welcome contributions to the ECSCW 2025 Pre-Conference Workshop on the Malicious Use of AI, taking place on July 1 in Newcastle upon Tyne, UK. The submission deadline is May 26, 2025.
Please note: The call is available in English only.
Call for Contributions
- ECSCW 2025: Pre-Conference Workshop
- Topic: Shifting the Conversation on Malicious Use of AI: A Value-Sensitive Approach for Stakeholder Consensus
- Date: July 1, 2025
- Location: Newcastle upon Tyne, UK
- Duration: Half-Day Workshop
Workshop Overview
As artificial intelligence becomes increasingly embedded in our digital and organizational infrastructures, so too do the risks of its malicious use, from disinformation and surveillance to cyberattacks and online radicalization. These threats are not merely technical challenges but sociotechnical ones, rooted in the intersection of platform design, workplace practices, and evolving governance structures.
This half-day workshop aims to reframe how we approach AI-enabled threats by leveraging a Value-Sensitive Design (VSD) perspective to explore ethical, organizational, and policy dimensions of AI governance. We will bring together participants from academia, policy, and industry to identify pathways for collaboratively addressing malicious AI use in a way that is inclusive, actionable, and grounded in real-world work practices. The workshop offers a forum for interdisciplinary dialogue across informatics, social computing, and organizational research, appealing to researchers and practitioners in CSCW, information systems, digital ethics, and trust & safety.
Organized in collaboration with the CTS//circle.responsibleComputing at the Center for Technology & Society (CTS), TU Vienna, and sponsored by Terrorism And Social Media (TASM) at Swansea University, the workshop provides a forum for sharing applied insights and fostering multi-stakeholder exchange.
We are also pleased to invite a select group of expert panelists and participants from policy and industry (names to be announced) to further enrich the dialogue.
Topics of Interest
- Sociotechnical mechanisms enabling or mitigating malicious AI use
- AI threats in content moderation, trust & safety, OSINT, and cybersecurity
- Case studies on platform governance or regulatory adaptation
- VSD-informed approaches to ethical AI governance
- Empirical or theoretical explorations of AI-related harms
- Strategies for multistakeholder collaboration in high-risk environments
- Organizational adaptation to AI threats in the public and private sectors
- Lessons learned from success and failure in cross-sector governance efforts
Submission Details and Important Dates
- Submission Format: Short position paper (2–4 pages) or equivalent (e.g., white paper, case study, theoretical piece)
- Submission Deadline: May 26, 2025, 23:59 (UTC+2)
- Notification of Acceptance: June 1, 2025
- Workshop Date: July 1, 2025
- Submission Email: Please submit your proposal to kevin.blasiak (at) tuwien.ac.at
Selected participants will be invited to contribute to panel discussions, breakout groups, and co-authored post-workshop reports. A maximum of 20 participants will be accepted to ensure focused and productive dialogue.
Organizers
- Kevin M. Blasiak, PhD – Postdoctoral Researcher, TU Vienna; Leader, Responsible Computing Circle at the Center for Technology & Society (CTS)
- Daniel E. Levenson, MA, MLA – PhD Student, Swansea University; Board Member, Society for Terrorism Research