TY - GEN
T1 - Multi-Agent Security Tax
T2 - 39th Annual AAAI Conference on Artificial Intelligence, AAAI 2025
AU - Peigné, Pierre
AU - Kniejski, Mikolaj
AU - Sondej, Filip
AU - David, Matthieu
AU - Hoelscher-Obermaier, Jason
AU - de Witt, Christian Schroeder
AU - Kran, Esben
N1 - Publisher Copyright:
Copyright © 2025, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
PY - 2025/4/11
Y1 - 2025/4/11
N2 - As AI agents are increasingly adopted to collaborate on complex objectives, ensuring the security of autonomous multi-agent systems becomes crucial. We develop simulations of agents collaborating on shared objectives to study these security risks and security trade-offs. We focus on scenarios where an attacker compromises one agent, using it to steer the entire system toward misaligned outcomes by corrupting other agents. In this context, we observe infectious malicious prompts, the multi-hop spreading of malicious instructions. To mitigate this risk, we evaluate several strategies: two “vaccination” approaches that insert false memories of safely handling malicious input into the agents’ memory stream, and two versions of a generic safety instruction strategy. While these defenses reduce the spread and fulfillment of malicious instructions in our experiments, they tend to decrease collaboration capability in the agent network. Our findings illustrate a potential trade-off between security and collaborative efficiency in multi-agent systems, providing insights for designing more secure yet effective AI collaborations.
AB - As AI agents are increasingly adopted to collaborate on complex objectives, ensuring the security of autonomous multi-agent systems becomes crucial. We develop simulations of agents collaborating on shared objectives to study these security risks and security trade-offs. We focus on scenarios where an attacker compromises one agent, using it to steer the entire system toward misaligned outcomes by corrupting other agents. In this context, we observe infectious malicious prompts, the multi-hop spreading of malicious instructions. To mitigate this risk, we evaluate several strategies: two “vaccination” approaches that insert false memories of safely handling malicious input into the agents’ memory stream, and two versions of a generic safety instruction strategy. While these defenses reduce the spread and fulfillment of malicious instructions in our experiments, they tend to decrease collaboration capability in the agent network. Our findings illustrate a potential trade-off between security and collaborative efficiency in multi-agent systems, providing insights for designing more secure yet effective AI collaborations.
UR - http://www.scopus.com/inward/record.url?scp=105003998206&partnerID=8YFLogxK
U2 - 10.1609/aaai.v39i26.34970
DO - 10.1609/aaai.v39i26.34970
M3 - Conference article
AN - SCOPUS:105003998206
T3 - Proceedings of the AAAI Conference on Artificial Intelligence
SP - 27573
EP - 27581
BT - Special Track on AI Alignment
A2 - Walsh, Toby
A2 - Shah, Julie
A2 - Kolter, Zico
PB - Association for the Advancement of Artificial Intelligence
Y2 - 25 February 2025 through 4 March 2025
ER -