AI & Cybersecurity Red-Teaming Lab

Project: Research

Project Details

Description

Initiated the design of an AI & Cybersecurity Red-Teaming Lab to investigate jailbreak vulnerabilities in large language models, iOS devices, and federated learning environments. Developed experimental frameworks for adversarial prompt injection, malicious client simulations, and Deepfake-based attacks on medical imaging systems. Proposed layered defenses, including guardrails for LLMs, forensic imaging verification, and federated anomaly detection, to build resilience against emerging threats.
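One of the proposed defenses, federated anomaly detection, can be illustrated with a minimal sketch. The code below is a hypothetical example (not the project's actual framework): it simulates a training round in which one malicious client submits an inflated model update, and flags outliers by a robust z-score on update norms.

```python
import statistics

def flag_anomalous_clients(client_updates, threshold=2.0):
    """Flag clients whose update norm deviates sharply from the cohort.

    client_updates: dict mapping client id -> list of gradient values
    (a simplified stand-in for real model updates).
    """
    # L2 norm of each client's update vector
    norms = {cid: sum(v * v for v in vec) ** 0.5
             for cid, vec in client_updates.items()}
    median = statistics.median(norms.values())
    # Median absolute deviation gives a robust spread estimate
    mad = statistics.median(abs(n - median) for n in norms.values()) or 1e-9
    # Flag clients whose robust z-score exceeds the threshold
    return {cid for cid, n in norms.items()
            if abs(n - median) / mad > threshold}

# Simulated round: three benign clients plus one boosted malicious update
updates = {
    "c1": [0.10, -0.20, 0.05],
    "c2": [0.12, -0.18, 0.04],
    "mal": [5.00, -4.00, 6.00],  # scaled-up update from a malicious client
    "c3": [0.09, -0.22, 0.06],
}
print(flag_anomalous_clients(updates))  # → {'mal'}
```

A median/MAD statistic is used rather than mean/standard deviation so that the malicious update itself cannot mask its own detection by inflating the spread estimate.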

Key findings

Advanced AI and mobile-device security through adversarial red-teaming, jailbreak analysis, and Deepfake attack simulation.
Status: Active
Effective start/end date: 08/1/25 → …
