DARPA vs. Poison AI: GARD Program Ready for Implementation

It’s a nightmare scenario.

The enemy injects false data into the Pentagon database used to train AI-controlled drone weapons systems.

Subtle alterations to drone target-identification algorithms cause civilian structures to be misclassified as military assets. Trained on the tainted data, the drones inflict significant civilian casualties after mistaking civilian infrastructure for enemy command centers.

International outcry ensues.

Adversaries quickly launch a disinformation campaign to maximize the fallout. Public trust in AI systems deteriorates as tensions escalate globally. The Pentagon faces a crisis of credibility.

Secret operations are compromised due to their reliance on poisoned AI assessments.
Special operations personnel are captured.

It’s just the beginning. It’ll be decades before the extent of the fallout is understood.

This is the type of worst-case scenario the Pentagon is trying to prevent with a DARPA program focused on safeguarding AI and machine learning models.

GARD Program

The Guaranteeing AI Robustness Against Deception (GARD) program was first revealed in January 2022, and it now appears ready for implementation, according to The Brief website.

Matt Turek, deputy director of DARPA’s Information Innovation Office, revealed recently that 70% of the agency’s programs involve AI in some way, The Register reports.
Data poisoning is just one of many bad outcomes that can follow from compromised systems, and the potential targets extend far beyond the Defense Department.

Other possibilities, according to the Modern War Institute, include:

  • Evasion attacks that involve modifying inputs to AI systems
  • Adversaries reverse engineering Pentagon systems to undermine security and strategy
  • Inference attacks focusing on deducing data used to train AI systems, potentially revealing sensitive or classified info
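The evasion attack in the first bullet can be sketched with a toy example. For a linear model, the gradient of the score with respect to the input is just the weight vector, so a small signed perturbation (the idea behind the well-known fast gradient sign method, FGSM) can flip the prediction. The weights and inputs below are made-up illustrative values, not drawn from any DARPA or Pentagon system.

```python
import numpy as np

# Toy linear classifier: score = w . x; predicted class is 1 if score > 0.
# All weights and inputs are illustrative placeholders.
w = np.array([0.8, -0.5, 0.3])
x = np.array([1.0, 2.0, -1.0])   # clean input

score = w @ x                    # negative here, so the clean input is class 0

# FGSM-style evasion: nudge the input along the sign of the gradient of
# the score with respect to x (for a linear model, that gradient is w),
# flipping the predicted class while keeping each change small.
eps = 0.6
x_adv = x + eps * np.sign(w)
adv_score = w @ x_adv            # positive: the perturbed input is now class 1
```

Each input feature moves by at most `eps`, yet the predicted class flips, which is exactly the kind of input-modification attack the bullet describes.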

GARD collaborators include:

  • Two Six Technologies
  • IBM
  • MITRE
  • University of Chicago
  • Google Research


The components so far include:

Armory

The Armory platform, available via GitHub, is for running robust, repeatable evaluations of adversarial defenses in relevant scenarios.

Adversarial Robustness Toolbox (ART)

ART is a Python library developed for machine learning security. It includes tools to evaluate and defend ML models and applications against threats like evasion, poisoning, and inference.

ART supports all major machine learning frameworks and tasks across a range of data types.
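The core evaluation that toolkits like ART automate can be sketched in a few lines: measure a model's accuracy on clean inputs, then again on adversarially perturbed inputs, and compare. The snippet below is a conceptual stand-in using a toy linear model and an FGSM-style perturbation, not a call into ART's actual API.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model and data (illustrative placeholders): a fixed linear
# classifier and 2-D inputs labeled by that same classifier.
w = np.array([1.0, -1.0])
X = rng.normal(size=(200, 2)) * 2
y = (X @ w > 0).astype(int)

def predict(X):
    return (X @ w > 0).astype(int)

def fgsm(X, y, eps):
    # Push each input against its own label: for this linear scorer the
    # loss-increasing direction is -sign(w) for class 1, +sign(w) for class 0.
    direction = np.where(y[:, None] == 1, -np.sign(w), np.sign(w))
    return X + eps * direction

clean_acc = (predict(X) == y).mean()                 # 1.0 by construction
adv_acc = (predict(fgsm(X, y, eps=1.5)) == y).mean() # drops under attack
```

The gap between `clean_acc` and `adv_acc` is the robustness signal an ART- or Armory-style evaluation reports, run repeatably across many attacks and defenses.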

APRICOT

Created by MITRE, the APRICOT dataset serves as a benchmark to help test and improve how AI recognizes objects.

It’s specifically designed to determine how well these systems can spot attempts to trick them. The attacks, called adversarial patches, essentially act as stickers that trick the AI into seeing something that isn’t there.
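Mechanically, a patch attack is just a small region of crafted pixels pasted into an image before it reaches the detector. The sketch below shows that pasting step only; the patch values here are random placeholders, whereas a real APRICOT-style patch is carefully optimized to force a specific misdetection.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy blank image and a random 12x12 "sticker" (illustrative placeholders;
# real adversarial patches are optimized, not random).
image = np.zeros((64, 64, 3), dtype=np.float32)
patch = rng.uniform(0.0, 1.0, size=(12, 12, 3)).astype(np.float32)

def apply_patch(image, patch, top, left):
    """Paste the patch into a copy of the image at (top, left)."""
    out = image.copy()
    h, w, _ = patch.shape
    out[top:top + h, left:left + w] = patch
    return out

patched = apply_patch(image, patch, top=20, left=30)
```

Benchmarks like APRICOT place such patches in real photographs so researchers can measure whether an object detector still reports phantom objects where the sticker sits.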

Armory, ART, and the APRICOT dataset are available free of charge through DARPA’s Public Release and Open Source Initiative.

Kathleen Fisher, director of DARPA’s Information Innovation Office, said during a recent AI summit that the program has funded test attacks on large language models.

GARD researchers managed to fool open-source language models almost perfectly, achieved an 87 percent success rate against the commercial version of GPT-3.5, and a 50 percent success rate against GPT-4, according to GovCon Wire.

She said some AI tools may need more work before they can be trusted in government and military use cases.

Program Winding Down

According to Defense Scoop, the Pentagon requested $10 million for the program in fiscal 2024.

The fiscal 2025 budget proposal includes no additional requests because the program is winding down as GARD capabilities transition to other Defense Department components.

The Pentagon is investing heavily in AI as near-peer nations like China and Russia do the same.

For instance, the U.S. Air Force is eager to digitally transform modern warfare by integrating emerging technologies into its Advanced Battle Management System (ABMS).
