US federal government agencies have identified a surge in AI use cases over the past year. But rolling them out? That's where things slow down, thanks to funding gaps, compute shortages, outdated policies, and a workforce still playing catch-up.

A Congressionally mandated report by the US Government Accountability Office (GAO) on the state of federal AI deployments found an explosion in use cases, based on inventories from 11 of the 12 selected agencies. The Department of Defense was included in the review but exempt from inventory reporting due to security restrictions.

According to the GAO, federal agencies reported 1,110 AI use cases in 2024, nearly double the 571 they reported the year prior. The numbers are even more stark when filtering specifically for generative AI - in that case, there was a nearly ninefold increase in identified use cases, with 282 flagged last year as compared to the 32 identified in 2023. 

Of those 282 GenAI use cases, 159 (about 56 percent) have been initiated or are in acquisition or development, while about 41 percent are in implementation or are operational. The remaining few have already been retired.

The GAO said 61 percent of AI projects have centered on internal mission support work, such as operational support, communication improvements, and process streamlining. Public-facing government services, such as chatbots, virtual assistants, and other support automation, make up the second most popular category.

Medical applications are the third most frequently cited use case among federal agencies. According to the GAO, the VA has begun using AI to automate some medical imaging processing, while the Department of Health and Human Services has been using AI to help contain the re-emergent polio virus.

NASA was the only agency reviewed in 2024 that reported no generative AI use cases, despite having listed related activities in its 2023 inventory.

Government agencies: They're just like you! 

Despite having the budget of the United States and the blessing of the US President to Make AI Great Again, at the end of the day, those faceless monoliths of federal bureaucracy face some of the same problems implementing AI as enterprises. 

Of the 12 agencies the GAO spoke with for its report, eight said they had trouble obtaining the compute resources needed to support their workloads, with seven of those blaming a lack of funding. Six also cited challenges with finding or training a skilled AI workforce - a problem that pretty much everyone has right now.

Agencies also share a common concern that AI tools can output biased data or just plain make things up. That's a reason five agencies identified for being cautious with their adoption. The Defense Department and NASA also had a particular issue with the fact that most generative AI systems are still black boxes, and neither is happy with that lack of transparency. 

Then there are problems that are specific to government agencies, which move at a famously glacial pace, often thanks to layers upon layers of red tape. 

Regulations are a particular concern for 10 of the 12 government agencies, which all said that existing federal policy "may not account for or could present obstacles to the adoption of generative AI." The GAO cited cybersecurity as an example, noting that there are so many regulations and policies to comply with that some GenAI use cases may be difficult or nearly impossible to implement appropriately.

Speaking of that glacial pace, a common refrain running through the report is that AI is moving too darn fast for government agencies' policies and practices to keep up. Several agencies also reported having a hard time acquiring the AI products they want because it takes too long to get FedRAMP approval - the US government's attestation that a piece of commercial software complies with its cloud security standards.

As for what government agencies should do to address these shortcomings, the GAO made no formal recommendations in this report. It did note, however, that agencies are leveraging existing frameworks, such as the one the National Institute of Standards and Technology has published, to help guide their GenAI efforts.

Report author Candice Wright, GAO director for science, technology assessment and analytics, told The Register in an email that those policy and practice approaches are key for agencies looking to deploy generative AI systems - that and looking to successful teams to see how they did it. 

"Learning from others' experiences and approaches can be helpful for agencies facing similar challenges or to help them plan ahead for potential hurdles they could face in their own generative AI pursuits," Wright told us, while noting that the problems she singled out won't just go away on their own. 

"These challenges will warrant attention as agencies move forward," Wright said. �