Artificial intelligence has rapidly become a priority concern for the stability of the United States financial system, according to testimony by US Treasury Secretary Scott Bessent before the Senate Banking Committee. He noted that regulators are intensifying their scrutiny of how AI technologies are used across financial markets and institutions, recognizing that while AI can bolster operational efficiency, it may also introduce new systemic vulnerabilities if not properly overseen.
Bessent highlighted that the Financial Stability Oversight Council (FSOC), a body that includes the Treasury, the Federal Reserve, and other key regulators, has listed AI among the focus areas in its latest annual report. The council is “prioritizing the responsible use of artificial intelligence to strengthen financial stability,” underscoring that AI’s growing role in compliance, risk management, and fraud detection could simultaneously expose the system to unforeseen risks.
While AI is widely deployed for beneficial purposes within financial institutions, including automating compliance workflows and enhancing fraud detection, regulators warned that new vulnerabilities may arise from both legitimate and malicious use of AI, especially by sophisticated state or non-state actors. This concern reflects broader unease that advanced, opaque AI systems could magnify systemic risk in the absence of adequate safeguards and oversight mechanisms.
This focus on AI risk builds on earlier warnings from US regulators that rapid AI adoption in finance, without corresponding governance, transparency, and risk controls, could contribute to safety and soundness issues, model risk, and cybersecurity challenges. Analysts and systemic-risk watchers argue that regulators must deepen their expertise and monitoring capacity to keep pace with how AI is reshaping financial decision-making and market dynamics.