A recent meeting of the Politburo of the Chinese Communist Party (CCP), focused on “governance of the Internet ecosystem,” has exposed a shift in how Chinese authorities view AI: not just as a frontier industry to be regulated, but as a powerful instrument for social control. According to experts, the government is increasingly leveraging advanced AI tools to automate and intensify censorship, online monitoring, and broader social surveillance.
According to a report by the Australian Strategic Policy Institute (ASPI), these AI systems are being used to censor sensitive photos, monitor public sentiment — including among ethnic minority groups — and enable mass‑scale surveillance. The ASPI report suggests that AI allows the state to monitor “more people, more closely, with less effort.”
Beyond online content monitoring, AI is being integrated into broader state control infrastructures. According to reporting, AI‑enabled tools are increasingly used by private tech firms to moderate content, flag high‑risk users, and score online behavior — effectively turning those firms into “deputy sheriffs” for state censorship. In some cases, AI‑powered moderation is supplemented by human oversight, particularly when content involves political nuance or coded language that requires human judgment.
While the rise of such AI systems signals a new level of efficiency in state suppression, experts warn that it also deepens systemic problems: reducing transparency, entrenching bias, and complicating accountability in judicial and content‑moderation decisions. The shift marks a transformation of AI from a technological frontier into a core part of governance and social control infrastructure, with potentially profound consequences for privacy, freedom of expression, and human rights.