Brazil's National Social Security Institute (INSS) faced significant backlash over its implementation of an AI system designed to automate the granting of social benefits. The system, known as Isaac, was intended to streamline claims processing, but it instead produced a sharp rise in automatic rejections, often traced to errors in the underlying databases and the system's difficulty handling complex requests.
Many beneficiaries faced delayed or denied benefits, driving a rise in lawsuits against the agency and imposing financial hardship on claimants. The system's complexity, combined with a lack of digital literacy support, disproportionately affected vulnerable populations such as the elderly and people with disabilities.
The INSS was also criticized for a lack of transparency about how the system reached its decisions; requests for more information were denied on the grounds of system security. Experts emphasized the need for a regulatory framework to ensure that AI systems serve the public interest and promote equity.
The incident underscores the importance of weighing the consequences of deploying AI systems, particularly for vulnerable populations. Transparency, public justification, and accountability are essential throughout AI development and deployment to prevent such systems from deepening existing inequalities and to ensure that technology serves the greater good.