The Securities and Exchange Board of India (SEBI) has proposed guidelines for the responsible use of Artificial Intelligence (AI) and Machine Learning (ML) in the Indian securities market. The guidelines aim to protect investors, preserve market integrity, and maintain financial stability while promoting innovation.
SEBI's proposed guidelines include establishing internal teams to supervise and audit AI/ML models, with senior management accountable for the AI lifecycle. Market participants would be required to disclose AI/ML usage to investors, particularly where it directly affects them, such as in advisory services or algorithmic trading.
Firms would need to conduct thorough model testing in simulated environments and preserve documentation and logs for at least five years. Companies would also be expected to use diverse, high-quality datasets and train staff to identify and correct biases in algorithmic outputs.
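As a rough illustration of the kind of bias check the consultation appears to contemplate, the sketch below compares outcome rates across groups in a model's outputs and flags large disparities. The records, group labels, and 10% threshold are hypothetical examples chosen for this sketch, not requirements drawn from SEBI's proposal.

```python
# Illustrative only: a simple disparity check on algorithmic outputs.
# The sample records, group labels, and 10% gap threshold are hypothetical,
# not taken from SEBI's consultation paper.
from collections import defaultdict

def outcome_rates(records, group_key="group", outcome_key="approved"):
    """Compute the share of positive outcomes per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for rec in records:
        totals[rec[group_key]] += 1
        positives[rec[group_key]] += int(bool(rec[outcome_key]))
    return {g: positives[g] / totals[g] for g in totals}

def flag_disparities(rates, max_gap=0.10):
    """Flag a gap between best- and worst-served groups above max_gap."""
    gap = max(rates.values()) - min(rates.values())
    return {"gap": round(gap, 3), "flagged": gap > max_gap}

if __name__ == "__main__":
    sample = [
        {"group": "A", "approved": True},
        {"group": "A", "approved": True},
        {"group": "A", "approved": False},
        {"group": "B", "approved": True},
        {"group": "B", "approved": False},
        {"group": "B", "approved": False},
    ]
    rates = outcome_rates(sample)
    print(rates)                    # roughly {'A': 0.667, 'B': 0.333}
    print(flag_disparities(rates))  # {'gap': 0.333, 'flagged': True}
```

In practice, a firm would run such checks on real model outputs and retain the results alongside the documentation and logs the guidelines would require.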
Additionally, firms would need to ensure robust data governance protocols, comply with data protection laws, and communicate technical glitches or data breaches to SEBI and other authorities.
SEBI also proposes a "regulatory lite" framework, applying lighter obligations to AI/ML applications that do not directly affect customers, such as internal compliance or surveillance tools.
The guidelines are open for public feedback until July 11, 2025, and are designed to balance innovation with accountability and investor safety.