
In today’s world, your investment decisions are increasingly influenced not by humans, but by software powered by Artificial Intelligence (AI) and Machine Learning (ML). From algorithmic trading to portfolio management and even investment advisory, AI and ML have penetrated every corner of the stock market ecosystem.
Recognizing both the potential and the risks of this technological revolution, India’s market regulator SEBI (Securities and Exchange Board of India) has stepped in with a proactive approach.
SEBI has recently released a consultation paper proposing five major guidelines aimed at ensuring the responsible use of AI and ML in the securities market. The main objective is to strike a fine balance between fostering innovation and protecting investors. The consultation paper is open for public feedback until July 11.
Today, stock markets do not run solely on human intelligence. AI and ML have made significant inroads into areas such as algorithmic trading, portfolio design, market surveillance, and investment advisory. Brokerage firms are using AI to offer investment advice, mutual funds are designing portfolios using AI-driven models, and stock exchanges are employing AI tools to detect irregularities in trading patterns.
However, with these advancements come serious concerns, primarily around transparency, data security, and potential biases. SEBI’s move aims to address these risks head-on.
According to SEBI’s proposal, all market participants—brokers, fund houses, exchanges—using AI and ML will now need to establish dedicated AI/ML teams comprising technical experts. These teams will be responsible for documenting every detail: how models are created, what data is being used, and how outcomes are generated.
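To give a sense of what such documentation could look like in practice, here is a minimal, hypothetical sketch of a model record that a compliance or AI/ML team might maintain. The field names, schema, and example values are illustrative assumptions made for this article, not anything prescribed in SEBI's consultation paper.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ModelRecord:
    """Illustrative documentation entry for one AI/ML model (hypothetical schema)."""
    model_name: str
    owner_team: str
    purpose: str                      # e.g. "order routing", "portfolio construction"
    training_data_sources: list[str]  # where the training data comes from
    features_used: list[str]          # inputs the model consumes
    output_description: str           # what the model produces and how it is used
    last_validated: date
    known_limitations: list[str] = field(default_factory=list)

# Example entry a brokerage's AI/ML team might keep for audit purposes
record = ModelRecord(
    model_name="equity-signal-v3",
    owner_team="Quant Research",
    purpose="Generates buy/sell signals for the advisory desk",
    training_data_sources=["NSE end-of-day prices 2015-2024", "corporate actions feed"],
    features_used=["20-day momentum", "volume z-score", "sector index return"],
    output_description="Daily signal in {-1, 0, +1} per stock, surfaced to advisors",
    last_validated=date(2025, 6, 30),
    known_limitations=["Not tested on illiquid small-caps", "Assumes normal market hours"],
)
print(record.model_name, "documented; last validated", record.last_validated)
```

Keeping records in a structured form like this makes it straightforward to show an auditor, or senior management, exactly what data a model was built on and what it outputs.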
If a third-party AI vendor is involved, the responsibility will still lie with the company deploying the AI systems. Importantly, senior management will be held fully accountable, ensuring that no one can hide behind the complexity of technology.
For AI-powered services that directly impact investors—such as algo trading platforms or advisory services—companies will be required to explicitly disclose the use of AI to their clients. They must clearly inform investors about the risks involved, the accuracy levels, and the limitations of these AI systems, all in simple, easily understandable language. This ensures that even retail investors, who may not be tech-savvy, are fully aware of what is driving their investment decisions.
SEBI has emphasized that AI is not a toy and must undergo thorough testing before being deployed in live markets. The proposed framework mandates that AI models must be tested in simulated environments before actual use. Furthermore, all related data must be stored for a minimum of five years, allowing for audits and accountability in case of any mishaps.
Continuous updates, periodic audits, and regular monitoring of these models will be compulsory to maintain the integrity of the system.
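As a rough illustration of what "testing in a simulated environment" can mean, the sketch below replays a toy trading signal against synthetic price data and writes an audit log entry tagged with a retention date. The strategy, the synthetic data, and the logging format are all assumptions made for the example; only the idea of pre-deployment simulation and multi-year data retention comes from SEBI's proposal.

```python
import json
import random
from datetime import datetime, timedelta, timezone

def toy_signal(prices: list[float]) -> int:
    """Hypothetical model: +1 if the last price is above the 5-period average, else -1."""
    window = prices[-5:]
    return 1 if prices[-1] > sum(window) / len(window) else -1

def simulate(prices: list[float]) -> dict:
    """Replay the signal over a synthetic price series and measure a simple P&L."""
    pnl = 0.0
    for i in range(5, len(prices) - 1):
        position = toy_signal(prices[: i + 1])
        pnl += position * (prices[i + 1] - prices[i])
    return {"trades": len(prices) - 6, "pnl": round(pnl, 2)}

# Synthetic price series standing in for a simulated market environment
random.seed(42)
prices = [100.0]
for _ in range(250):
    prices.append(prices[-1] * (1 + random.gauss(0, 0.01)))

result = simulate(prices)

# Audit log entry with a retention marker (the five-year horizon mirrors the proposal)
now = datetime.now(timezone.utc)
log_entry = {
    "model": "toy_signal_v1",
    "run_at": now.isoformat(),
    "environment": "simulation",
    "result": result,
    "retain_until": (now + timedelta(days=5 * 365)).isoformat(),
}
print(json.dumps(log_entry, indent=2))
```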
One of the biggest concerns with AI systems is bias. If the data used to train AI models is inherently biased, the AI will likely replicate and amplify these biases. SEBI insists that diverse, high-quality datasets be used to train AI models. Staff must also be trained to identify and mitigate potential biases, ensuring fair treatment of all investors and preventing any form of discrimination.
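As a simple illustration of the kind of check this training could involve, the snippet below measures how a hypothetical training dataset is distributed across market-cap segments and flags heavy skew before the data is used to train a model. The segments, threshold, and counts are assumptions for the example only.

```python
from collections import Counter

def flag_skew(segments: list[str], max_share: float = 0.6) -> dict:
    """Flag any segment whose share of the training data exceeds max_share (illustrative threshold)."""
    counts = Counter(segments)
    total = sum(counts.values())
    shares = {seg: n / total for seg, n in counts.items()}
    return {seg: share for seg, share in shares.items() if share > max_share}

# Hypothetical training set labels: which market-cap bucket each sample comes from
training_segments = ["large_cap"] * 800 + ["mid_cap"] * 150 + ["small_cap"] * 50

skewed = flag_skew(training_segments)
if skewed:
    print("Dataset skew detected, consider rebalancing before training:", skewed)
else:
    print("No dominant segment above the threshold.")
```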
Data security forms another cornerstone of SEBI’s proposal. Since AI systems heavily rely on data, SEBI wants companies to adopt stringent cybersecurity protocols, maintain clear privacy policies, and prevent any misuse of user data. These measures aim to preserve investor trust and ensure the ethical use of cutting-edge technology.
SEBI also proposes a differentiated approach for internal-use AI models and client-facing AI models. While internal AI tools may be subject to relatively lighter regulations, models that directly affect clients—such as those used in trading and advisory—will face stricter scrutiny and tighter rules. In simple terms, the higher the risk to the investor, the stricter the regulations.
SEBI’s proposed five-point framework sends a strong message: while technology is essential for the growth and modernization of financial markets, responsibility and accountability are equally crucial. For investors, this move is reassuring as it ensures that even behind the complex algorithms and sophisticated AI tools, there will always be a layer of human wisdom and oversight.
In this evolving landscape, it becomes crucial for every investor to ask: who is really behind my investment advice or trading decision—a human expert or a machine? Thanks to SEBI’s initiative, transparency in answering that question may soon become the new norm.
Disclaimer:
The information provided in this article is for general informational purposes only. It is not intended as financial, investment, or legal advice. Readers should not make any financial decisions based solely on the content of this article and are strongly advised to consult qualified professionals before making any investment or trading decisions. The views expressed are based on publicly available information at the time of writing. SEBI’s proposed guidelines may be subject to change.