Artificial Intelligence in Financial Market Law: Opportunities, Risks and Regulatory Challenges
- Corinne Blessing
- Feb 4
- 2 min read
Artificial intelligence (AI) is revolutionizing the financial sector and is playing an increasingly central role in compliance, risk management and improving the efficiency of business processes. From automated trading algorithms to fraud detection and regulatory review, AI enables financial institutions to make more precise and faster decisions. But these advances also come with significant legal and ethical challenges, particularly in the highly regulated Swiss financial market.
The financial industry already uses AI in various areas. For example, it is used in fraud detection, where algorithms identify suspicious patterns in financial transactions. Transaction monitoring also benefits from AI, as suspicious activities can be analyzed and reported in real time. Other areas of application include digital forensics, checking sanctions risks, and continuous KYC due diligence processes that ensure that financial institutions meet regulatory requirements. AI can also automate compliance processes, making it easier to comply with regulations while reducing costs.
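As an illustrative sketch of the pattern-spotting idea behind transaction monitoring (all names, amounts and the three-sigma threshold below are hypothetical assumptions, not drawn from any specific institution's system), a minimal statistical outlier check might look like:

```python
from statistics import mean, stdev

def flag_suspicious(history, new_amount, threshold=3.0):
    """Flag a transaction whose amount deviates more than
    `threshold` standard deviations from the customer's history."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        # No historical variation: flag any deviation from the usual amount.
        return new_amount != mu
    return abs(new_amount - mu) / sigma > threshold

# Hypothetical customer history in CHF
history = [120.0, 95.0, 130.0, 110.0, 105.0]
print(flag_suspicious(history, 115.0))   # typical amount -> False
print(flag_suspicious(history, 5000.0))  # strong outlier -> True
```

Production systems combine many such signals (counterparty, geography, timing) and typically use learned models rather than a single rule, but the principle of scoring deviations from an expected pattern is the same.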
Despite the many advantages, there are regulatory challenges that financial institutions must be aware of. In Switzerland, the Financial Market Supervisory Authority FINMA is committed to technology neutrality but demands clear governance structures, transparency and explainability in AI applications. Data protection also plays a central role, especially in connection with banking secrecy and the GDPR. Financial institutions must ensure that customer data is only processed within the legally permissible framework. In addition, regulations such as the Anti-Money Laundering Act (AMLA/GwG) apply, as does the EU AI Act, which, despite not being directly applicable in Switzerland, has international implications for the industry. In its Guidance 08/2024, FINMA formulated clear expectations: financial institutions must maintain a complete inventory of their AI systems, classify them according to risk and monitor them regularly.
A particular risk in this context is the lack of transparency of many AI models. Black-box algorithms make decisions difficult to trace, which is a regulatory problem for financial institutions. Biases arising from unbalanced training data are another major challenge. A further problem is so-called "AI washing": companies market their products as AI-based although they in fact only use simple automation. Regulators such as the US Securities and Exchange Commission (SEC) have already initiated proceedings against companies engaging in misleading AI advertising. To minimize such risks, financial institutions should establish transparent AI strategies, conduct internal reviews and train their employees in AI-specific risks.
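To illustrate why explainability matters, consider a hypothetical linear fraud score (the feature names and weights below are invented for illustration). A white-box model like this can decompose every decision into per-feature contributions that a compliance officer can trace, which is exactly what a black-box model cannot offer directly:

```python
# Hypothetical weights of a transparent linear fraud-scoring model.
WEIGHTS = {"amount_zscore": 0.6, "foreign_country": 1.2, "night_hours": 0.4}

def score_with_explanation(features):
    """Return the total score plus each feature's traceable contribution."""
    contributions = {k: WEIGHTS[k] * v for k, v in features.items()}
    return sum(contributions.values()), contributions

total, parts = score_with_explanation(
    {"amount_zscore": 2.5, "foreign_country": 1.0, "night_hours": 0.0}
)
print(round(total, 2))  # -> 2.7
print(parts)  # shows which features drove the score
```

For complex models, post-hoc attribution methods play a similar role, but the regulatory point stands: each flagged decision should be explainable in terms a supervisor can audit.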
While AI is increasingly used in the financial industry, the human factor remains essential. AI does not replace compliance experts, but complements their work. The focus is shifting from repetitive, time-consuming activities to conceptual problem solving. Compliance teams must increasingly focus on reviewing AI-generated results to ensure their correctness and regulatory compliance. Training and education will become essential to familiarize compliance staff with the latest developments in AI.
The future of financial market law will be heavily influenced by AI. Financial institutions that address regulatory requirements early on and use AI responsibly can secure a clear competitive advantage. At the same time, increasing regulation is to be expected, particularly with regard to transparency, liability issues and data protection. Those who adapt proactively minimize risks and take advantage of the opportunities that AI offers in the financial market.