1. What are the Rules in Switzerland?
In the area of artificial intelligence (AI), Switzerland does not yet have an overarching legal framework; instead, individual sectors are regulated separately in their own Acts. Worth mentioning are the federal guidelines on AI, the Digital Switzerland strategy, the report of the interdepartmental working group on artificial intelligence (IDAG KI), and the FSO's Competence Network for Artificial Intelligence (CNAI). The revised Data Protection Act also covers critical areas where AI is applied.
2. What are the Rules in the European Union (EU)?
In the EU, by contrast, the legislative process for the Artificial Intelligence Act ("AI Act": Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence and amending certain Union legislative acts; hereinafter the "AI Regulation") is in the home stretch. The draft came up for a vote in the plenary of the EU Parliament on June 14, 2023, where European lawmakers agreed on more stringent amendments. These now include a ban on the use of AI technology in biometric surveillance and a requirement for generative AI systems such as ChatGPT to disclose AI-generated content.
However, the AI Act is not expected to be passed before the end of 2023, meaning it will not take effect until mid-2024 at the earliest. After that, there will be a two-year implementation period.
Even though it will be some time before the regulation becomes relevant for (Swiss) companies, they should already familiarize themselves with the current draft.
3. What Impact on Switzerland can be Expected from the new EU Rules?
The AI Regulation would have direct extraterritorial effect: Providers and users in Switzerland would be bound by it if they offer their systems in the EU or if the result of their systems is used in the EU. While providers will regularly be private companies, users could be public bodies as well as private ones – namely if they use systems whose results would be used within the EU. It is conceivable, for example, that a chatbot on a website of the Swiss federal administration would be used for information purposes by Swiss citizens living abroad in the EU – this would also have to comply with the transparency obligations under Article 52(1) of the draft AI Regulation in the future. The draft provides for an exception for authorities using AI systems if this is done in the context of an agreement with the EU or one of its member states in the area of law enforcement and justice. This would mean, for example, an exception for Swiss authorities using AI systems in the context of the agreement with Europol.
Many Swiss AI providers will develop their products for more than just Switzerland, which means that the new European standards of the AI Act are likely to become established in Switzerland as well.
4. What is the Risk-Based Approach of the EU Regulation?
The core element of the AI Act is a risk-based approach, which entails various requirements and prohibitions based on potential capabilities and risks. The higher the risk an AI system poses to the health, safety, or fundamental rights of individuals, the more stringent the regulatory requirements. The AI Act thus classifies AI applications into different risk categories with different consequences:
- Unacceptable risk: AI practices considered a clear threat to safety or fundamental rights (such as social scoring) are prohibited.
- High risk: AI systems in sensitive areas such as critical infrastructure, education, employment, or law enforcement are subject to strict requirements, including risk management, data governance, human oversight, and conformity assessment.
- Limited risk: Systems such as chatbots are subject to transparency obligations.
- Minimal risk: All other AI systems remain largely unregulated.
5. What new Artificial Intelligence (AI) Principles are Added?
The compromise text introduces, in Art. 4a, so-called "General Principles applicable to all AI systems". All actors covered by the AI Act shall develop and deploy AI systems and foundation models in accordance with the following six "AI Principles":
- Human agency and control: AI systems should serve humans and respect human dignity and personal autonomy, and function in a way that can be controlled and monitored by humans.
- Technical robustness and safety: Unintended and unexpected harm should be minimized, and AI systems should remain robust when unintended problems occur.
- Data privacy and data governance: AI systems should be developed and deployed in compliance with data privacy regulations.
- Transparency: Traceability and explainability must be possible, and people must be made aware that they are interacting with an AI system.
- Diversity, non-discrimination, and fairness: AI systems should be inclusive of diverse stakeholders and promote equal access, gender equality, and cultural diversity, and conversely avoid discriminatory effects.
- Social and environmental well-being: AI systems should be sustainable, environmentally friendly, and developed and used for the benefit of all people.
LINDEMANNLAW advises on the impact of existing regulatory and tax rules during the development of your AI product or software, and on the measures to be taken in anticipation of the entry into force of new regulatory and tax requirements.