Feed

16/04/2024
Cross-border Financial Services 2024 webinar series
We're delighted to announce the launch of our third season of international webinars focusing on financial regulation, starting on 13 March 2024. Whether you are an in-house lawyer, compliance officer, financial analyst, risk manager, or any other professional concerned with maintaining the integrity of your organisation's financial practices, this series offers succinct 20-30 minute overviews of key industry trends and regulatory concerns across multiple jurisdictions. If there are additional topics you would like us to address in one of the webinars, please contact us.

Upcoming webinars:

13 March: Spotting and avoiding red flags. What are the warning signals from firms that regulators act on? How can you spot and address them before the regulator pounces?

16 April: Handling a regulatory investigation. How a firm can understand the regulator's concerns and manage the investigation process.

8 May: Financial crime. Sanctions, money laundering, market abuse and fraud: what are the key issues in your jurisdiction and what are the regulators focusing on?

5 June: Preparing for a regulatory visit. How the banking regulator assesses a firm's systems and controls: what to prepare and what to look out for.

3 July: Handling a challenging application. Your application for a licence, product approval or change in control is meeting with regulatory resistance. How can you surmount these challenges?

31 July: Dawn raids. Unannounced regulator visits are on the increase. We look at what triggers a dawn raid, your rights if one happens, and how best to manage the consequences.

4 September: Navigating the global ESG landscape. Is the regulatory reporting jigsaw puzzle causing more harm than good? We will provide an overview of the main cross-border issues affecting global financial institutions as they seek to manage ever-expanding ESG regulations, and discuss whether these rules are helping or hindering the action we need for change.
The language of the webinars will be English.
22/03/2024
EU Competition Law Briefings 2024
The EU Competition Law Briefings have been created to provide a platform for our clients and other competition law experts to stay up to date on the developments of EU Competition Law. Every month CMS competition experts will present a recent case by the EU Commission or Community Courts during a webinar.
21/03/2024
Bestuurdersaansprakelijkheid en D&O in 2024 (Directors' Liability and D&O in 2024)
During the webinar Bestuurdersaansprakelijkheid en D&O in 2024 (Directors' Liability and D&O in 2024) we will address current trends and developments as well as the legal and insurance-law aspects of directors' liability...
19/03/2024
Webinar: Wet toekomst pensioenen (Future Pensions Act) - Amending the pension scheme
The Wet toekomst pensioenen (Future Pensions Act) entered into force on 1 July 2023. The employer is responsible for drawing up a transition plan for the transition to new premium agreements (premieovereenkomsten)...
15/03/2024
Codes of conduct, confidentiality and penalties, delegation of power and...
Codes of conduct (Currently Title IX, Art. 69)

In order to foster ethical and reliable AI systems and to increase AI literacy among those involved in the development, operation and use of AI, the new AI Act mandates the AI Office and the Member States to promote the development of codes of conduct for non-high-risk AI systems. These codes of conduct, which should take into account available technical solutions and industry best practices, would promote voluntary compliance with some or all of the mandatory requirements that apply to high-risk AI systems.

Such voluntary guidelines should be consistent with EU values and fundamental rights, and should address issues such as transparency, accountability, fairness, privacy and data governance, and human oversight. Furthermore, to be effective, such codes of conduct should be based on clear objectives and key performance indicators to measure the achievement of those objectives. Codes of conduct may be developed by individual AI system providers or deployers, or by organisations representing them, and should be developed in an inclusive manner, involving relevant stakeholders such as business and civil society organisations, academia, etc.

The European Commission will assess the impact and effectiveness of the codes of conduct within two years of the AI Act entering into application, and every three years thereafter. The aim is to encourage the application of requirements for high-risk AI systems to non-high-risk AI systems, and possibly to add further requirements for such AI systems (including in relation to environmental sustainability).
15/03/2024
CMS Real Estate Data Centre Consenting in Netherlands
1. Do you have to enter into a form of agreement with the local authority/municipality when applying for consent for a data centre in your jurisdiction? In cases where a zoning plan amendment for a new...
15/03/2024
Real estate finance law in Netherlands
A. Mortgages
1. Can security be granted to a foreign lender? Yes, a mortgage can be granted to a foreign lender.
2. Can lenders take a mortgage over land and buildings on the land? Yes, lenders can take...
14/03/2024
CMS rankings in Chambers Europe 2024
The new Chambers Europe rankings have been published. We are proud that CMS in the Netherlands has once again achieved excellent rankings. We thank our clients for their trust in us and their positive...
14/03/2024
Governance and post-market monitoring, information sharing, market surveillance
Governance (Currently Title VI, Art. 55b-59)

The AI Act establishes a governance framework under Title VI, with the aim of coordinating and supporting its application at national level, as well as building capabilities at Union level and integrating stakeholders in the field of artificial intelligence. The measures related to governance will apply from 12 months following the entry into force of the AI Act.

To develop Union expertise and capabilities, an AI Office is established within the Commission, with a strong link to the scientific community to support its work, which includes the issuance of guidance. Its establishment should not affect the powers and competences of national competent authorities, or of bodies, offices and agencies of the Union, in the supervision of AI systems.

The newly proposed AI governance structure also includes the establishment of the European AI Board (AI Board), composed of one representative per Member State, designated for a period of three years. Its list of tasks has been extended and includes the collection and sharing of technical and regulatory expertise and best practices in the Member States, contributing to their harmonisation, and assisting the AI Office in the establishment and development of regulatory sandboxes with national authorities. Upon request of the Commission, the AI Board will issue recommendations and written opinions on any matter related to the implementation of the AI Act. The Board shall establish two standing sub-groups to provide a platform for cooperation and exchange among market surveillance authorities and notifying authorities on issues related to market surveillance and notified bodies.

The final text of the AI Act also introduces two new advisory bodies. An advisory forum (Art. 58a) will be established to provide stakeholder input to the European Commission and the AI Board by preparing opinions, recommendations and written contributions. A scientific panel of independent experts (Art. 58b), selected by the European Commission, will provide technical advice and input to the AI Office and market surveillance authorities. The scientific panel will also be able to alert the AI Office to possible systemic risks at Union level. Member States may call upon experts of the scientific panel to support their enforcement activities under the AI Act and may be required to pay fees for the experts' advice and support.

Each Member State shall establish or designate at least one notifying authority and at least one market surveillance authority as national competent authorities for the purposes of the AI Act. Member States shall ensure that each national competent authority is provided with adequate technical, financial and human resources, and with the infrastructure, to fulfil its tasks effectively under the regulation, and that it maintains an adequate level of cybersecurity. One market surveillance authority shall also be appointed by each Member State to act as a single point of contact.
13/03/2024
General purpose AI models and measures in support of innovation
General purpose AI models (Currently Title VIIIA, Art. 52a-52e)

The AI Act is founded on a risk-based approach. This regulation, intended to be durable, was initially tied not to the characteristics of any particular model or system, but to the risk associated with its intended use. This was the approach when the proposal for the AI Act was drafted and adopted by the European Commission on 22 April 2021, and when the proposal was discussed at the Council of the European Union on 6 December 2022. However, after the extraordinary global success of generative AI tools in the months following the Commission's proposal, the idea of regulating AI by focusing only on its intended use came to seem insufficient. In the 14 June 2023 draft, the concept of "foundation models" (much broader than generative AI) was therefore introduced, with associated regulation. During the negotiations in December 2023, additional proposals were introduced regarding "very capable foundation models" and "general purpose AI systems built on foundation models and used at scale". In the final version of the AI Act there is no reference to "foundation models"; instead, the concept of "general purpose AI models and systems" was adopted.

General purpose AI models (Arts. 52a to 52e) are distinguished from general purpose AI systems (Arts. 28 and 63a). General purpose AI systems are based on general purpose AI models: "when a general purpose AI model is integrated into or forms part of an AI system, this system should be considered a general purpose AI system" if it has the capability to serve a variety of purposes (Recital 60d). And, of course, general purpose AI models are themselves the result of the operation of the AI systems that created them.

"General purpose AI model" is defined in Article 3.44b as "an AI model (…) that displays significant generality and is capable to competently perform a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications". The definition is somewhat circular (a model is "general purpose" if it "displays generality"; Recital 60b helps clarify the concept, stating that "generality" implies the use of at least a billion parameters where the model is trained on "a large amount of data using self-supervision at scale") and has a remarkable capacity for expansion. Large generative AI models are an example of general purpose AI models (Recital 60c).

The obligations imposed on providers of general purpose AI models are limited, provided the models do not present systemic risk. Such obligations include (Art. 52c): (i) drawing up and keeping up to date technical documentation (as described in Annex IXa), to be made available to the national competent authorities as well as to providers of AI systems who intend to integrate the general purpose AI model into their AI systems; and (ii) taking certain measures to respect EU copyright legislation, namely putting in place a policy to identify reservations of rights and making publicly available a sufficiently detailed summary of the content used. Furthermore, providers should have an authorised representative in the EU (Art. 52ca).

The most important obligations are imposed, by Article 52d, on providers of general purpose AI models with systemic risk. The definition of AI models with systemic risk is established in Article 52a in rather broad and unsatisfactory terms: "high impact capabilities". Fortunately, there is a presumption in Article 52a.2 that helps: a model is presumed to have high impact capabilities "when the cumulative amount of compute used for its training measured in floating point operations (FLOPs) is greater than 10^25". The main additional obligations imposed on general purpose AI models with systemic risk are (i) to perform model evaluation (including adversarial testing), (ii) to assess and mitigate systemic risks at EU level, (iii) to document and report serious incidents and corrective measures, and (iv) to ensure an adequate level of cybersecurity.

Finally, a "general purpose AI system" is "an AI system which is based on a general purpose AI model, that has the capability to serve a variety of purposes" (Art. 3.44e). If a general purpose AI system can be used directly by deployers for at least one purpose that is classified as high-risk (Art. 57a and Art. 63a), a compliance evaluation will be required.
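Because the Article 52a.2 presumption is a purely quantitative test, it can be illustrated with simple arithmetic. The Python sketch below is illustrative only: the threshold (10^25 FLOPs) comes from the Act, but the "6 × parameters × training tokens" compute estimate is a common scaling heuristic from the machine-learning literature, not part of the regulation, and the model figures used are hypothetical.

```python
# Illustrative check of the AI Act's systemic-risk presumption (Art. 52a.2):
# a general purpose AI model is presumed to have "high impact capabilities"
# when its cumulative training compute exceeds 10^25 FLOPs.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # threshold stated in Art. 52a.2

def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Rough compute estimate using the common ~6 * N * D scaling heuristic
    (an assumption for illustration; the Act itself prescribes no formula)."""
    return 6.0 * parameters * training_tokens

def presumed_systemic_risk(parameters: float, training_tokens: float) -> bool:
    """True if the estimated training compute triggers the Art. 52a.2 presumption."""
    return estimated_training_flops(parameters, training_tokens) > SYSTEMIC_RISK_THRESHOLD_FLOPS

# Hypothetical model: 100 billion parameters trained on 10 trillion tokens
# -> 6e24 FLOPs, below the 1e25 threshold
print(presumed_systemic_risk(1e11, 1e13))  # False
```

Note that the presumption is rebuttable and based on actual cumulative training compute; the heuristic above merely shows how quickly today's largest training runs approach the threshold.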
13/03/2024
Crypto Tax Legislation & Law in the Netherlands
1. Is there a specific legislation issued for the taxation of crypto-assets or do general national tax law principles apply because the tax legislator has not regulated this so far? In the Netherlands...
12/03/2024
Revised European Commission Notice on the definition of the relevant market...
On 8 February 2024, the Commission adopted its revised Notice on the definition of the relevant market for the purposes of Union competition law. The objective of this Notice, which was the subject of...