Countries within the European Union (EU) have agreed on provisional rules to govern the use of artificial intelligence (A.I.), including governments’ use of A.I. surveillance and how to regulate A.I. platforms such as ChatGPT.
The European Parliament has become the first major governing body to adopt laws that regulate the use of A.I.
The new rules require products such as ChatGPT and general-purpose A.I. systems to comply with transparency requirements before they are made available to the public and corporations.
These include complying with European copyright laws and publishing detailed summaries of the content used to train an A.I. system.
Going forward, A.I. companies will also be required to report to the European Commission any serious incidents, ensure cybersecurity protections, and report on their energy use.
As for governments, they can use real-time A.I. surveillance in public spaces only to prevent terrorist attacks and to locate people suspected of the most serious crimes.
The new laws ban cognitive behavioural manipulation, the untargeted scraping of facial images from the internet or video footage, and biometric categorisation systems that identify people based on their race and other markers.
Fines for violating the new rules will range from 7.5 million euros ($8.1 million U.S.) up to 35 million euros ($37.7 million U.S.).
The new A.I. legislation is expected to come into force in early 2024.
Governments around the world are trying to balance the advantages of A.I. technologies against the risks and need to safeguard the public.
Europe’s new A.I. rules come days after Google owner Alphabet (Nasdaq: GOOG) launched a new A.I. model called “Gemini,” which the company says is its most advanced system yet, outperforming human experts on benchmarks in areas such as math and physics.