Social Structures
Author: Gizem Yardimci, Early Career Researcher at ADVANCE CRT, PhD Student in Law, Maynooth University
The Draft Artificial Intelligence Act (Draft AI Act) for the European Union (EU) represents a significant milestone towards the regulation of technologies employing AI within the EU. Since the initial version of the Draft AI Act was released on 21 April 2021, it has been discussed extensively by academics, policymakers and professionals involved in decision-making processes within the EU. In May of this year, the European Parliament released a Draft Compromise Text with significant amendments to the Draft AI Act. The European Parliament is therefore in a position to launch ‘trilogues’ with the European Commission and the Council of the European Union. Overall, this development represents a formal step towards finalising the regulation of AI systems in the EU.
The main goal of the Draft AI Act is to improve the functioning of the internal market and to advance the creation of a digital single market, as indicated in the Digital Single Market Strategy.
AI systems are being used in an ever-wider range of fields and are increasingly integrated with other technologies. For this reason, the Draft AI Act adopts a risk-based approach which categorises AI systems into three levels of risk: ‘low-risk’, ‘high-risk’ and ‘unacceptable risk’. A risk-based approach has also been adopted by other countries to regulate technology, as seen in the National AI Research and Development Strategic Plan 2023 Update. It would therefore not be wrong to say that there is a race to regulate AI systems. As an early entrant into the regulatory discussion, the EU has some advantages. In general, the EU wants to lead by regulating this area, as it did previously with the General Data Protection Regulation (GDPR).
The EU wants to make sure that its values are reflected and implemented within the remit of the AI regulation. For this reason, the EU has evaluated a range of feedback and comments on the alignment between existing European legislation and the Draft AI Act. In summary, the Draft AI Act, under development since 2021, puts the EU in an advantageous position to create a robust protection mechanism against potential concerns regarding democracy, fundamental rights and the rule of law.
With regard to democracy, Cambridge Analytica, Brexit and the United States elections in 2016 all showed that we should be worried about the use of AI tools in democratic processes. A relatively recent and shocking article in The Guardian also reported that a hacking and disinformation team in Israel meddled in elections and influenced the outcomes in at least 20 countries.
The Draft Compromise Text gives emphasis to the use of AI systems for the ‘administration of justice and democratic processes’ and regulates it in more detail than the previous version of the Draft AI Act. In the previous version, AI systems were considered to fall within the high-risk category if they were used for the administration of justice and democratic processes. However, it was not clear what the concept of “democratic processes” included; the scope of this concept and the activities it covers were not determined until the Draft Compromise Text.
The Draft Compromise Text highlights some key points. First, it echoes the previous version of the Draft AI Act by classifying certain AI systems intended for the administration of justice and democratic processes as high-risk. Such AI systems are classified as high-risk because of the evidence that they can have an impact on democracy, the rule of law, and individual freedoms and rights. In addition, the risk of external interference with the ‘right to vote’ and the subsequent potential effects on ‘democratic processes, democracy and the rule of law’ are also considered therein.
Second, the text focuses on the use of AI systems to influence “the outcome of an election or referendum” and “the voting behaviour of natural persons”. The Draft AI Act covers the “intentional” use of AI systems to influence the outcome of any election or the voting behaviour of natural persons. This raises the question of how AI systems can be used in democratic processes without any intention to influence results or processes. Moreover, if AI systems are used during democratic processes, proving whether that use is intentional may become a crucial legal problem in the future. On the other hand, the Draft AI Act places ‘manipulative AI systems’ within the ‘unacceptable risk’ category. It seems that the Draft AI Act does not treat “influencing the voting behaviour of natural persons” as manipulation; instead, such AI systems are to be evaluated within the high-risk category. “The voting behaviour of natural persons” will probably be another concept that needs to be clarified.
Third, while it remains broad, ‘democratic processes’ refers to elections or referenda according to the document. However, it is unclear whether the phrase includes local elections, presidential elections, regional elections, and European Parliament elections. There is also a specific provision regulating the use of very large online platforms in democratic processes. When those platforms use AI systems in their recommender systems, they are subject to the Draft AI Act and are thus required to comply with many requirements, such as data governance, traceability, transparency, and human oversight. They must also comply with the related provisions of Regulation (EU) 2022/2065.
However, the only exception concerns AI systems to which natural persons are not ‘directly’ exposed and which are used to ‘organize, optimize or structure political campaigns’. This exception can be read in two different ways. First, if the exception covers organizing, optimizing or structuring how a political campaign is run before the official start of the electoral process, natural persons are unlikely to be directly exposed to that stage. On that reading, the exempted AI systems appear limited to the management period of a campaign. At the same time, AI systems may be used to define the qualities and details of the workforce and financial support a political campaign requires, drawing on data such as the number of voters living in a region, their age range and their economic power. Evaluating the results of previous elections in particular regions with AI is already a common approach in many projects and marketing strategies, including political campaigns. Second, AI systems known as ‘political bots’ may be used to organize, structure and optimize political campaigns while the election process is underway. In this case, even when bots do not spread any false information, users may still be affected indirectly: bots can create ‘echo chambers’ or bombard digital platforms with information and hashtags. As a result, natural persons may fall outside the scope of the provision on democratic processes in the Draft AI Act. A further question then arises about how best to address election candidates, who will be directly exposed to the organization, optimization and configuration of a political campaign.
In conclusion, the Draft AI Act marks an important step towards the regulation of AI technologies, particularly in relation to democratic processes. While some aspects remain broad and require clarification, the current text of the Draft AI Act does provide for transparency and oversight regarding the use of AI in democratic processes.