Digital tech regulation 24


Don’t ask what computers can do, ask what they should do!

Digital humanism

Digital technologies have demonstrated great potential for human progress, exemplified by scientific and medical advances that are positively affecting human health and well-being. However, a plethora of critical issues is attributed to ongoing digitalization: monopolization and platform power, threats to democracy, privacy, bias, and sovereignty. We recognize and respect concerns, even fears, about the inexorable intertwining of humans and technology (or even the singularity, where technology becomes uncontrollable and irreversible), but editorially speaking we lean towards rejecting technological determinism. Life and its fate must be in our hands, the hands of humans, if democracy is to be sustainable. Many problems, as well as possibilities, of digitalization are not technical alone; they are related to and intertwined with structural, economic, societal, and political issues. Digital humanism examines this rich landscape of digitalization as a socioeconomic, sociotechnical, and cultural process.

Digital revolution

The coincidence and interaction of technological and political revolutions have resulted in epoch-changing social, political, and economic developments on three occasions over the past 500 years: the Renaissance and Reformation with the invention of print and weapons technology, the French and American revolutions with the industrial revolution, and the collapse of communism with the information revolution. Each boasts its specific attributes but follows a pattern that can assist us in understanding the implications of today’s technologies for our social and political structures. The speed with which technological innovation has provoked dramatic social change since the mid-1980s is unprecedented. However, technological change is full of consequences, intended and unintended, expected and unexpected, good and bad, invidious and insidious, to which humans must adapt.

Digital technology platforms

A platform is a business based on enabling value-creating interactions between external producers and consumers. The platform provides an open, participative infrastructure for these interactions and sets governance conditions for them. Platform governance encompasses the set of rules and decisions platforms make to determine who gets to participate in an ecosystem, how value gets divided, and which mechanisms are used to settle disputes, whether among users or between the platform and its users. Platforms are widespread, and most people use them in their daily lives.

A digital technology platform is a software-based infrastructure or ecosystem that facilitates the development, deployment, and management of digital services, applications, and solutions. These platforms provide a foundation for building and delivering a wide range of digital products and services, often leveraging cloud computing, APIs (Application Programming Interfaces), and other technologies. Overall, platforms play a crucial role in enabling innovation, agility, and digital transformation across industries, empowering organizations to create and deliver value in today's rapidly evolving digital landscape.

Digital technology infrastructure refers to the underlying hardware, software, networks, and facilities that support the operation and functionality of digital systems and services. It serves as the foundation upon which various digital technologies, applications, and solutions are built, deployed, and accessed. Infrastructure is essential for enabling digital transformation, innovation, and the delivery of digital services across various industries and sectors, including healthcare, finance, manufacturing, transportation, and entertainment.
It provides the backbone for interconnected and intelligent systems that drive efficiency, productivity, and competitiveness in today's digital economy.

A digital technology ecosystem refers to the interconnected network of organizations, individuals, technologies, and resources that facilitate the creation, delivery, and consumption of digital products and services. It encompasses a wide range of entities, including technology companies, developers, users, regulators, and other stakeholders, as well as the digital technologies themselves and the surrounding infrastructure and environment. Overall, a vibrant and dynamic digital technology ecosystem fosters innovation, competition, collaboration, and growth, driving the evolution and transformation of industries and societies in the digital age. It thrives on diversity, openness, and connectivity, creating opportunities for stakeholders to create, share, and capture value in an interconnected and rapidly changing landscape.
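The three governance dimensions named earlier (who participates, how value is divided, how disputes are settled) can be sketched as a toy data structure. Every name and number below is hypothetical, invented for illustration; this is not any real platform's API.

```python
from dataclasses import dataclass, field

@dataclass
class GovernancePolicy:
    """Toy model of platform governance: participation, value split, disputes."""
    participation_rules: list[str] = field(default_factory=list)  # who may join
    platform_revenue_share: float = 0.30  # fraction of each transaction the platform keeps
    dispute_mechanism: str = "internal review"  # how conflicts are resolved

    def producer_payout(self, transaction_value: float) -> float:
        """Value left for the external producer after the platform's cut."""
        return transaction_value * (1.0 - self.platform_revenue_share)

policy = GovernancePolicy(participation_rules=["verified identity", "content guidelines"])
print(policy.producer_payout(100.0))  # 70.0
```

The point of the sketch is that governance is a set of explicit, adjustable parameters: change the revenue share or the participation rules and the value division across the ecosystem changes with them.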
A technology ecosystem is a network of interconnected and interdependent diverse actors on the global market. They join together to support one another and to foster innovation in a sustainable way. The digital technology ecosystem encompasses a rich set of technologies, people, companies, and products and services that co-evolve, create new markets, and meet new demands. The ecosystem also includes the regulatory direction, standardization organizations, and the relevant trade associations. Constituent members make contributions that complement and reinforce the contributions of the other members, resulting in a complex integrated system. The pace of technological innovation is unprecedented. The first phase of development is characterized by visionaries who focus on identifying a particular seed of innovation, whether technologies or concepts, that will create products and services radically better than those available. Entrepreneurs experiment with new ideas and technologies, gather the support and participation of early adopters, seek out stakeholders and key allies, and pursue modest resources. In the second phase, the emerging ecosystem has established a foothold and is ready for rapid expansion. Stakeholders work intensively to scale up every element of the ecosystem. The ecosystem reaches structural maturity in the third phase, although it usually continues to grow remarkably. New entrants jockey for position, seek to establish themselves within the successful ecosystem, and often cause upheaval by introducing new concepts or business practices. Collaboration becomes more important, as leaders struggle to sustain an increasingly unwieldy ecosystem. In the fourth phase, changes occur in the regulatory environment, the economic environment, or user preferences. Alternative ecosystems and innovations emerge to take advantage of the changed circumstances, or sometimes directly drive them.

Regulation of digital technologies

Regulation of digital technologies is a complex and evolving field that involves various aspects such as privacy, cybersecurity, intellectual property, competition, and more. Overall, regulation is a multifaceted endeavor that requires collaboration among governments, industry, civil society, and other stakeholders to achieve desired outcomes while promoting innovation and protecting the interests of users.
Following years of a liberal approach to digital technologies, platforms, services, and markets, the EU has stepped up its action in recent years. The adoption of the General Data Protection Regulation (GDPR) (Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data and repealing Directive 95/46/EC (General Data Protection Regulation), OJ L 119, 1) in 2016 can be seen as a starting point for new regulations that are now enacted and proposed under the European Commission’s strategy “A Europe fit for the digital age.” This article will briefly summarize the contents of the GDPR as well as the Digital Services Act (DSA) (Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a Single Market for Digital Services and amending Directive 2000/31/EC (Digital Services Act), OJ L 277, 1), Digital Markets Act (DMA) (Regulation (EU) 2022/1925 of the European Parliament and of the Council of 14 September 2022 on contestable and fair markets in the digital sector and amending Directives (EU) 2019/1937 and (EU) 2020/1828 (Digital Markets Act), OJ L 265, 1), Data Governance Act (DGA) (Regulation (EU) 2022/868 of the European Parliament and of the Council of 30 May 2022 on European data governance and amending Regulation (EU) 2018/1724 (Data Governance Act), OJ L 152, 1), and the proposals for the Artificial Intelligence Act (AI Act) (Proposal for a Regulation of the European Parliament and of the Council laying down harmonized rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts, 21 April 2021, COM(2021) 206 final) as well as the Data Act (Proposal for a Regulation of the European Parliament and of the Council on harmonized rules on fair access to and use of data (Data Act), 23 February 2022, COM(2022) 68 final).

AI Index report 2024

Welcome to the seventh edition of the AI Index report. The 2024 Index is our most comprehensive to date and arrives at an important moment when AI’s influence on society has never been more pronounced. This year, we have broadened our scope to more extensively cover essential trends such as technical advancements in AI, public perceptions of the technology, and the geopolitical dynamics surrounding its development. Featuring more original data than ever before, this edition introduces new estimates on AI training costs, detailed analyses of the responsible AI landscape, and an entirely new chapter dedicated to AI’s impact on science and medicine.

EU AI Act

In April 2021, the European Commission proposed the first EU regulatory framework for AI, which uses a tiered structure based on risk: AI systems that can be used in different applications are analysed and classified according to the risk they pose to users, and the different risk levels will mean more or less regulation.
  • AI applications that pose an “unacceptable risk” would be banned; high-risk applications in such fields as finance, the justice system, and medicine would be subject to strict oversight. On 14 June 2023, the European Parliament passed its draft of this law, an important step, but only a step, in the process. The Parliament and the Council of the European Union have been proposing amendments to the Act since its 2021 inception. Three-way negotiations over the amendments will begin in July, with hopes of reaching an agreement on a final text by the end of 2023. If the legislation follows a typical timeline, the law will take effect two years later. In the meantime, European officials have suggested that companies worldwide could sign on to a voluntary AI code of conduct, drafted in July 2023: a set of rules outlining the norms, responsibilities, and proper practices of an individual party or an organization.
  • In the US, in October 2022, the White House issued a nonbinding Blueprint for an AI Bill of Rights, which framed AI governance as a civil rights issue, stating that citizens should be protected from algorithmic discrimination, privacy intrusion, and other harms. The Blueprint suggests a civil rights approach in hopes of creating flexible rules that could keep up with fast-changing technologies. In April 2023, Senate Majority Leader Chuck Schumer said he was circulating the draft of a “high level framework” for AI regulations.
  • Chinese regulations have already been put in force, starting with rules for recommendation algorithms that went into effect in March 2022, requiring transparency from the service providers and a way for citizens to opt out. Next, in January 2023, the Chinese government issued early rules governing generative AI, and further draft rules were proposed in April 2023. China’s initial set of rules for generative AI required websites to label AI-generated content, banned the production of fake news, and required companies to register their algorithms and disclose information about training data and performance. The draft rules go even further, requiring that AI companies verify the veracity of all the data used to train their models. 
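The tiered, risk-based structure described for the EU framework can be sketched as a small lookup. The tier names follow the text; the example use cases and the function below are purely illustrative, not the Act's legal classification.

```python
from enum import Enum

class RiskTier(Enum):
    """Risk tiers of the EU's tiered approach, with illustrative consequences."""
    UNACCEPTABLE = "banned from the EU market"
    HIGH = "strict oversight before and after market entry"
    LIMITED = "transparency obligations (e.g. disclose AI-generated content)"
    MINIMAL = "no new obligations"

# Hypothetical use-case mapping for demonstration only.
EXAMPLE_TIERS = {
    "social scoring by public agencies": RiskTier.UNACCEPTABLE,
    "credit scoring for access to essential services": RiskTier.HIGH,
    "chatbot interacting with users": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def obligations(use_case: str) -> str:
    # Unknown systems default to MINIMAL in this sketch; the Act itself
    # decides the tier via its annexes, not via a string lookup.
    return EXAMPLE_TIERS.get(use_case, RiskTier.MINIMAL).value

print(obligations("social scoring by public agencies"))  # banned from the EU market
```

The design point the sketch captures is that the consequence attaches to the tier, not to the individual application: classify once, and the obligations follow.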

On 14 June 2023, the European Parliament voted to approve its negotiating position on the AI Act. The vote passed with an overwhelming majority and has been heralded as one of the world’s most important developments in AI regulation, but the final version is likely to look a bit different. The European system is a bit complicated: next, members of the European Parliament will have to thrash out details with the Council of the European Union and the EU’s executive arm, the European Commission, before the draft rules become legislation. The final legislation will be a compromise between three different drafts from the three institutions, which vary a lot. It will likely take around two years before the laws are actually implemented. Structured similarly to the EU’s Digital Services Act, a legal framework for online platforms, the AI Act takes a “risk-based approach,” introducing restrictions based on how dangerous lawmakers predict an AI application could be. Businesses will also have to submit their own risk assessments about their use of AI. Some applications of AI will be banned entirely if lawmakers consider the risk “unacceptable,” while technologies deemed “high risk” will face new limitations on their use and requirements around transparency. Here are some of the major implications:
  • Ban on emotion-recognition AI. The European Parliament’s draft text bans the use of AI that attempts to recognize people’s emotions in policing, schools, and workplaces. Makers of emotion-recognition software claim that AI is able to determine when a student is not understanding certain material, or when a driver of a car might be falling asleep. The use of AI to conduct facial detection and analysis has been criticized for inaccuracy and bias, but it has not been banned in the draft text from the other two institutions, suggesting there’s a political fight to come.
  • Ban on real-time biometrics and predictive policing in public spaces. This will be a major legislative battle, because the various EU bodies will have to sort out whether, and how, the ban is enforced in law. Policing groups are not in favor of a ban on real-time biometric technologies, which they say are necessary for modern policing. Some countries, like France, are actually planning to increase their use of facial recognition.
  • Ban on social scoring. Social scoring by public agencies, or the practice of using data about people's social behavior to make generalizations and profiles, would be outlawed. That said, the outlook on social scoring, commonly associated with China and other authoritarian governments, isn’t really as simple as it may seem. The practice of using social behavior data to evaluate people is common in doling out mortgages and setting insurance rates, as well as in hiring and advertising. 
  • New restrictions for gen AI. This draft is the first to propose ways to regulate generative AI, and ban the use of any copyrighted material in the training set of large language models like OpenAI’s GPT-4. OpenAI has already come under the scrutiny of European lawmakers for concerns about data privacy and copyright. The draft bill also requires that AI generated content be labeled as such. That said, the European Parliament now has to sell its policy to the European Commission and individual countries, which are likely to face lobbying pressure from the tech industry.
  • New restrictions on recommendation algorithms on social media. The new draft assigns recommender systems to a “high risk” category, which is an escalation from the other proposed bills. This means that if it passes, recommender systems on social media platforms will be subject to much more scrutiny about how they work, and tech companies could be more liable for the impact of user-generated content.

In December 2023, the EU Council and Parliament struck a deal on the AI Act. Following three-day “marathon” talks, the Council presidency and the European Parliament’s negotiators reached a provisional agreement on the proposal on harmonised rules on artificial intelligence (AI), the so-called Artificial Intelligence Act. The draft regulation aims to ensure that AI systems placed on the European market and used in the EU are safe and respect fundamental rights and EU values. This landmark proposal also aims to stimulate investment and innovation in AI in Europe. The European Parliament will vote on the AI Act proposals early next year, but any legislation will not take effect until at least 2025.
On 21 January 2024, the final text was circulated to EU Member States. It is rapidly progressing towards a vote by COREPER on 2 February, after a discussion in the Telecom Working Party.
AI definition

We now have a final definition of an AI system: a machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as content, predictions, recommendations, or decisions that can influence physical or virtual environments.

Timelines

Once entered into force, the AI Act will apply to prohibited systems from six months after the commencement date, to GPAI from 12 months, and to high-risk AI from 36 months. Codes of practice must be ready at the latest nine months from when the AI Act entered into force.

Conclusion

The AI Act marks a significant shift in the regulatory environment for organisations involved in AI and IP law, particularly within the Irish and EU contexts. It necessitates a proactive approach from organisations to ensure ethical compliance, especially in light of prohibitions on certain AI practices such as manipulative methods and the exploitation of vulnerabilities. The careful assessment of high-risk AI systems and general-purpose AI models is essential, not only for compliance but also for shaping product development strategies and managing systemic risks. The advent of sophisticated deep fakes and the requirement for transparency in AI-generated content bring new challenges, particularly for the media and entertainment sectors. Human oversight becomes increasingly crucial in ensuring the accountability and reliability of high-risk AI systems, requiring a significant investment in training and expertise. Additionally, organisations need to maintain transparent communication about AI deployment in workplaces, adhering to both European Union and national laws to build trust and avert legal disputes. Simplified technical documentation for SMEs and start-ups does not lessen the importance of accurate compliance, where advisory support can be highly beneficial.
Robust data governance practices are imperative in maintaining data integrity and trust in AI systems. Organisations must also be vigilant about the substantial fines for non-compliance and stay updated on evolving standards to develop comprehensive compliance strategies. Preparing for the AI Act’s timelines is critical for strategic planning, with early compliance efforts recommended, especially regarding prohibited systems and general-purpose AI models. Overall, the AI Act presents multifaceted challenges and opportunities, demanding careful navigation and strategic positioning for organisations to thrive in this new, AI-driven regulatory landscape.
AI Act

The AI Act is a flagship legislative initiative with the potential to foster the development and uptake of safe and trustworthy AI across the EU’s single market by both private and public actors. The main idea is to regulate AI based on its capacity to cause harm to society, following a ‘risk-based’ approach: the higher the risk, the stricter the rules. As the first legislative proposal of its kind in the world, it can set a global standard for AI regulation in other jurisdictions, just as the GDPR has done, thus promoting the European approach to tech regulation on the world stage.

The main elements of the provisional agreement

Compared to the initial Commission proposal, the main new elements of the provisional agreement can be summarised as follows:
  • rules on high-impact general-purpose AI models that can cause systemic risk in the future, as well as on high-risk AI systems
  • a revised system of governance with some enforcement powers at EU level
  • extension of the list of prohibitions but with the possibility to use remote biometric identification by law enforcement authorities in public spaces, subject to safeguards
  • better protection of rights through the obligation for deployers of high-risk AI systems to conduct a fundamental rights impact assessment prior to putting an AI system into use.

In more concrete terms, the provisional agreement covers the following aspects.

Definitions and scope

To ensure that the definition of an AI system provides sufficiently clear criteria for distinguishing AI from simpler software systems, the compromise agreement aligns the definition with the approach proposed by the OECD. The provisional agreement also clarifies that the regulation does not apply to areas outside the scope of EU law and should not, in any case, affect member states’ competences in national security or any entity entrusted with tasks in this area. Furthermore, the AI Act will not apply to systems used exclusively for military or defence purposes. Similarly, the agreement provides that the regulation would not apply to AI systems used for the sole purpose of research and innovation, or to people using AI for non-professional reasons.

Classification of AI systems as high-risk and prohibited AI practices

The compromise agreement provides for a horizontal layer of protection, including a high-risk classification, to ensure that AI systems that are not likely to cause serious fundamental rights violations or other significant risks are not captured. AI systems presenting only limited risk would be subject to very light transparency obligations, for example disclosing that content was AI-generated so users can make informed decisions on further use. A wide range of high-risk AI systems would be authorised, but subject to a set of requirements and obligations to gain access to the EU market.
These requirements have been clarified and adjusted by the co-legislators in such a way that they are more technically feasible and less burdensome for stakeholders to comply with, for example as regards the quality of data, or in relation to the technical documentation that should be drawn up by SMEs to demonstrate that their high-risk AI systems comply with the requirements. Since AI systems are developed and distributed through complex value chains, the compromise agreement includes changes clarifying the allocation of responsibilities and roles of the various actors in those chains, in particular providers and users of AI systems. It also clarifies the relationship between responsibilities under the AI Act and responsibilities that already exist under other legislation, such as the relevant EU data protection or sectoral legislation. For some uses of AI, risk is deemed unacceptable and, therefore, these systems will be banned from the EU. The provisional agreement bans, for example, cognitive behavioural manipulation, the untargeted scraping of facial images from the internet or CCTV footage, emotion recognition in the workplace and educational institutions, social scoring, biometric categorisation to infer sensitive data such as sexual orientation or religious beliefs, and some cases of predictive policing for individuals.

General-purpose AI systems and foundation models

New provisions have been added to take into account situations where AI systems can be used for many different purposes (general-purpose AI), and where general-purpose AI technology is subsequently integrated into another high-risk system. The provisional agreement also addresses the specific cases of general-purpose AI (GPAI) systems. Specific rules have also been agreed for foundation models: large systems capable of competently performing a wide range of distinctive tasks, such as generating video, text, and images, conversing in natural language, computing, or generating computer code.
The provisional agreement provides that foundation models must comply with specific transparency obligations before they are placed on the market. A stricter regime was introduced for ‘high-impact’ foundation models. These are foundation models trained with large amounts of data and with advanced complexity, capabilities, and performance well above the average, which can disseminate systemic risks along the value chain.

Transparency and protection of fundamental rights

The provisional agreement provides for a fundamental rights impact assessment before a high-risk AI system is put on the market by its deployers. It also provides for increased transparency regarding the use of high-risk AI systems. Notably, some provisions of the Commission proposal have been amended to indicate that certain users of a high-risk AI system that are public entities will also be obliged to register in the EU database for high-risk AI systems. Moreover, newly added provisions put emphasis on an obligation for users of an emotion recognition system to inform natural persons when they are being exposed to such a system.

Measures in support of innovation

With a view to creating a legal framework that is more innovation-friendly and to promoting evidence-based regulatory learning, the provisions concerning measures in support of innovation have been substantially modified compared to the Commission proposal. Notably, it has been clarified that AI regulatory sandboxes, which are supposed to establish a controlled environment for the development, testing, and validation of innovative AI systems, should also allow for testing of innovative AI systems in real-world conditions. Furthermore, new provisions have been added allowing testing of AI systems in real-world conditions, under specific conditions and safeguards.
To alleviate the administrative burden for smaller companies, the provisional agreement includes a list of actions to be undertaken to support such operators and provides for some limited and clearly specified derogations.

Entry into force

The provisional agreement provides that the AI Act should apply two years after its entry into force, with some exceptions for specific provisions.

Next steps

Following the provisional agreement, work will continue at technical level in the coming weeks to finalise the details of the new regulation. The presidency will submit the compromise text to the member states’ representatives (Coreper) for endorsement once this work has been concluded. The entire text will need to be confirmed by both institutions and undergo legal-linguistic revision before formal adoption by the co-legislators.

EU AI Act: first regulation on artificial intelligence

AI Act: different rules for different risk levels

Deadlines:
  • prohibitions (6 months)
  • GPAI (12 months)
  • high-risk AI systems (Annex III 24 months, Annex II 36 months)
  • all other parts (24 months)

The new rules establish obligations for providers and users depending on the level of risk from artificial intelligence. While many AI systems pose minimal risk, they need to be assessed.

Unacceptable risk

Unacceptable-risk AI systems are systems considered a threat to people and will be banned. They include:
  • Cognitive behavioural manipulation of people or specific vulnerable groups: for example voice-activated toys that encourage dangerous behaviour in children
  • Social scoring: classifying people based on behaviour, socio-economic status or personal characteristics
  • Real-time and remote biometric identification systems, such as facial recognition
Some exceptions may be allowed: for instance, “post” remote biometric identification systems, where identification occurs after a significant delay, will be allowed to prosecute serious crimes, but only after court approval.

High risk

AI systems that negatively affect safety or fundamental rights will be considered high risk and will be divided into two categories:
1) AI systems that are used in products falling under the EU’s product safety legislation. This includes toys, aviation, cars, medical devices and lifts.
2) AI systems falling into eight specific areas that will have to be registered in an EU database:
  • Biometric identification and categorisation of natural persons
  • Management and operation of critical infrastructure
  • Education and vocational training
  • Employment, worker management and access to self-employment
  • Access to and enjoyment of essential private services and public services and benefits
  • Law enforcement
  • Migration, asylum and border control management
  • Assistance in legal interpretation and application of the law.
All high-risk AI systems will be assessed before being put on the market and also throughout their lifecycle.

Generative AI

Generative AI, like ChatGPT, would have to comply with transparency requirements:
  • Disclosing that the content was generated by AI
  • Designing the model to prevent it from generating illegal content
  • Publishing summaries of copyrighted data used for training
Limited risk

Limited-risk AI systems should comply with minimal transparency requirements that would allow users to make informed decisions. After interacting with the applications, the user can then decide whether they want to continue using them. Users should be made aware when they are interacting with AI. This includes AI systems that generate or manipulate image, audio or video content, for example deepfakes.
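The phase-in deadlines listed above (6, 12, 24, and 36 months after entry into force) are simple calendar arithmetic. A minimal sketch, using a hypothetical entry-into-force date chosen only to demonstrate the calculation:

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Shift a date by whole calendar months (day clamped to 28 for safety)."""
    total = d.year * 12 + (d.month - 1) + months
    return date(total // 12, total % 12 + 1, min(d.day, 28))

# Phase-in offsets, in months after entry into force, as listed in the text.
PHASE_IN_MONTHS = {
    "prohibited systems": 6,
    "GPAI obligations": 12,
    "high-risk AI (Annex III)": 24,
    "high-risk AI (Annex II)": 36,
    "all other parts": 24,
}

entry_into_force = date(2024, 8, 1)  # hypothetical date for illustration
for provision, months in PHASE_IN_MONTHS.items():
    print(f"{provision}: applies from {add_months(entry_into_force, months)}")
```

The practical takeaway for compliance planning is that each obligation has its own start date relative to one anchor, so fixing the anchor fixes the whole schedule.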
The AIA is formulated as sector-specific legislation, distinct from the e-Commerce Directive, the DSA, and the DSM. The act applies to providers placing AI on the EU market, but also to users in the EU and to providers and users in third countries if the output is used in the EU. The AIA contains substantial risk-assessment elements. The proposal comprises three categories of systems. Chapter II covers systems that are entirely prohibited because of their high risk; Article 5 expressly prohibits four such systems owing to the danger their use poses. Chapter III covers high-risk systems that are not prohibited as such but are subject to significant legal requirements. Finally, Chapters 4 and 5 apply to all systems and introduce mandatory transparency and measures to support innovation.
13 March 2024. Following the European Parliament’s plenary session vote (523 for, 46 against, 49 abstentions), the AI Act will undergo final linguistic approval by lawyer-linguists in April, a step considered a formality, before being published in the Official Journal. It will then come into effect after 21 days, with the prohibited-systems provisions in force six months later, by the end of 2024. Other provisions will come in over the next 2-3 years. The AI Act imposes significant penalties for non-compliance with the prohibited-systems provisions, with fines up to €35 million or 7% of global turnover. What’s the message? Make sure you aren’t providing or using prohibited AI systems in the EU by the end of the year.
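The penalty ceiling just described is the greater of a fixed floor and a turnover percentage, a structure familiar from the GDPR. A minimal sketch; reading the ceiling as "whichever is higher" is this illustration's assumption, not a quote from the Act:

```python
# Penalty ceiling per the text: up to EUR 35 million or 7% of global
# annual turnover. The max() reading mirrors how comparable EU ceilings
# (e.g. the GDPR's) are applied, and is an assumption of this sketch.
def max_fine_eur(global_annual_turnover_eur: float) -> float:
    return max(35_000_000.0, 0.07 * global_annual_turnover_eur)

# EUR 200m turnover: 7% is 14m, so the 35m floor applies.
print(max_fine_eur(200_000_000))
# EUR 1bn turnover: 7% is 70m, above the floor.
print(max_fine_eur(1_000_000_000))
```

Note how the fixed floor makes the ceiling bite hardest for smaller firms: below EUR 500 million turnover, the €35 million figure dominates.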
The following will be prohibited AI systems under the AI Act by year end:
  • Manipulative and deceptive practices.  AI systems that use subliminal techniques to materially distort a person’s decision-making capacity, leading to significant harm, are banned. This includes systems that manipulate behaviour or decisions in a way that the individual would not have otherwise made.
  • Exploitation of vulnerabilities.  The Act prohibits AI systems that target individuals or groups based on age, disability, or socio-economic status to distort behaviour in harmful ways.
  • Biometric categorisation.  It outlaws AI systems that categorise individuals based on biometric data to infer sensitive information like race, political opinions, or sexual orientation. This prohibition does not cover any labelling or filtering of lawfully acquired biometric datasets, such as images. There are also exceptions for law enforcement.
  • Social scoring.  AI systems designed to evaluate individuals or groups over time based on their social behaviour or predicted personal characteristics, leading to detrimental treatment, are banned.
  • Real-time biometric identification.  The use of real-time remote biometric identification systems in publicly accessible spaces for law enforcement is heavily restricted, with allowances only under narrowly defined circumstances that require judicial or independent administrative approval.
  • Risk assessment in criminal offences.  The Act forbids AI systems that assess the risk of individuals committing criminal offences based solely on profiling, except when supporting human assessment already based on factual evidence.
  • Facial recognition databases.  AI systems that create or expand facial recognition databases through untargeted scraping of images are prohibited.
  • Emotion inference in workplaces and educational institutions.  The use of AI to infer emotions in sensitive environments like workplaces and schools is banned, barring exceptions for medical or safety reasons.
The Act mandates prior authorisation for the use of 'real-time' remote biometric identification systems by law enforcement.
May 2024.  The AI Act enters into force (adopted in the EU on 13 March 2024) as the first legal act in the world to regulate artificial intelligence in a binding way, with the aim of securing societal progress while respecting rights and ensuring the safety of individuals.
  • preamble
  • contents: I General provisions; II Prohibited AI practices; III High-risk AI systems; IV Transparency obligations for providers of AI systems; VIIIa General-purpose AI models; V Measures in support of innovation; VI Governance; VII EU database for high-risk AI systems listed in Annex III; VIII Post-market monitoring, information sharing and market surveillance; IX Codes of conduct; X Confidentiality and penalties; XI Delegation of powers and committee procedure; XII Final provisions
  • annexes

Having taken a position on the ethical principles that technology must follow, the AI Act opens the regulatory phase: turning ethical principles into norms and creating concrete rules. The AI Act grounds the digital space of the AI ecosystem on three pillars:
  • innovative development
  • transparency and accountability
  • a system for assessing the level of risk to society and to individual rights
The Act classifies AI applications according to the assessed level of risk, and then sorts their various forms by functionality into four categories:
  • Unacceptable risk – AI practices considered too harmful to be allowed are prohibited, such as manipulating people’s behaviour to their detriment (deceptive techniques that distort behaviour and impair informed decision-making), systems that unfairly categorise individuals (social scoring), and real-time facial recognition software in public places.
  • High risk – divided into two subcategories: safety-component systems (e.g. medical devices) and systems for sensitive uses (biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, justice). The category covers AI systems that could significantly affect human safety or fundamental rights. These systems require strict compliance and safety assessment, along with oversight, data governance, record keeping, cybersecurity, and a system for reporting serious incidents.
  • Limited risk – AI systems that interact directly with users, such as chatbots. They must be transparent about being operated by AI, so that users know they are not communicating with humans.
  • Minimal risk – most AI applications fall into this category, where freedom to innovate is preserved with minimal regulatory intervention. These systems pose a negligible risk to individuals’ rights or safety and are used with minimal oversight.
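The four tiers can be pictured as a simple lookup over use-case categories. The tags and the mapping below are illustrative assumptions of ours, not taken from the Act's annexes; a real assessment depends on legal analysis of the specific system:

```python
# Hypothetical sketch of the AI Act's four-tier risk classification.
# Use-case tags and their placement are illustrative, not the Act's wording.
UNACCEPTABLE = {"social_scoring", "subliminal_manipulation",
                "realtime_public_face_recognition"}
HIGH = {"biometrics", "critical_infrastructure", "education", "employment",
        "essential_services", "law_enforcement", "migration", "justice",
        "safety_component"}
LIMITED = {"chatbot"}  # systems interacting directly with users

def risk_tier(use_case: str) -> str:
    """Return the illustrative AI Act risk tier for a use-case tag."""
    if use_case in UNACCEPTABLE:
        return "unacceptable (prohibited)"
    if use_case in HIGH:
        return "high (strict compliance obligations)"
    if use_case in LIMITED:
        return "limited (transparency obligations)"
    return "minimal (no specific obligations)"

print(risk_tier("social_scoring"))  # unacceptable (prohibited)
print(risk_tier("employment"))      # high (strict compliance obligations)
print(risk_tier("spam_filter"))     # minimal (no specific obligations)
```

The point of the sketch is the default: anything not explicitly escalated falls into the minimal tier, mirroring how most AI applications remain largely unregulated under the Act.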

Neophodno je da organizacije i kompanije koje razvijaju i upotrebljavaju VI sisteme i softver, u potpunosti operacionalizuju odgovornu VI (Responsible AI) koja podrazumeva preduzimanje planiranih radnji dizajniranja, primene i upotrebe VI za stvaranje vrednosti i izgradnju poverenja štiteći korisnike, sugrađane, društvo, od potencijalnih rizika VI.
Fair Artificial Intelligence.  Neophodno je da kompanije koje razvijaju sisteme zasnovane na VI primenjuju FairAI (MIM5 Minimal Interoperability Mechanisms 5) kao i sve tehničke mere sajberbezbednosti NIS2 direktive o sajber-bezbednosti.
Council of Europe's AI Convention (2023–2024) (Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law) aims to protect human rights against the harms of AI. The AI Convention may become the first legally-binding international treaty on AI.

EU Digital Markets Act (DMA)

Some large online platforms act as gatekeepers in digital markets. The Digital Markets Act aims to ensure that these platforms behave in a fair way online. Together with the Digital Services Act, the Digital Markets Act is one of the centrepieces of the European digital strategy. The DMA establishes a set of narrowly defined objective criteria for qualifying a large online platform as a so-called gatekeeper. This allows the DMA to remain well targeted to the problem that it aims to tackle as regards large, systemic online platforms. These criteria will be met if a company:
  • has a strong economic position, significant impact on the internal market and is active in multiple EU countries
  • has a strong intermediation position, meaning that it links a large user base to a large number of businesses
  • has (or is about to have) an entrenched and durable position in the market; this is presumed where the company met the two criteria above in each of the last three financial years
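The three criteria are cumulative, which can be sketched as a simple predicate. The field names and boolean inputs below are our simplification; the DMA itself attaches quantitative presumption thresholds (turnover, user counts) to each criterion:

```python
from dataclasses import dataclass

# Illustrative model of the DMA's three cumulative gatekeeper criteria.
@dataclass
class Platform:
    strong_economic_position: bool       # significant impact on the internal market, active in multiple EU countries
    strong_intermediation_position: bool  # links a large user base to many businesses
    years_meeting_both: int               # consecutive financial years meeting both criteria above

def is_gatekeeper(p: Platform) -> bool:
    """A platform qualifies only if all three criteria hold together."""
    entrenched = p.years_meeting_both >= 3  # durable position over the last three financial years
    return (p.strong_economic_position
            and p.strong_intermediation_position
            and entrenched)

print(is_gatekeeper(Platform(True, True, 3)))  # True
print(is_gatekeeper(Platform(True, True, 2)))  # False: position not yet entrenched
```

Because the criteria are conjunctive, failing any one of them (e.g. operating in a single EU country) keeps a platform outside the gatekeeper regime.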

The new rules will establish obligations for gatekeepers, “do’s” and “don’ts” they must comply with in their daily operations.
The Digital Markets Act (DMA) was proposed at the end of 2020 as a complement to the DSA, to enable competition law to be applied on the Internet in greater detail.

EU Data Governance Act (DGA)

The DGA facilitates data sharing across sectors and EU countries, in order to leverage the potential of data for the benefit of EU citizens and businesses. The DGA seeks to increase trust in data sharing, strengthen mechanisms to increase data availability, and overcome technical obstacles to the reuse of data. The DGA will also support the set-up and development of common European data spaces in strategic domains, involving both private and public players, in sectors such as health, environment, energy, agriculture, mobility, finance, manufacturing, public administration and skills. The Data Governance Act entered into force on 23 June 2022.
The Data Governance Act was proposed in 2020 with the aim of improving:
  • the availability of public-sector data for reuse
  • data sharing among businesses, against remuneration in any form
  • the use of personal data with the help of personal-data-sharing intermediaries
  • the use of data on altruistic grounds

EU Data Act

On November 27, 2023, the Council of the EU formally adopted the Data Act, following the European Parliament’s endorsement of November 9, which concludes the EU legislative process.  The Data Act will shortly be published in the Official Journal and become enforceable in 2025. The Data Act is designed to require entities to make data, including non-personal data, accessible to other parties, so that it can be re-used for new purposes.  The Data Act’s obligations are broad and may require significant engineering work to re-design products to ensure compliance.  The Data Act covers both personal and non-personal data that is obtained, generated, or collected by connected products and/or their components, and related digital services.  It will apply to a variety of entities, including 
  • manufacturers of connected products (i.e., physical products capable of collecting or generating data concerning their use or environment, and of communicating product data), 
  • suppliers of related services (i.e., digital services, including software, integrated into or associated with a connected product); 
  • “data holders” that have the right or obligation to use or make data available;
  • providers of data processing services.

The regulation sets up new rules on who can access and use data generated in the EU across all economic sectors. It aims to:
  • ensure fairness in the allocation of value from data among actors in the digital environment
  • stimulate a competitive data market
  • open opportunities for data-driven innovation, and
  • make data more accessible to all
The new law also aims to ease switching between providers of data processing services, puts in place safeguards against unlawful data transfer, and provides for the development of interoperability standards for data to be reused between sectors. The Data Act will give both individuals and businesses more control over their data through a reinforced portability right: copying or transferring data easily across different services, where the data are generated through smart objects, machines, and devices. The new law will empower consumers and companies by giving them a say on what can be done with the data generated by their connected products.
The Data Act sits alongside a growing cast of existing and planned EU data-related laws, such as the GDPR (especially in relation to the right of access and data portability), the Data Governance Act, the proposed European Health Data Space, and the Digital Markets Act.

EU Health Data Space

On March 3, 2022, a version of the proposal for a regulation setting up the European Health Data Space was published. The draft regulation will set up a common framework across EU Member States for the sharing and exchange of quality health data (such as electronic health records, patient registries and genomic data).  The proposal is a lengthy document (126 pages, excluding annexes) that contains within it a number of different sets of rules.  Key requirements that are likely to be of interest to organizations in the life sciences sector are that the draft regulation proposes to:
  • create new patient rights over their electronic health data, and set out rules regarding use of electronic health data for primary care;
  • establish a pre-market conformity assessment requirement for electronic health record systems (“EHR systems”);
  • set out rules that apply to digital health services and wellness apps; and
  • introduce a harmonized scheme for providing access to electronic health data for secondary use.

EU Directive Network and information security (NIS2)

December 2022 - the NIS 2 Directive was published in the Official Journal of the European Union as Directive (EU) 2022/2555. The full name is Directive (EU) 2022/2555 of the European Parliament and of the Council of 14 December 2022 on measures for a high common level of cybersecurity across the Union, amending Regulation (EU) No 910/2014 and Directive (EU) 2018/1972, and repealing Directive (EU) 2016/1148 (NIS 2 Directive). NIS2 aims to bring the EU up to speed and establish a higher level of cybersecurity and resilience within organizations of the European Union. The new Directive brings more sectors into scope and focuses on providing guidelines to ensure uniform transposition into local law across EU member states. NIS2 replaces the NIS 1 directive and broadens the scope to include more entities, such as chemical and medical device manufacturers, food processors, and social network providers, which were previously not covered by NIS.
The NIS2 Directive succeeds the directive on measures for a high common level of cybersecurity in the EU. It aims to improve cybersecurity by expanding both the set of obligations and the set of obligated entities. A larger number of companies that are key to the proper functioning of society (telecommunications, social platforms, public administration) are now required to implement security measures.

EU General Data Protection Regulation (GDPR)

The right to privacy and the protection of personal data is a fundamental right.
  • Personal data are all data relating to a person by which that person can be identified (directly or indirectly) and their privacy compromised.
  • Privacy is not a right to secrecy, nor to control, but a right to an appropriate flow of personal data. Each person can, depending on the situation and context, personally assess what they share and with whom in the digital environment. Put differently, everyone has the right to know how and for what purposes their data are used, who stores them and for how long, and who has access to them, as well as the right to request deletion of personal data or correction of inaccurate data.

The GDPR has applied since 25 May 2018 in the countries of the European Union; it concerns personal data and sets a new standard for the protection of personal data and the privacy of citizens. The Regulation places much greater emphasis on the protection of special categories of personal data:
  • health information (personal data relating to an individual's physical and mental health, including the provision of health services)
  • biometric data (personal data obtained by specific technical processing, relating to a person's physical, physiological or behavioural characteristics, which allow or confirm that person's unique identification, e.g. facial images or fingerprints)
  • genetic data (personal data relating to a person's inherited or acquired genetic characteristics which give unique information about the person's physiology or health and result from the analysis of a biological sample from that person)

EU-US Data Privacy Framework

July 2023. The European Commission adopted its adequacy decision on the EU-U.S. Data Privacy Framework, entering into force with immediate effect.
Key principles:
  • Based on the adequacy decision, personal data will be able to flow freely and safely between the EU and participating US companies
  • A new set of rules and binding safeguards to limit access to data by US intelligence authorities to what is necessary and proportionate to protect national security; US intelligence agencies will adopt procedures to ensure effective oversight of new privacy and civil liberties standards
  • A new two-tier redress system to investigate and resolve complaints of Europeans on access of data by US intelligence authorities, which includes a Data Protection Review Court
  • Strong obligations for companies processing data transferred from the EU, which includes the requirement to self-certify that they adhere to the standards through the US Department of Commerce
  • Specific monitoring and review mechanisms
Benefits of the framework:
  • Adequate protection of Europeans’ data transferred to the US, addressing the requirements of the European Court of Justice
  • Safe and secure data flows
  • Durable and reliable legal basis
  • Competitive digital economy and economic cooperation
  • Continued data flows underpinning €900 billion in cross-border commerce every year
2024. First periodic review of the framework.

EU Directive on payment services in the internal market

June 28, 2023. The European Parliament and the Council published a proposal to revise the payments framework as PSD3 (the directive on payment services and electronic money services). Besides improving the Payment Services Directive PSD2 (2015/2366) with regard to payment methods and increased user protection, the proposal also contains an initiative to integrate the directive on electronic money institutions (Second Electronic Money Directive). PSD3's expanded scope also anticipates forthcoming Digital Identity Wallet solutions, which allow a customer to be identified when making a payment. Simultaneously with PSD3, the European Parliament and the Council published a legislative proposal defining the legal framework for issuing a digital euro. The digital euro is conceived as electronic money for fast, simple and secure everyday payments, complementing cash (banknotes and coins) rather than replacing it. It would be issued by the European Central Bank (ECB) and should be available to all citizens and businesses as a CBDC (Central Bank Digital Currency), the digital euro.

Ethically Aligned Design (EAD1)

Prioritizing ethical and responsible artificial intelligence has become a widespread goal for society. Important issues of transparency, accountability, algorithmic bias, and value systems are being directly addressed in the design and implementation of autonomous and intelligent systems (A/IS). While this is an encouraging trend, a key question still facing technologists, manufacturers, and policymakers alike is how to assess, understand, measure, monitor, safeguard, and improve the well-being impacts of A/IS on humans.
EAD will provide pragmatic and directional insights and recommendations, serving as a key reference for the work of technologists, educators and policymakers in the coming years. EAD sets forth scientific analysis and resources, high-level principles, and actionable recommendations. It offers specific guidance for standards, certification, regulation or legislation for design, manufacture, and use of A/IS that provably aligns with and improves holistic societal well-being.
As the use and impact of autonomous and intelligent systems (A/IS) become pervasive, we need to establish societal and policy guidelines in order for such systems to remain human-centric, serving humanity’s values and ethical principles. These systems must be developed and should operate in a way that is beneficial to people and the environment, beyond simply reaching functional goals and addressing technical problems. This approach will foster the heightened level of trust between people and technology that is needed for its fruitful use in our daily lives.
  • EAD Pillars of Conceptual Framework fall broadly into three areas, reflecting anthropological, political, and technical aspects: universal human values, political self-determination and data agency, and technical dependability.
  • EAD General Principles have emerged through the continuous work of dedicated, open communities in a multi-year, creative, consensus-building process. They articulate high-level principles that should apply to all types of autonomous and intelligent systems (A/IS). Created to guide behavior and inform standards and policy making, the General Principles define imperatives for the ethical design, development, deployment, adoption, and decommissioning of autonomous and intelligent systems. The Principles consider the role of A/IS creators, i.e., those who design and manufacture, of operators, i.e., those with expertise specific to use of A/IS, other users, and any other stakeholders or affected parties.

The ethical and values-based design, development, and implementation of autonomous and intelligent systems should be guided by the following General Principles:
  • Human rights.  A/IS shall be created and operated to respect, promote, and protect internationally recognized human rights.
  • Well-being.  A/IS creators shall adopt increased human well-being as a primary success criterion for development.
  • Data agency.  A/IS creators shall empower individuals with the ability to access and securely share their data, to maintain people’s capacity to have control over their identity.
  • Effectiveness.  A/IS creators and operators shall provide evidence of the effectiveness and fitness for purpose of A/IS.
  • Transparency.  The basis of a particular A/IS decision should always be discoverable.
  • Accountability.  A/IS shall be created and operated to provide an unambiguous rationale for all decisions made.
  • Awareness of misuse.  A/IS creators shall guard against all potential misuses and risks of A/IS in operation.
  • Competence.  A/IS creators shall specify and operators shall adhere to the knowledge and skill required for safe and effective operation. 

Classical ethics.  By drawing from over two thousand five hundred years of classical ethics traditions, the authors explored established ethics systems, addressing both scientific and religious approaches, including secular philosophical traditions, to address human morality in the digital age. Through reviewing the philosophical foundations that define autonomy and ontology, this work addresses the alleged potential for autonomous capacity of intelligent technical systems, morality in amoral systems, and asks whether decisions made by amoral systems can have moral consequences. In doing so, we critique assumptions around concepts such as good and evil, right and wrong, virtue and vice, and we attempt to carry these inquiries into artificial systems’ decision-making processes. Assigning foundations for morality, autonomy, and intelligence.  Classical theories of economy in the Western tradition, starting with Plato and Aristotle, embrace three domains: the individual, the family, and the polis. The formation of the individual character (ethos) is intrinsically related to the others, as well as to the tasks of administration of work within the family (oikos). Eventually, this all expands into the framework of the polis, or public space. When we discuss ethical issues of A/IS, it becomes crucial to consider these three traditional dimensions, since Western classical ethics was developed from this foundation and has evolved in modernity into an individual morality disconnected from economics and politics. Classical ethics for a technical world:
  • maintaining human autonomy
  • implications of cultural migration in A/IS
  • applying goal-directed behavior (Virtue Ethics) to A/IS
  • requirement for rule-based ethics in practical programming
Legal and ethical considerations.  Using LLMs raises ethical considerations, including the potential for biased outputs, breaches of privacy, and the risk of misuse. Addressing these concerns requires the adoption of transparent development practices, the responsible handling of data, and the integration of fairness mechanisms. Legality means acting according to the law, while ethics is about right and wrong behaviour; some actions may be legal yet, in some people’s view, unethical. Legal standards are written by government officials and based on written law, while ethical standards grow out of societal norms and shared notions of right and wrong. Something can be legal but not ethical.