A guide to AI and the legal aspects

The article was published on Legal 500 and answers a number of questions about artificial intelligence and its legal challenges and opportunities. Written by Maria Eiderholm and Johan Nyberg, partners, and Elin Sandin and Kristoffer Vördgren, associates at Glimstedt.

  1. What are your country’s legal definitions of “artificial intelligence”?

    There is no legal definition of artificial intelligence (“AI”) in Sweden.

  2. Has your country developed a national strategy for artificial intelligence?

    The Government Offices of Sweden presented a national strategy for AI in 2018, titled “National approach to artificial intelligence”. The primary aim of the strategy is to ensure that Sweden becomes a leader in harnessing the opportunities of AI to strengthen Swedish welfare and competitiveness. To achieve this ambitious goal, the key elements of Sweden’s national AI strategy include:

    • enhancing research by strengthening basic and applied AI research in Sweden to foster advancements in AI technology and applications;
    • enhancing innovation and use, including pilot projects, testbeds and environments for the development of AI applications in the public and private sectors, so that the use of AI can evolve in a safe, secure and responsible manner;
    • developing education and training to ensure that Sweden has the skills needed to continue to develop in an AI-driven economy, including integrating AI education at various levels, from primary education to higher education and lifelong learning programs;
    • developing infrastructure and data, with robust digital infrastructure, and promoting access to large, high-quality datasets;
    • encouraging collaboration and partnership between academia, industry and the public sector to drive AI innovation and ensure practical and beneficial applications of AI;
    • addressing the ethical and societal implications of AI to ensure that AI is developed and used responsibly, including considerations of privacy, security and fairness; and
    • evaluating and updating regulatory frameworks to accommodate AI technologies, ensuring that they are conducive to innovation while at the same time protecting the rights and interests of individuals.

    The Swedish AI strategy does not disclose any financial provisions or estimates for its implementation.

  3. Has your country implemented rules or guidelines (including voluntary standards and ethical principles) on artificial intelligence? If so, please provide a brief overview of said rules or guidelines. If no rules on artificial intelligence are in force in your jurisdiction, please (i) provide a short overview of the existing laws that potentially could be applied to artificial intelligence and the use of artificial intelligence, (ii) briefly outline the main difficulties in interpreting such existing laws to suit the peculiarities of artificial intelligence, and (iii) summarize any draft laws, or legislative initiatives, on artificial intelligence.

    Sweden has not implemented any specific laws regarding the use of AI or governance of AI. The European Commission’s independent High-Level Expert Group on Artificial Intelligence has issued Ethical Guidelines for Trustworthy AI that can be applied in Sweden. However, the guidelines are voluntary and not legally binding.

    There are several existing laws in Sweden that could potentially be applied to AI and the use of AI, for example the General Data Protection Regulation (the “GDPR”), the Tort Liability Act (SFS 1972:207, the “TLA”), the Product Liability Act (SFS 1992:18, the “PLA”) and the Act on Copyright in Literary and Artistic Works (SFS 1960:729, the “CA”). The Swedish Parliament and Government have long aimed for technology-neutral laws. Nonetheless, there is room for uncertainty when applying existing laws to AI and the use of AI. So far, there are no Swedish legislative proposals governing the use of AI. However, it is likely that national legislation will follow the EU’s AI Act.

    As in most other EU countries, difficulties arise in interpreting existing laws. For example, ensuring compliance with the GDPR while leveraging big data for AI is challenging. Existing intellectual property laws do not clearly address the ownership of AI-generated works or inventions. Whether AI can be an inventor or creator remains unclear, as does the question of liability for harm caused by AI systems. Traditional concepts of liability may not be directly applicable to autonomous systems that can make decisions without human intervention. There are also ethical and fairness aspects to consider: ensuring that AI systems are fair, non-discriminatory and ethical is difficult within the current legal framework.

    In some cases, there is a civil law duty of supervision to monitor AI technology. One such example can be found in Chapter 8, section 23 of the Securities Market Act (SFS 2007:528), under which a securities institution that applies algorithmic trading in securities is obliged to have effective systems and risk controls for such trading. If a securities institution fails to comply with the requirements of this section, the offence of market manipulation may be committed, in which case the court will decide whether representatives of the securities institution should have intervened and prevented the course of events.
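
    To make the statutory phrase “effective systems and risk controls” more concrete, the following is a minimal sketch of a pre-trade risk control in Python. It is illustrative only: the class names, limits and example orders are our own assumptions, not requirements drawn from the Securities Market Act.

    ```python
    # Illustrative only: a toy pre-trade risk control of the kind the
    # Securities Market Act contemplates for algorithmic trading.
    # All limits and names below are hypothetical assumptions.
    from dataclasses import dataclass

    @dataclass
    class Order:
        symbol: str
        quantity: int
        price: float
        reference_price: float  # last traded price, used for the price collar

    class PreTradeRiskControl:
        """Rejects orders that breach simple, configurable limits."""

        def __init__(self, max_quantity: int = 10_000, max_collar: float = 0.05):
            self.max_quantity = max_quantity  # largest order size allowed
            self.max_collar = max_collar      # max deviation from reference price
            self.kill_switch = False          # manual stop for the whole strategy

        def check(self, order: Order) -> tuple[bool, str]:
            """Return (accepted, reason) for a proposed order."""
            if self.kill_switch:
                return False, "kill switch engaged"
            if order.quantity > self.max_quantity:
                return False, f"quantity {order.quantity} exceeds limit"
            deviation = abs(order.price - order.reference_price) / order.reference_price
            if deviation > self.max_collar:
                return False, f"price deviates {deviation:.1%} from reference"
            return True, "ok"

    controls = PreTradeRiskControl()
    print(controls.check(Order("XYZ", 500, 61.0, 60.0)))  # (True, 'ok')
    print(controls.check(Order("XYZ", 500, 70.0, 60.0)))  # rejected by price collar
    ```

    The point relevant to the duty of supervision is that such controls keep a human able to intervene, for example via the kill switch, before the algorithm acts on the market.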

    While not a law or binding legislative initiative, various sectors in Sweden are developing specific guidelines and standards for AI applications. For example, the Swedish Medical Products Agency presented guidelines for AI in medical devices on 13 September 2023. The guidelines highlight, among other things, the importance of systematic implementation, and include a checklist intended to provide practical support when planning the implementation of AI.

  4. Which rules apply to defective artificial intelligence systems, i.e. artificial intelligence systems that do not provide the safety that the public at large is entitled to expect?

    There are no specific rules that apply to defective AI. For liability to be imposed under the TLA, the natural or legal persons behind the AI must have been negligent, and there must be adequate causation between the negligent act and the damage. It remains uncertain how high the standards for negligence and adequate causation are, and whether liability can be imposed in cases where AI causes personal injury or property damage. Determining adequate causation in incidents involving AI is particularly challenging, given the complexity and autonomous decision-making capabilities of these systems.

    Defective AI systems could also be subject to claims under the PLA and the CA, as well as the General Product Safety Act (SFS 2004:451) and the Swedish Consumer Sales Act (SFS 2022:260).

  5. Please describe any civil and criminal liability rules that may apply in case of damages caused by artificial intelligence systems.

    AI is not considered a legal entity in Sweden, so the AI itself cannot be held responsible for any damage. However, the natural or legal persons behind the AI, e.g. developers or users, can potentially be held liable for damage caused by AI.

    Under the TLA, the person causing the damage must have acted intentionally or negligently for damages to be payable. A common issue with AI is the “black box” problem: it is often difficult or impossible to understand how an AI system arrives at its output. Even if all possible precautions are taken, the AI may do something that could not have been foreseen, and harm can still occur. The difficulty with culpability lies in the fact that no one really knows what constitutes negligent programming or use of AI systems. Liability for damages under the TLA also requires adequate causation, i.e. a sufficiently close causal relationship between the tortious act and the resulting damage. The black box problem becomes an issue when determining whether the consequence of a tortious act was foreseeable or not. The current regulation may make it difficult, or unreasonably costly, to identify the responsible person and fulfil the requirements for a successful claim. At worst, claimants may be discouraged from seeking compensation altogether.

    The concept of legal subjects is, however, changeable and has evolved over time. For example, organizations have transitioned from not being considered legal subjects to being recognized as legal persons with legal capacity.

    In Sweden, only natural persons can be sentenced for an offence. For offences committed in the course of business, a corporate fine can be imposed on the company or legal person as such. The corporate fine is not considered a penalty in the legal sense but is instead a special legal effect of the offence.

  6. Who is responsible for any harm caused by an AI system? And how is the liability allocated between the developer, the user and the victim?

    When AI has caused damage, it can be hard to identify whether it is the developer, the deployer or the user who should, or could, be held responsible for the damage. The starting point is that the TLA requires the liable party to have acted intentionally or negligently for damages to be awarded. If the AI is considered part of a product within the meaning of the PLA, and a defect in the AI causes harm, the product manufacturer may be held liable for the damage under the PLA.

    If an AI system is sold to consumers, the Swedish Consumer Sales Act (SFS 2022:260) could be applicable. Goods (including “digital content”, “digital services” and “products with digital elements”) sold to consumers must, according to the Act, meet the expected standards of safety and functionality. Determining who ultimately bears the responsibility is difficult; however, the Consumer Sales Act entitles the consumer to claim damages from the retailer who supplied the goods, and the retailer can, in turn, approach the liable manufacturer and demand the amount paid to the consumer.

  7. What burden of proof will have to be satisfied for the victim of the damage to obtain compensation?

    It is in general the victim who bears the burden of proof to demonstrate (i) the existence of a culpable act or omission, (ii) the extent of the damage suffered and (iii) adequate causation between the culpable act and the damage. However, the requirements may vary depending on who has the best opportunity to secure evidence. Particularly regarding causation, the circumstances may be such that it is justified to grant the victim some evidentiary relief.

  8. Is the use of artificial intelligence insured and/or insurable in your jurisdiction?

    Considering the technology neutrality sought in Swedish legislation and other regulations, and provided that no specific exceptions are made for AI, it is possible that the use of AI is insurable. For example, risks related to data breaches, cyber-attacks and other digital threats could be covered by cyber insurance. As AI systems often handle sensitive data, cyber insurance could provide protection in such cases. For companies that develop and sell AI products, product liability insurance could cover damage caused by AI systems, such as malfunctioning autonomous machines or defective AI software.

    Given that special rules will apply to liability in connection with AI, it does not seem unreasonable that supplementary insurance will soon be required for certain types of activities. However, it can be difficult for insurers to assess the risks, and the willingness to insure AI as such is therefore likely to be low.

  9. Can artificial intelligence be named an inventor in a patent application filed in your jurisdiction?

    No. Since AI is not considered a legal entity in Sweden, AI cannot be named as an inventor in a patent application.

  10. Do images generated by and/or with artificial intelligence benefit from copyright protection in your jurisdiction? If so, who is the authorship attributed to?

    If an image has been generated entirely by AI, no one benefits from copyright protection, according to the Swedish Intellectual Property Office. The right to use images generated by AI may, however, be regulated in the AI system’s terms of use.

    The fundamental principle of copyright in Sweden is that it can only be attributed to a human being. Copyright is based on the premise that the creator has contributed to the work through free and creative choices. Even though AI-generated images are produced based on human instructions, the outcome is unpredictable. Under current legislation, AI-generated images would therefore in most cases not be eligible for copyright protection. For AI-generated material to be protected, the human creator must use the generative AI system as a tool or aid as part of a larger creative process. If it is possible to process AI-generated material to create a predictable result based on free and creative choices by the creator, such material may be protected by copyright.

  11. What are the main issues to consider when using artificial intelligence systems in the workplace?

    Key issues may differ from business to business. However, issues that should generally be considered include ensuring compliance with the GDPR, such as ensuring that personal data is processed on a legal basis, and protecting confidential information in general. Additionally, AI systems may raise ethical issues regarding fairness and bias. Algorithms may inadvertently discriminate against certain groups if not properly designed or trained, as demonstrated, for example, by the “Tay” chatbot released by Microsoft Corporation as a Twitter bot in 2016. Ensuring fairness and transparency in AI decision-making is crucial.

    AI systems must also be accurate and reliable. Ensuring that AI models are properly trained, validated, and tested is critical to avoid errors that could potentially impact business operations or decision-making.

  12. What privacy issues arise from the use of artificial intelligence?

    The development, training and use of AI raise several privacy issues. AI systems often rely on vast amounts of data. The collection of personal data, the biased and discriminatory outcomes of many AI systems, enhanced surveillance capabilities, the black box aspect and cross-border data flows are some of these issues. Moreover, the GDPR emphasizes the principles of fairness and transparency, which include the safe handling of data.

  13. How is data scraping regulated in your jurisdiction from an IP, privacy and competition point of view?

    On 1 January 2023, changes were introduced to the CA as part of implementing Article 4 of the DSM Directive (and Article 3 regarding scientific research). Under the Directive, Member States have introduced an exception that allows for text and data mining – “scraping”. It is likely that the training of an AI tool constitutes scraping, as it refers to an automated technique used to analyze text and data in digital form in order to generate information. However, this has not yet been tried by the Swedish courts. The basis for applying the exception is set out in Chapter 2, Sections 15a–15c of the CA. Anyone, regardless of whether they are natural persons, corporations, organizations, authorities or other public bodies, who has legal access to a work may make copies of the work for scraping purposes. The exception allows for the making of copies but does not cover any form of making available of the copies made. The copies produced under the exception may not be kept longer than is necessary for the purpose – i.e. to access certain text and data.
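
    Purely as an illustration of the structure of the exception – make a copy, generate information, discard the copy – the following Python sketch uses only the standard library. The URL and the toy word-frequency analysis are hypothetical placeholders, not legal advice on what mining is permitted.

    ```python
    # A hypothetical text-and-data-mining flow shaped like the CA exception:
    # (1) copy a work you have lawful access to, (2) mine it to generate
    # information, (3) delete the copy once the purpose is fulfilled,
    # and never make the copy itself available to others.
    import os
    import re
    import tempfile
    from collections import Counter
    from urllib.request import urlopen

    URL = "https://example.com/"  # placeholder: content you may lawfully access

    # 1. Make a temporary copy of the work (the act the exception permits).
    with urlopen(URL) as response:
        html = response.read().decode("utf-8", errors="replace")
    with tempfile.NamedTemporaryFile("w", suffix=".html", delete=False) as tmp:
        tmp.write(html)
        copy_path = tmp.name

    # 2. Mine the copy: a toy word-frequency count stands in for whatever
    #    "information" the mining is meant to generate.
    text = re.sub(r"<[^>]+>", " ", html)  # crude tag stripping
    words = re.findall(r"[A-Za-zÅÄÖåäö]+", text.lower())
    print(Counter(words).most_common(5))

    # 3. Do not keep the copy longer than necessary for the purpose.
    os.remove(copy_path)
    ```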

    Data scraping is a violation of the CA if the website is protected by copyright and the scraped work is copied or made available to the public. The overall appearance of a website can be protected by copyright according to the CA. Since copyright arises automatically and does not require prior registration, it is difficult to know in advance if a certain content on a website is protected by copyright or not. The scraping itself often constitutes an unauthorized copying if the scraped material is protected by copyright.

    The exception in Chapter 2, Sections 15a–15c of the CA will have implications for competition, and could potentially be interpreted restrictively, placing Sweden and the EU at a competitive disadvantage in relation to other countries’ legislation on scraping. Alternatively, the exception could be interpreted more broadly, in favor of the new works presented to the market through scraping.

    Personal data processing occurs when a website containing personal data is scraped, and the entity initiating the scraping becomes a data controller responsible for the processing of the personal data. Data scraping that involves personal data must comply with GDPR principles such as lawfulness, fairness, transparency, purpose limitation, data minimization, accuracy, storage limitation, integrity and confidentiality. In addition to the requirement of a legal basis for the processing of personal data, the data controller must also observe the rights of the data subject, including access to the data subject’s data, rectification, erasure and the right to object to processing.

  14. To what extent is the prohibition of data scraping in the terms of use of a website enforceable?

    By using a website, the visitor may be obliged to accept the terms and conditions that apply to the website. For the terms and conditions to be legally binding, they must be clearly visible to the visitor. It is advisable to retain some form of documentation showing that the visitor has had access to the terms and conditions, for example by clearly requiring acceptance of them. If the terms and conditions include a provision prohibiting data scraping, the data scraping is unauthorized even if the scraped data is not protected by copyright. Data scraping in violation of terms and conditions thus constitutes a breach of contract that may entitle the website owner to claim damages.
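
    On the technical side, a common good-faith step before scraping is to consult the site’s robots.txt file. This is not the same as accepting a website’s terms and conditions, but it is the customary machine-readable signal of what the operator permits. A minimal check using Python’s standard library, with a hypothetical site and crawler name:

    ```python
    # Hypothetical pre-scrape check: robots.txt does not replace a website's
    # terms and conditions, but consulting it helps document good faith.
    from urllib.robotparser import RobotFileParser

    SITE = "https://example.com"     # placeholder target site
    USER_AGENT = "research-bot/1.0"  # hypothetical crawler identifier

    parser = RobotFileParser()
    parser.set_url(f"{SITE}/robots.txt")
    parser.read()  # fetches and parses the file

    target = f"{SITE}/articles/"
    if parser.can_fetch(USER_AGENT, target):
        print(f"{target}: not disallowed by robots.txt")
    else:
        print(f"{target}: disallowed; obtain permission or refrain")
    ```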

  15. Have the privacy authorities of your jurisdiction issued guidelines on artificial intelligence?

    The Swedish Authority for Privacy Protection (“IMY”) has issued general guidelines on AI and GDPR, available in Swedish here:

    https://www.imy.se/verksamhet/dataskydd/innovationsportalen/vagledning-om-gdpr-och-ai/gdpr-och-ai/.

    In short, IMY’s guidelines on AI and the GDPR aim to create conditions for harmonizing the development and use of AI with strong data protection. IMY wants to promote development and digitalization that occur in a privacy-friendly manner. The guidance is currently relatively brief but will be continuously updated with more information.

  16. Have the privacy authorities of your jurisdiction discussed cases involving artificial intelligence?

    In 2021, IMY concluded that the Swedish Police Authority, through its use of the Clearview AI application, had processed personal data in violation of the Criminal Data Act (SFS 2018:1177), and issued the Police an administrative fine (reference number DI-2020-2719). IMY found that it had not been possible to clarify what had happened to the sensitive personal data that the Police entered into the Clearview AI tool, that the Police Authority had not carried out an impact assessment it was obliged to carry out, and that it had not taken appropriate organizational measures to ensure, and be able to demonstrate, that the authority’s processing of personal data was lawful. IMY’s decision was appealed, and the Administrative Court of Appeal overturned the decision as far as the administrative fine was concerned. The Supreme Administrative Court did not grant leave to appeal.

  17. Have your national courts already managed cases involving artificial intelligence?

    Yes, see question 16 above. There have also been cases involving university students who have been suspended for cheating through the use of AI.

  18. Does your country have a regulator or authority responsible for supervising the use and development of artificial intelligence?

    There is no Swedish authority or other regulatory body specifically responsible for supervising the use and development of AI. It is yet to be decided which authority will become the supervisory authority under the EU’s upcoming AI Act. A likely candidate is IMY.

  19. How would you define the use of artificial intelligence by businesses in your jurisdiction? Is it widespread or limited?

    Many businesses use AI to some extent, and the use will likely increase rapidly with the inclusion of comprehensive AI tools in the most commonly used office suites (e.g. Copilot within Microsoft 365 and Gemini within Google One). Also, as AI systems are being customized for both industries and individual companies, widespread use will accelerate further.

  20. Is artificial intelligence being used in the legal sector, by lawyers and/or in-house counsels? If so, how?

    Yes. Examples include use in M&A processes to process large volumes of data, in legal research to sift through vast amounts of legal documents and case law, and in document review to identify key information (or the lack thereof).

    Companies offering legal services in the form of search engines are developing AI tools to facilitate this type of investigative work.

  21. What are the 5 key challenges and the 5 key opportunities raised by artificial intelligence for lawyers in your jurisdiction?

    Challenges:

    1. Regulatory compliance. Ensuring that AI applications comply with existing legal and ethical standards is a significant challenge. Lawyers must navigate the complexities of integrating AI into their practices while adhering to regulatory requirements and ethical considerations.
    2. Liability and accountability. There will be legal questions concerning liability in AI-driven decisions.
    3. Privacy/Security. It is imperative to safeguard client and company information in connection with the use of AI.
    4. Changing landscape for lawyers. Over time, certain questions and matters may not be referred to lawyers, but instead in many cases be handled with the use of AI services.
    5. Understanding the technology. The lack of transparency in many AI systems can, in combination with the increasing complexity of the technology, make it difficult for lawyers to understand or challenge AI-driven decisions or outcomes.


    Opportunities:

    1. Efficiency and innovation. AI offers unparalleled opportunities for efficiency and innovation in the provision of legal services.
    2. Risk Management. AI can be used to evaluate and manage risks associated with different legal strategies or client portfolios.
    3. Regulatory compliance. Although a challenge, AI can also help to improve regulatory compliance in many fields by ensuring compliance with complex regulatory frameworks, reducing the risk of non-compliance and associated penalties.
    4. New markets. The adoption of AI can create new opportunities for lawyers to offer innovative services, access new markets, and meet the evolving needs of clients in the digital age.
    5. Dependence on technology. Over-reliance on AI technologies can lead to a dependency that may be detrimental in situations where technology fails or is unavailable. Lawyers must balance the use of AI with the need to maintain core legal skills and judgment.


  22. Where do you see the most significant legal developments in artificial intelligence in your jurisdiction in the next 12 months?

    The legal development on AI is unlikely to advance significantly in the next 12 months. However, a major development will occur in connection with the entry into force of the EU’s AI Act, expected in 2026. As the use of AI continues to increase, we can also expect more case law regarding e.g. liability issues and copyright infringement related to AI.