This article is published on Legal 500 and answers a number of questions about artificial intelligence and the legal challenges and opportunities it presents. Written by Maria Eiderholm and Johan Nyberg, partners, and Elin Sandin and Kristoffer Vördgren, associates at Glimstedt.
There is no legal definition of artificial intelligence (“AI”) in Sweden.
The Government Offices of Sweden presented a national AI strategy in 2018, titled "National approach to artificial intelligence". The primary aim of the strategy is to ensure that Sweden becomes a leader in harnessing the opportunities of AI to strengthen Swedish welfare and competitiveness. To achieve this ambitious goal, the strategy sets out a number of key elements.
The Swedish AI strategy does not set out any financial provisions or cost estimates for its implementation.
Sweden has not enacted any specific laws on the use or governance of AI. The European Commission's independent High-Level Expert Group on Artificial Intelligence has issued Ethics Guidelines for Trustworthy AI that can be applied in Sweden; however, the guidelines are voluntary and not legally binding.
Several existing Swedish laws could potentially be applied to AI and the use of AI, for example the General Data Protection Regulation (the "GDPR"), the Tort Liability Act (SFS 1972:207, the "TLA"), the Product Liability Act (SFS 1992:18, the "PLA") and the Act on Copyright in Literary and Artistic Works (SFS 1960:729, the "CA"). The Swedish Parliament and Government have long aimed for technology-neutral legislation. Nonetheless, there is room for uncertainty when applying existing laws to AI and its use. So far, there are no Swedish legislative proposals governing the use of AI; however, it is likely that national legislation will follow the EU's AI Act.
As in most other EU countries, difficulties arise in interpreting existing laws. For example, ensuring compliance with the GDPR while leveraging big data for AI is challenging. Existing intellectual property laws do not clearly address the ownership of AI-generated works or inventions, and it remains unclear whether AI can be an inventor or creator, as well as how liability for harm caused by AI systems should be determined. Traditional concepts of liability may not be directly applicable to autonomous systems that can make decisions without human intervention. There are also ethical and fairness aspects to consider: ensuring that AI systems are fair, non-discriminatory and ethical is difficult within the current legal framework.
In some cases, there is a civil law duty of supervision to monitor AI technology. One example can be found in Chapter 8, Section 23 of the Securities Market Act (SFS 2007:528), under which a securities institution that applies algorithmic trading in securities is obliged to have effective systems and risk controls for such trading. If a securities institution fails to comply with the requirements of this section and the offence of market manipulation is committed, the court will decide whether representatives of the securities institution should have intervened and prevented the course of events.
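The statute naturally does not prescribe any particular technology, but the kind of "effective systems and risk controls" it refers to can be illustrated with a minimal sketch. The following is purely illustrative; the limits, names and thresholds are invented for the example:

```python
# Purely illustrative pre-trade risk control for an algorithmic trading
# system; the limits and names are invented for the example.
from dataclasses import dataclass


@dataclass
class Order:
    symbol: str
    quantity: int
    price: float


# Invented limits; a real institution would calibrate these per instrument.
MAX_QUANTITY = 10_000
MAX_ORDER_VALUE = 1_000_000.0


def pre_trade_check(order: Order) -> bool:
    """Reject orders that breach the configured risk limits."""
    if order.quantity <= 0 or order.quantity > MAX_QUANTITY:
        return False
    if order.quantity * order.price > MAX_ORDER_VALUE:
        return False
    return True


assert pre_trade_check(Order("ABC", 100, 50.0))          # within limits
assert not pre_trade_check(Order("ABC", 100_000, 50.0))  # blocked: too large
```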
While not a law or binding legislative initiative, various sectors in Sweden are developing specific guidelines and standards for AI applications. For example, the Swedish Medical Products Agency presented guidelines for AI in medical devices on 13 September 2023. The guidelines highlight, among other things, the importance of a systematic implementation, and include a checklist intended to provide practical support when planning the implementation of AI.
There are no specific rules that apply to defective AI. For liability to be imposed under the TLA, it is required that the natural or legal persons behind the AI have been negligent and that there is adequate causation between the negligent act and the damage. It remains uncertain how high the thresholds for negligence and adequate causation are, and whether liability can be imposed in cases where AI causes personal injury or property damage. Determining adequate causation in incidents involving AI is particularly challenging, given the complexity and autonomous decision-making capabilities of these systems.
Defective AI systems could be subject to claims under the PLA and the CA, as well as the General Product Safety Act (SFS 2004:451) and the Swedish Consumer Sales Act (SFS 2022:260).
AI is not considered a legal entity in Sweden, so the AI itself cannot be responsible for any damages. However, the natural or legal persons behind the AI, e.g. developers or users, can potentially be held responsible for damages caused by AI.
According to the TLA, the person causing the damage must have acted intentionally or negligently for damages to be payable. A common issue with AI is the "black box" problem, meaning that it is uncertain, and hard to predict, how an AI system arrives at its output. Even if all possible precautions are taken, the AI may do something that could not have been foreseen, and harm can still occur. The difficulty in assessing culpability lies in the fact that no one really knows what constitutes negligent programming or use of AI systems. Liability for damages under the TLA also requires adequate causation, i.e. a sufficiently close causal relationship between the tortious act and the resulting damage. The black box problem becomes an issue when determining whether the consequence of a tortious act was foreseeable or not. The current regulation may make it difficult or unreasonably costly to identify the responsible person and fulfil the requirements for a successful claim; at worst, claimants may be discouraged from seeking compensation altogether.
The concept of legal subjects is, however, changeable and has evolved over time. For example, organizations have gone from not being considered legal subjects to being recognized as legal persons with legal capacity.
In Sweden, only natural persons can be sentenced for an offence. For offences committed in the course of business, a corporate fine can be imposed on the company or legal person as such. The corporate fine is not considered a penalty in the legal sense but is instead a special legal effect of the offence.
When AI has caused damage, it can be hard to identify whether it is the developer, the deployer or the user who should, or could, be held responsible for the damage. The starting point is that the TLA requires the liable party to have acted intentionally or negligently for damages to be awarded. If AI is considered part of a product within the meaning of the PLA, and a defect in the AI causes harm, the product manufacturer may be held liable for the damage under the PLA.
If an AI system is sold to consumers, the Swedish Consumer Sales Act (SFS 2022:260) could be applicable. Goods (including "digital content", "digital services" and "products with digital elements") sold to consumers must under the Act meet the expected standards of safety and functionality. Determining who ultimately bears the responsibility is difficult; however, the Consumer Sales Act entitles the consumer to claim damages from the retailer who supplied the goods, and the retailer can, in turn, approach the liable manufacturer and demand the amount paid to the consumer.
It is in general the victim who bears the burden of proof to demonstrate (i) the existence of a culpable act or omission, (ii) the extent of the damage suffered, and (iii) adequate causation between the culpable act and the damage. However, the requirements may vary depending on who has the best opportunity to secure evidence. Particularly regarding causation, the circumstances may be such that it is justified to grant the victim some evidentiary relief.
Considering the technology neutrality sought in Swedish legislation and other regulations, and provided no specific exceptions are made for AI, the use of AI should be insurable. For example, risks related to data breaches, cyber-attacks and other digital threats could be covered by cyber insurance. As AI systems often handle sensitive data, cyber insurance could provide protection in such cases. For companies that develop and sell AI products, product liability insurance could cover damage caused by AI systems, such as malfunctioning autonomous machines or defective AI software.
Given that special rules will apply to liability in connection with AI, it does not seem unreasonable that supplementary insurance will soon be required for certain types of activities. However, it can be difficult for insurers to assess the risks, and the willingness to insure AI as such is likely to be low.
No. Since AI is not considered as a legal entity in Sweden, AI cannot be named as an inventor in a patent application.
If an image has been generated entirely by AI, no one is entitled to copyright protection, according to the Swedish Intellectual Property Office. The right to use images generated by AI may, however, be regulated in the AI system's terms of use.
The fundamental principle of copyright in Sweden is that it can only be attributed to a human being. Copyright is based on the premise that the creator has contributed to the work through free and creative choices. Even though AI-generated images are produced based on human instructions, the outcome is unpredictable. Under current legislation, AI-generated images would in most cases not be eligible for copyright protection. For AI-generated material to be protected, the human creator must use the generative AI system as a tool or aid as part of a larger creative process. If it is possible to process AI-generated material to create a predictable result based on the creator's free and creative choices, such material may be protected by copyright.
Key issues may differ from business to business. However, issues that should generally be considered include ensuring compliance with the GDPR, such as ensuring that personal data is processed on a legal basis, and protecting confidential information in general. Additionally, AI systems may raise ethical issues regarding fairness and bias. Algorithms may inadvertently discriminate against certain groups if not properly designed or trained, as demonstrated by, for example, the Tay chatbot that Microsoft Corporation released on Twitter in 2016. Ensuring fairness and transparency in AI decision-making is crucial.
AI systems must also be accurate and reliable. Ensuring that AI models are properly trained, validated, and tested is critical to avoid errors that could potentially impact business operations or decision-making.
The development, training and use of AI raise several privacy issues. AI systems often rely on vast amounts of data. The collection of personal data, the biased and discriminatory outcomes of many AI systems, enhanced surveillance capabilities, the black box aspect and cross-border data flows are some of these issues. Moreover, the GDPR emphasizes the principles of fairness and transparency, which include the safe handling of data.
On 1 January 2023, changes were introduced to the CA implementing Article 4 of the DSM Directive (and Article 3 regarding scientific research). Under the Directive, Member States have introduced an exception that allows for text and data mining, or "scraping". It is likely that the training of an AI tool constitutes scraping, since scraping refers to an automated technique used to analyze text and data in digital form in order to generate information; however, this has not yet been tested by the Swedish courts. The basis for applying the exception is set out in Chapter 2, Sections 15a-15c of the CA. Anyone who has lawful access to a work, regardless of whether they are natural persons, corporations, organizations, authorities or other public bodies, may make copies of the work for scraping purposes. The exception allows for the making of copies but does not cover any form of making the copies available, and copies produced under the exception may not be kept longer than is necessary for the purpose, i.e. to access certain text and data.
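Purely as a technical illustration of what such scraping typically involves, not a statement of what the exception permits in any given case, a minimal sketch might look as follows. The URL is a placeholder, and requests and beautifulsoup4 are common third-party libraries:

```python
# Purely illustrative sketch of text and data mining ("scraping"): an
# automated technique for analyzing text in digital form to generate
# information. The URL is a placeholder, not a real target.
from collections import Counter

import requests
from bs4 import BeautifulSoup

URL = "https://example.com/articles"  # placeholder

# A temporary copy of the work is made for analysis purposes (the act the
# Sections 15a-15c exception permits, provided there is lawful access).
response = requests.get(URL, timeout=10)
response.raise_for_status()

# The text is analyzed to generate information (here, word frequencies).
text = BeautifulSoup(response.text, "html.parser").get_text(separator=" ")
word_counts = Counter(word.lower() for word in text.split() if word.isalpha())
print(word_counts.most_common(10))

# Note: the exception covers the copying itself, not making the copies
# available, and copies may not be kept longer than the analysis requires.
```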
Data scraping violates the CA if the website is protected by copyright and the scraped work is copied or made available to the public. The overall appearance of a website can be protected by copyright under the CA. Since copyright arises automatically and does not require prior registration, it is difficult to know in advance whether certain content on a website is protected by copyright. The scraping itself often constitutes unauthorized copying if the scraped material is protected by copyright.
The exception in Chapter 2, Sections 15a-15c of the CA will have implications for competition. It could be interpreted restrictively, placing Sweden and the EU at a competitive disadvantage relative to other countries' legislation on scraping; alternatively, it could be interpreted more broadly, in favor of the new works brought to the market through scraping.
Personal data processing occurs when a website is scraped, and the entity initiating the scraping becomes a data controller responsible for the processing of the personal data. Data scraping that involves personal data must comply with the GDPR principles of lawfulness, fairness, transparency, purpose limitation, data minimization, accuracy, storage limitation, integrity and confidentiality. In addition to the requirement of a legal basis for the processing, the data controller must also observe the rights of the data subject, including the rights of access, rectification and erasure, and the right to object to processing.
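As a minimal sketch of what one of these principles, data minimization, can look like in practice (the field names are invented for the illustration):

```python
# Purely illustrative sketch of the GDPR data-minimization principle in a
# scraping pipeline: only the fields needed for the stated purpose are
# stored, and obvious personal data is dropped. All field names are invented.
scraped_records = [
    {"title": "Article A", "body": "...", "author_email": "a@example.com"},
    {"title": "Article B", "body": "...", "author_email": "b@example.com"},
]

ALLOWED_FIELDS = {"title", "body"}  # purpose: text analysis only


def minimize(record: dict) -> dict:
    """Keep only the fields required for the processing purpose."""
    return {key: value for key, value in record.items() if key in ALLOWED_FIELDS}


stored = [minimize(record) for record in scraped_records]
assert all("author_email" not in record for record in stored)
```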
By using a website, the visitor may be obliged to accept the terms and conditions that apply to it. For the terms and conditions to be legally binding, they must be clearly visible to the visitor. It is advisable to keep some form of documentation showing that the visitor has had access to the terms and conditions, for example by clearly requiring acceptance of them. If the terms and conditions include a provision prohibiting data scraping, the scraping is unauthorized even if the scraped data is not protected by copyright. Data scraping in violation of terms and conditions thus constitutes a breach of contract that may entitle the website owner to claim damages.
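One way to create such documentation, sketched here with an invented record structure, is to log each acceptance together with the terms version and a timestamp:

```python
# Purely illustrative sketch of documenting that a visitor has accepted the
# terms and conditions ("clickwrap"); the record structure is invented.
from datetime import datetime, timezone


def record_acceptance(user_id: str, terms_version: str) -> dict:
    """Store who accepted which version of the terms, and when."""
    return {
        "user_id": user_id,
        "terms_version": terms_version,
        "accepted_at": datetime.now(timezone.utc).isoformat(),
    }


# The visitor must actively accept before proceeding; the stored record
# serves as evidence that the terms were presented and accepted.
acceptance_log = [record_acceptance("visitor-123", "2024-01")]
print(acceptance_log)
```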
The Swedish Authority for Privacy Protection (“IMY”) has issued general guidelines on AI and GDPR, available in Swedish here:
https://www.imy.se/verksamhet/dataskydd/innovationsportalen/vagledning-om-gdpr-och-ai/gdpr-och-ai/.
In short, IMY's guidelines on AI and the GDPR aim to create conditions for harmonizing the development and use of AI with strong data protection. IMY wants to promote development and digitalization that occur in a privacy-friendly manner. The guidance is currently relatively brief but will be continuously updated with more information.
In 2021, IMY concluded that the Police, through their use of the Clearview AI application, had processed personal data in violation of the Criminal Data Act (SFS 2018:1177) and issued the Police an administrative fine (reference number DI-2020-2719). IMY found that it had not been possible to clarify what happened to the sensitive personal data that the Police entered into the Clearview AI tool, that the Police Authority did not carry out the impact assessment it was obliged to carry out, and that it did not take appropriate organizational measures to ensure, and be able to demonstrate, that its processing of personal data was lawful. IMY's decision was appealed, and the Administrative Court of Appeal overturned the decision as regards the administrative fine. The Supreme Administrative Court did not grant leave to appeal.
Yes, see question 16 above. There have also been cases involving university students who have been suspended for cheating using AI.
There is no Swedish authority, or any other regulatory body, specifically responsible for supervising the use and development of AI. It has yet to be decided which authority will become the supervisory authority under the EU's upcoming AI Act; a likely candidate is IMY.
Many businesses use AI to some extent, and the use will likely increase rapidly with the inclusion of comprehensive AI tools in the most commonly used office suites (e.g. Copilot within Microsoft 365 and Gemini within Google One). Also, as AI systems are being customized for both industries and individual companies, widespread use will accelerate further.
Yes. Examples include M&A processes, where AI is used to process large volumes of data; legal research, where it sifts through vast amounts of legal documents and case law; and document review, where it identifies key information (or the lack thereof).
Companies offering legal services in the form of search engines are developing AI tools to facilitate this type of investigative work.
Legal developments on AI are unlikely to advance significantly in the next 12 months. However, a major development is expected in connection with the entry into force of the EU's AI Act, which is expected to take place in 2026. As the use of AI continues to increase, we can also expect more case law on, for example, liability issues and copyright infringement related to AI.