OpenAI has introduced a feature called Copyright Shield, which functions as an indemnification clause for customers of ChatGPT Enterprise and the ChatGPT API: OpenAI will defend these users against copyright-infringement claims arising from output the tools generate and will pay any resulting damages. Indemnification clauses of this kind are nothing new in enterprise technology; they have long been standard in software license agreements. OpenAI's move aims to attract developers and large companies in a fiercely competitive market for generative AI tools, where indemnification is often a deciding factor in vendor selection.

In the background, OpenAI faces several lawsuits, including suits from authors who accuse the company of misusing their books to train ChatGPT and who claim its output constitutes 'derivative works'. OpenAI counters that text generated by ChatGPT does not violate the authors' rights and that using their books for training falls under the 'fair use' doctrine; it has asked a San Francisco federal court to dismiss parts of these suits. Notably, a group of authors including Jonathan Franzen and George R.R. Martin has filed class-action lawsuits alleging that OpenAI's use of their books to train ChatGPT amounts to "systematic theft on a mass scale", and is seeking damages and a permanent injunction.

More broadly, the legality of using copyrighted works to train AI models is not settled, and the current lawsuits may test the boundaries of 'fair use'. Rebecca Tushnet, a copyright law expert at Harvard Law School, notes that the law has often treated the use of works for training or large-scale data mining as fair use. The ongoing debate now centers on who is responsible for the outputs of AI tools: the user who prompts the AI, or the creators of the tool.
The legal framework, especially the 'fair use' doctrine, is deliberately flexible so that it can accommodate new technologies. Still, as Tushnet notes, copyright law may not be the best tool for addressing every implication of generative AI: adaptable as the doctrine is, it may not fully answer the challenges AI poses in creating and disseminating content.