THE MANILA PRINCIPLES – A COMPARATIVE ANALYSIS
The Manila Principles is a set of guidelines outlining safeguards that must apply in all legal frameworks on intermediary liability. The document was launched at RightsCon Southeast Asia – a multi-stakeholder conference held in Manila, Philippines in 2015 – by a coalition of Internet rights activists and civil society organizations. The main purpose of the Manila Principles is to encourage the development of interoperable and harmonized liability regimes that can promote innovation while respecting users’ rights in line with the Universal Declaration of Human Rights, the International Covenant on Civil and Political Rights and the United Nations Guiding Principles on Business and Human Rights.[1]
The six broad principles are as follows.[2]
(1) Intermediaries should be shielded by law from liability for third-party content.
(2) Content must not be required to be restricted without an order by a judicial authority.
(3) Requests for restrictions of content must be clear, be unambiguous, and follow due process.
(4) Laws and content restriction orders and practices must comply with the tests of necessity and proportionality.
(5) Laws and content restriction policies and practices must respect due process.
(6) Transparency and accountability must be built into laws and content restriction policies and practices.
By virtue of offering safe-harbour protection to Internet intermediaries under Section 79 of the IT Act, India can be said to comply with the first of these principles (intermediaries should be shielded by law from liability for third-party content). The immunity enjoyed by intermediaries is of course conditional, and there are ambiguities in the law that make compliance far from easy, yet immunity under law is nevertheless provided for.
The second of the Manila principles (content must not be required to be restricted without an order by a judicial authority) is more or less respected under Indian intermediary liability laws, as intermediaries are required to take down content only on receiving a court order or a Government directive asking them to do so. Though government directives are not orders by judicial authorities, it must be noted that Indian laws do not ask intermediaries to exercise their own judgment in taking down content. Intermediaries are no longer obligated to remove content on receiving notices from any affected third party. Rather, an independent determination as to whether or not particular content should be removed is made by the judiciary or executive and then conveyed to intermediaries (for takedowns in IP disputes, there still exists a notice-and-takedown regime operating at the behest of the rightsholder).[3]
The third Manila principle (requests for restrictions of content must be clear, be unambiguous, and follow due process) does not see much compliance in the Indian legal framework. It can be argued that takedown orders issued by the judiciary or executive, or those received under the Copyright Act, are bound to be clear, unambiguous and in compliance with due process, in that such orders will always clearly direct takedowns, specify the particular content to be removed and be authorized by relevant law. However, intermediaries also receive what are effectively takedown orders through other channels, such as Section 91 of the CrPC, which does not have the requisite checks and balances in place against abuse.
The fourth principle (laws and content restriction orders and practices must comply with the tests of necessity and proportionality) is not fully observed in India, most notably in cases of alleged copyright infringement. Indian courts routinely issue website blocking orders to intermediaries like ISPs on the basis of petitions alleging copyright infringement against a large number of websites at once. It is not uncommon for such orders to target thousands of websites and URLs at the same time, a large number of which may not even contain infringing material. As the lists of websites and URLs to be blocked are so extensive, it can be said with certainty that no detailed examination of the alleged infringements is undertaken before takedown orders are issued, and even where copyright infringement does exist, whole websites are frequently directed to be blocked when blocking specific URLs within those websites would suffice.
The fifth Manila principle (laws and content restriction policies and practices must respect due process) is not fully observed in India. With respect to legal provisions specifically related to intermediary liability, i.e., Section 79 of the IT Act and the Intermediaries Guidelines Rules, deviations from due process include the absence of opportunities for content creators to defend their content and the absence of means to restore content that has already been removed. As for provisions that are unofficially used to direct content takedowns, like Section 91 of the CrPC, the question of due process does not even arise, because such provisions are not meant to be used for this purpose.
The final Manila principle (transparency and accountability must be built into laws and content restriction policies and practices) also does not find compliance in India, especially with regard to content taken down under Section 69A of the IT Act. Rules framed under Section 69A stipulate that strict confidentiality must be maintained around complaints made, orders issued and action taken under the provision, and reasons for takedowns are never disclosed to the public. Websites and URLs blocked under Section 69A simply state that a blocking order has been received from the Government. Moreover, requests made under the Right to Information Act for details on blocked content are consistently turned down by citing the confidentiality clause built into the regulation.
In review, India’s compliance with the Manila principles, though improved over the past few years, is still wanting in many respects.
INTERMEDIARY LIABILITY IN OTHER JURISDICTIONS
Different jurisdictions may establish different enactments and procedures to restrict content that is considered unlawful, and different regimes follow different legal frameworks to grant conditional immunity or safe harbour to intermediaries. The notice-and-notice model obliges intermediaries to forward any complaint of alleged copyright infringement they receive from the copyright owner to the user or subscriber in question. This procedure is followed in Canada and is enshrined in the Copyright Modernization Act, which came into effect in January 2015. Under this model, receiving a notice does not necessarily mean that the subscriber has infringed copyright, nor does it require the subscriber to contact the copyright owner or the intermediary.[4] The objective of the notice-and-notice regime is to discourage online infringement on the part of Internet subscribers and to raise awareness in instances where Internet subscribers’ accounts are being used for such purposes by others.[5] It enables the rights holder and the subscriber to resolve the dispute between themselves without the involvement of the intermediary.
The second model is the notice-and-takedown model, followed by countries like South Korea[6] and the United States of America.[7] Under this system, an intermediary responds to government notifications, court orders or notices issued by private parties by promptly removing or disabling access to the allegedly illegal content. This self-regulatory framework, under which ISPs determine whether or not a website contains illegal or harmful content, raises questions of accountability, transparency and the overall appropriateness of delegating content regulation to private actors, who have to act as judges.[8] This could be seen as a “privatization of censorship.”[9]
The third model is called the graduated response model or the “three strikes system.” Under this system, rights holders may ask intermediaries to send warnings to subscribers identified as engaging in illegal file sharing or copyright infringement. The intermediary may be required to send more than one notice, with repeat infringers risking bandwidth reduction and sometimes even complete suspension of their accounts. France, New Zealand, Taiwan, South Korea and the United Kingdom have enacted legislation requiring intermediaries to exercise a certain degree of policing of their networks. Some countries, like the United States and Ireland, permit private arrangements between rights holders and intermediaries to accomplish the same end.
United States of America
The law relating to intermediary liability in the United States of America is mostly governed by Section 512(c) of the Digital Millennium Copyright Act (“DMCA”) and Section 230 of the Communications Decency Act (“CDA”). Section 512 of the DMCA was enacted by the US Congress with a view to limiting the liability of intermediaries and checking online copyright infringement, including limitations on liability for compliant service providers to help foster the growth of Internet-based services.[10] The intermediary must comply with the notice-and-takedown procedure under Section 512 to qualify for protection.
The CDA was originally enacted to restrict indecent material online, but its restrictive sections were later struck down for being unconstitutional. Section 230 is considered one of the most valuable tools for protecting intermediaries from liability for third-party generated content. It reads: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” The section covers claims of defamation, invasion of privacy, tortious interference, civil liability for criminal law violations, and general negligence claims based on third-party content.[11]
The legislation also contains a policy statement from the US government at Section 230(b), which supports safe harbour for actions taken to “encourage the development of technologies that maximize user control over what information is received by individuals” and to “remove disincentives for the development and utilization of blocking and filtering technologies that empower parents to restrict their children’s access to objectionable or inappropriate online material.”
In 2018, the Stop Enabling Sex Traffickers Act (SESTA) and the Allow States and Victims to Fight Sex Trafficking Online Act (FOSTA), jointly known as FOSTA-SESTA, were passed. The package expands criminal liability for classifieds websites like Backpage.com, which was alleged to host ads from sex traffickers in its adult services section. Backpage.com had claimed that it was an intermediary and not responsible for content uploaded by users. Although the new law is well-intentioned, it dilutes the protection provided by Section 230 of the Communications Decency Act, which has been considered the most valuable piece of legislation protecting freedom of speech and expression online, by implicating intermediaries for user-generated content.
Case Studies
Dart v. Craigslist, Inc.[12]
Craigslist is the largest online classified advertisement service in the United States. Postings on the site include advertisements for jobs, housing, sale of various items and other services. The listings also included a section for “erotic services”, even though Craigslist’s terms and conditions categorically forbid the advertisement of illegal activities.
The “erotic services” section caught the attention of state and local law enforcement, as some users were using it to advertise illegal services. In March 2008, the Attorney General of Connecticut, on behalf of the Attorneys General of forty other states, sent a notice to Craigslist to remove the ads that publicized prostitution and other illicit activities prohibited under state law. In November 2008, Craigslist reached an agreement with the Attorneys General to implement steps to hinder illegal listings in the erotic services section, though not to remove the section completely. Subsequently, Craigslist announced a ninety percent drop in its erotic services listings.
Four months later, Craigslist was sued by Thomas Dart, the Sheriff of Cook County, Illinois, who claimed that the site created a “public nuisance” under Illinois state law because its “conduct in creating erotic services, developing twenty-one categories, and providing a word search function causes a significant interference with the public’s health, safety, peace, and welfare.”[13] Craigslist ultimately won the case on the grounds of Section 230(c)(1) of the CDA. The court held that Craigslist was an Internet service provider (intermediary) and hence immune from wrongs committed by third parties. Craigslist nevertheless removed the phrase “erotic services” and replaced it with “adult services.” The case is considered a victory for online speech.
Later, due to mounting pressure, Craigslist completely removed the “adult services” section from its website and the link to the section was replaced with a black label reading “censored.”
Viacom International, Inc v. YouTube, Inc.[14]
In March 2007, Viacom filed a lawsuit against Google and YouTube alleging copyright infringements by its users.
It sought USD 1 billion in damages for the copyright infringement of more than a hundred thousand videos owned by Viacom. Thereafter, several class action lawsuits were also filed against YouTube by sports leagues, music publishers and other copyright owners.
These lawsuits tested the strength of the DMCA safe harbour as applied to online service providers that host text, audio and video on behalf of users.[15] In June 2010, the United States District Court for the Southern District of New York held that YouTube, being an intermediary, was protected by the DMCA safe harbour. The judge said that compelling online platforms to constantly police videos uploaded by third parties “would contravene the structure and operation of the DMCA.”[16] Viacom appealed the decision to the Second Circuit Court of Appeals in August 2011, which reversed the earlier decision. In April 2013, the district court again ruled in favour of YouTube, saying that YouTube could not possibly have known about the copyright infringements and was protected under the DMCA. Viacom again began the process of a second appeal, but before the date of the hearing, the parties negotiated a settlement in March 2014.[17]
Matthew Herrick v. Grindr LLC[18]
Plaintiff Herrick alleged that his ex-boyfriend set up several fake profiles on Grindr (a dating app for the LGBTQ community) that claimed to be him, resulting in identity theft and manipulation. Over a thousand users responded to the impersonating profiles. Herrick’s ex-boyfriend, pretending to be Herrick, would then direct the men to Herrick’s workplace and home.
The impersonating profiles were reported to Grindr (the app’s operator), but Herrick claimed that Grindr did not respond, other than to send an automated message. Herrick sued Grindr, accusing the company of negligence, intentional infliction of emotional distress, false advertising, and deceptive business practices for allowing him to be impersonated and turned into an unwitting beacon for stalkers and harassers,[19] and claiming that Grindr was liable to him because of the defective design of the app and its failure to police such conduct on the app.
The Court rejected Herrick’s claim that Grindr is not an interactive computer service as defined in the CDA. With respect to the products liability, negligent design and failure-to-warn claims, the court found that they were all predicated upon content provided by another user of the app. Any assistance, including algorithmic filtering, aggregation and display functions, that Grindr provided to Herrick’s ex-boyfriend was “neutral assistance” available to good and bad actors on the app alike.
The court also highlighted that choosing to remove content or to let it stay on an app is an editorial choice, and finding Grindr liable based on its choice to let the impersonating profiles remain would be finding Grindr liable as if it were the publisher of that content.
An appeal against the court’s ruling has been filed before the Second Circuit Court of Appeals in this matter.
European Union
E-commerce Directive
Articles 12 to 15 of Directive 2000/31/EC of 8 June 2000 on electronic commerce mandate the member states of the EU to establish defences, under both civil and criminal law, for the benefit of certain types of online intermediaries.[20] Directive 2001/29/EC on Copyright in the Information Society (as to copyright) and Directive 2004/48/EC on the Enforcement of Intellectual Property Rights (other than copyright) mandate EU member states to give rights holders the right to seek an injunction against online intermediaries whose services are used by a third party to infringe an intellectual property right.
Articles 12 to 15 of Directive 2000/31/EC are the primary provisions governing intermediary liability. The Directive incorporates a notice-and-takedown system for intermediaries to abide by. Articles 12 to 14 categorise intermediaries into “mere conduit”, “caching” and “hosting” services. Article 15 states that intermediaries have no general obligation to actively monitor the information which they transmit or store for illegal activity.
The General Data Protection Regulation (“GDPR”), which came into effect on 25 May 2018,[21] is aligned with Directive 2000/31/EC. Article 2(4) of the GDPR reads:
“This Regulation shall be without prejudice to the application of Directive 2000/31/EC, in particular of the liability rules of intermediary service providers in Articles 12 to 15 of that Directive.”
Recital 21 of the GDPR[22] reads as follows:
“This Regulation is without prejudice to the application of Directive 2000/31/EC of the European Parliament and of the Council, in particular of the liability rules of intermediary service providers in Articles 12 to 15 of that Directive. That Directive seeks to contribute to the proper functioning of the internal market by ensuring the free movement of information society services between Member States.”
Directive on Copyright in the Digital Single Market COM/2016/0593 final - 2016/0280 (COD) (EU Copyright Directive)
In September 2016, after a number of years of public consultation, the EU Commission proposed a new directive to update its existing copyright framework.[23] Since then, the proposal has undergone several rounds of negotiation and amendment, and a final text[24] was agreed upon by the EU Parliament and Council on 13 February 2019.[25]
Two provisions of the proposed EU copyright directive in particular raise red flags: Article 11 and Article 13.
Article 11 grants publishers the right to request payment from online platforms that share their stories. This provision is being called the “link tax”, as it gives publishers the right to ask for paid licenses when online platforms and aggregators such as Google News share their stories.[26]
The Article excludes ‘uses of individual words or very short extracts of a press publication’ from its purview.[27]
The more problematic provision of the proposed directive is, however, Article 13, which makes “online content sharing service providers”[28] liable for copyright infringement in respect of content uploaded by their users. The proposed copyright directive precludes the safe-harbour protection afforded to such online content sharing service providers under the EU e-commerce directive for user-generated content that is protected by copyright. For protection against liability, these services must enter into licence agreements or make best efforts to obtain such authorisation for hosting copyright-protected content; make best efforts to ensure the unavailability of protected content (which will likely result in the use of upload filters);[29] and implement a notice-and-takedown mechanism, including prevention of future uploads.
This effectively means that intermediaries will have to proactively monitor and pre-screen all the content that users upload. This degree of monitoring for illegal content is not possible manually and can only be handled by automated filters, which are far from perfect and can be easily manipulated. For example, YouTube’s “Content ID” system is notorious for over-removing innocent material.[30] Article 13 will turn intermediaries into the content police and would hamper the free flow of information on the Internet.[31] There is also the problem of dedicated infringers finding a way around content filters, and the possibility of automated tools making errors, especially in cases of fair use like criticism, reviews and parodies.[32]
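To illustrate why such filters err on fair use, consider a minimal sketch in Python of fingerprint-based upload filtering. This is purely hypothetical and far simpler than any production system such as Content ID (which uses perceptual fingerprints rather than plain hashes); the point is structural: the filter only checks whether uploaded media matches a protected work, and has no notion of the context, such as criticism or parody, in which the material is reused.

```python
import hashlib

# Hypothetical database of fingerprints of copyright-protected works.
# Real systems use robust perceptual fingerprints; a plain hash stands in here.
PROTECTED_FINGERPRINTS = {
    hashlib.sha256(b"frames of a protected film clip").hexdigest(),
}

def fingerprint(media: bytes) -> str:
    """Reduce uploaded media to a comparable fingerprint."""
    return hashlib.sha256(media).hexdigest()

def upload_allowed(media: bytes) -> bool:
    """Block any upload whose fingerprint matches a protected work."""
    return fingerprint(media) not in PROTECTED_FINGERPRINTS

# A critic's review that reuses the same excerpt produces the same
# fingerprint, so it is blocked even though the reuse may be lawful
# fair use: the match is purely mechanical, with no legal judgment.
review_excerpt = b"frames of a protected film clip"
print(upload_allowed(review_excerpt))  # False: blocked despite possible fair use
```

Production filters match far more robustly than this sketch, but the structural problem is the same: matching detects copying, not infringement.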
The proposed directive is scheduled for voting before the European Parliament either in late March or mid-April of 2019.
Terrorist Content Regulation[33]
On 12 September 2018, the European Commission released the draft “Regulation on Preventing the Dissemination of Terrorist Content Online”, which requires tech companies and online intermediaries to remove “terrorist content” within one hour of it being flagged to the platforms by law enforcement authorities or Europol.[34] The proposal needs to be backed by member states and the EU Parliament before it can be passed as law.
Websites that fail to take immediate action will be liable to pay fines. Systematic failure to comply will invite penalties of up to four percent of the company’s global turnover in the last financial year (similar to fines under the GDPR).[35] Proactive measures, including automated detection, are required to effectively and swiftly detect, identify and expeditiously remove or disable terrorist content and to stop it from reappearing once removed. A human review step before content is removed, to avoid unintended or erroneous removal of content which is not illegal, has also been recommended in the proposed regulation.[36]
These EU legislative proposals, namely the proposed copyright directive and the terrorist content regulation, point towards a shifting trend in European countries, with governments wanting to hold online intermediaries more accountable and responsible for illegal user-generated content on their platforms.
In both cases, copyright and terrorist content, the EU has suggested (through these proposals) the use of automated tools for content filtering, which may lead to over-compliance (as platforms ring-fence themselves against liability), private censorship and a resultant dilution of free speech rights on the Internet.
Case Studies
Delfi v. Estonia (2015)[37]
The judgment in this case brings to light fascinating issues of both human rights and the law governing intermediary liability in the EU, making it one of the most important judgments of recent times with respect to intermediary liability.
Delfi is one of the biggest online news portals in Estonia. Readers may comment on its news stories, and Delfi operates a system to regulate unlawful content within a notice-and-takedown framework. In January 2006, Delfi published a news article about how a ferry company, SLK, had wrecked the pathways connecting Estonia’s mainland to its islands. The article attracted one hundred and eighty-five user-generated comments, of which about twenty were viewed as offensive and threatening towards the company’s sole shareholder, L. L demanded that the comments be removed and claimed damages. Delfi removed the comments but refused to pay damages. The matter passed through various lower courts before reaching the Estonian Supreme Court in June 2009, which held that Delfi had a legal obligation to prevent unlawful content from being posted on its website, since it was the publisher of the comments along with the original authors, and was therefore not protected by EU Directive 2000/31/EC. Further, the court stated that defamatory speech is not covered by the right to freedom of expression.
Aggrieved by the Supreme Court’s judgment, Delfi moved the European Court of Human Rights. The question before the ECHR was whether the Estonian court’s decision to hold Delfi liable was an unreasonable and disproportionate restraint on Delfi’s freedom of expression under Article 10[38] of the Convention for the Protection of Human Rights and Fundamental Freedoms.
The ECHR was called upon to strike a balance between freedom of expression under Article 10 of the Convention and the preservation of the personality rights of third persons under Article 8 of the same Convention.[39] In 2013, in a unanimous judgment, Delfi lost the case at the ECHR, and the matter was thereafter brought before the Grand Chamber. On 16 June 2015, the Grand Chamber upheld the decision of the Fifth Section of the ECHR, asserting that the liability imposed on Delfi was justified and proportionate because:
(1) The comments in question were outrageous and defamatory, and had been posted in response to an article published by Delfi on its professionally managed online news portal, which is of a commercial nature; and
(2) Delfi failed to take sufficient steps to remove the offensive remarks without delay, and the fine of EUR 320 imposed on it was not disproportionate.[40]
The decision was criticized by digital and civil rights activists for being contrary to Directive 2000/31/EC, which protects intermediaries from liability for user-generated content, and to freedom of expression online. It also set a worrying precedent that could change the dynamics of free speech on the Internet and intermediary liability. Furthermore, the decision was condemned for the Court’s fundamental lack of understanding of the role of intermediaries.
Magyar Tartalomszolgáltatók Egyesülete (“MTE”) and Index.hu Zrt (“Index”) v. Hungary (2016)[41]
After the controversial Delfi judgment, which was considered by many a setback to online free speech and the liability of intermediaries with respect to third-party generated content, the European Court of Human Rights delivered another landmark judgment, this time ruling the other way.
The applicants, MTE and Index, are a Hungarian self-regulatory body of Internet content providers and a news website, respectively. The organizations had featured an opinion piece on the unethical business practices of a real estate company, which garnered a number of resentful comments from readers. In response, the real estate company sued MTE and Index for infringing its right to a good reputation. The Hungarian courts declined to apply the safe harbour principles under Directive 2000/31/EC, stating that they apply only to commercial transactions of an electronic nature, i.e., purchases made online. In their view, the comments were made in a personal capacity and fell outside the ambit of economic or professional undertakings, and hence did not qualify for safe harbour protection.
The matter was brought before the European Court of Human Rights (ECHR). In a 2016 ruling considered an enormous step forward for the protection of intermediaries from liability and for online free speech, the Court held that requiring intermediaries to regulate content posted on their platforms “amounts to requiring excessive and impracticable forethought capable of undermining freedom of the right to impart information on the Internet.”[42] The Court also declared that the rulings of the Hungarian courts were contrary to Article 10 of the Convention for the Protection of Human Rights and Fundamental Freedoms.[43]
Right to Be Forgotten in the EU
The GDPR came into force on May 25, 2018, repealing the 1995 Data Protection Directive. It is meant to harmonize data privacy laws across Europe, protect data privacy of EU citizens and provide a comprehensive data privacy framework for organizations that collect and process data.[44]
Article 17 of the GDPR provides for the Right to Erasure, or the Right to be Forgotten (RTBF). This is a development from the Data Protection Directive (Directive 95/46/EC), which made no mention of the term, although it was implicit in Articles 12 and 14. The grounds under Article 17 of the GDPR are detailed and broader than those provided in the 1995 Data Protection Directive. The data subject has the right to demand erasure of information concerning her in the following cases:
•the personal data is no longer necessary for the purposes for which it was collected or processed;
•(s)he withdraws consent;
•the data has been unlawfully processed;
•the data subject objects on grounds under Article 21(1) or Article 21(2) of the GDPR;[45]
•national laws require erasure of the data; and
•the data was provided in relation to information society services offered to a child under Article 8(1).[46]
The Article also provides for situations in which the Right to be Forgotten will not be applicable. The grounds are:
•exercise of the right of freedom of expression and information;
•public interest and public health;
•when the processing is a legal obligation;
•for archiving purposes with respect to public interest, scientific, historical research, or statistical purposes; and
•exercise or defence of legal claims.
However, the RTBF under the GDPR is plagued by several problems, namely:
•Disproportionate Incentives: The infrastructure in place for the Right to be Forgotten is heavily tilted towards the right to privacy and not towards informational rights and freedom of speech and expression. It provides unbalanced incentives to the Controller, causing them to over-comply and favour delisting in order to protect themselves. Article 83(5) provides for fines as high as EUR 20,000,000 or, in the case of an undertaking, up to 4% of the total worldwide annual turnover of the preceding financial year, whichever is higher (a worked sketch of this ceiling appears after this list). The Google Transparency Report, which provides anonymized data on Right to be Forgotten requests received since May 29, 2014, states that 44.2% of all URLs evaluated for removal had been delisted as of February 2019.[47]
•Procedural Problems: According to both the Google Spain ruling and the GDPR, search engines are the initial adjudicators before whom data subjects file RTBF requests. This is similar to the intermediary liability takedown procedure, and similar difficulties arise here: these questions involve a delicate balancing of rights, and private companies should not be the entities making the determination. Publishers generally do not have the right to approach courts under the GDPR regime. This leads to a clear tilt in the system towards the data subject’s privacy rights rather than the freedom of speech and expression of the content writer or publisher.[48]
•Hosting Platforms as Controllers: While it is settled that search engines are Controllers, there is a lack of clarity on whether hosting platforms have RTBF obligations with respect to user content. This has not been resolved by the 2016 Resolution. It is probable that hosting platforms will process Right to be Forgotten requests to avoid liability and the risk of being included in the definition of Controller, with all the obligations that come along with it.
•Applicability of the E-commerce Directive[49] to Intermediaries: Article 2(4) of the GDPR states that the GDPR applies without prejudice to the 2000 E-commerce Directive, in particular Articles 12 to 15, which pertain to intermediary liability. Intermediaries often face dual liability under both data protection laws and intermediary liability laws where the potential for such overlap exists.
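As a worked illustration of the Article 83(5) ceiling mentioned in the first point above, the short Python sketch below computes the higher of the two limbs. The turnover figures are purely illustrative, not drawn from any real case; the point is that for large undertakings the 4% limb quickly dwarfs the flat EUR 20 million cap, which helps explain the incentive to over-comply.

```python
# Article 83(5) GDPR: up to EUR 20,000,000 or, for an undertaking,
# up to 4% of total worldwide annual turnover of the preceding
# financial year, whichever is higher.

def max_gdpr_fine(worldwide_annual_turnover_eur: float) -> float:
    """Return the Article 83(5) upper limit for an undertaking."""
    return max(20_000_000.0, 0.04 * worldwide_annual_turnover_eur)

# A smaller undertaking: 4% of turnover (EUR 4m) is below the flat cap,
# so EUR 20 million remains the ceiling.
print(max_gdpr_fine(100_000_000))     # 20000000.0

# A large platform with EUR 50 billion turnover: the 4% limb governs,
# giving a ceiling of EUR 2 billion.
print(max_gdpr_fine(50_000_000_000))  # 2000000000.0
```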
EU cases on Right to Be Forgotten
Google v. Spain[50] (2014)
The landmark case of Google v. Spain before the Court of Justice of the European Union read the Right to be Forgotten into Articles 12 and 14 of the Data Protection Directive, specifically with respect to the delisting of search results by search engines, and laid down several important principles in this regard. The complainant, Mr. Costeja Gonzalez, filed a case against Google for showing search results related to the auction of his property for the recovery of social security debts, which had taken place ten years earlier and had been published in the Spanish newspaper La Vanguardia. He wanted the search engine to delist these links as the information was no longer relevant and harmed his reputation.
The following questions arose during the proceedings of the case:
(1) Whether search engines are ‘Processors/Controllers’ of data?
Google stated that it is neither the Processor nor the Controller of data: not the Processor, as it does not discriminate between personal data and general data while undertaking its activities, and not the Controller, as it does not exercise any control over the data.[51] The Court, however, rejected this reasoning. Google was held to collect, record, retrieve, organize, store, disclose and make data available to the public, which falls within the definition of processing. The fact that the data is already published and not altered makes no difference.[52] The Court also held that because the search engine exercises control and determines the purpose and means of the activities it undertakes during processing, it is the Controller with respect to those activities, and cannot be excluded merely because it exercises no control over the personal data on the websites of third parties. The Court also emphasized that entering a person’s name into a search engine and obtaining all information pertaining to that person would enable profiling of that individual.[53] It was held to be irrelevant that publishers possess the means to block search engines from accessing their data; the duty on search engines is separate from that of publishers of data.[54]
(2) What are the duties on the search engine operator as per the 1995 Data Protection Directive?
Google stated that, as per the principle of proportionality, the publishers of the websites must take the call on whether information should be erased, as they are in the best position to make this determination and take further action for the removal of such information. Google further contended that its fundamental right to free speech and expression, along with that of the publisher, would be negatively affected if it were asked to delist such links. Additionally, the informational rights of Internet users would also be under threat.
The Court once again emphasized the role of search engines in the profiling of data subjects and the threat this poses to the right to privacy of individuals. The Court further explained that the processing of data cannot be justified solely by the economic interests of the search engine; the rights of other Internet users are also to be considered. The rights of the data subject and those of other Internet users must be balanced by considering factors such as the nature of the information, the sensitivity of the data in the data subject’s life, the role of the data subject in public life and the public interest.[55] The Court also noted that because of the ease of replication of data on the Internet, data may spread to websites over which the court has no jurisdiction. It may therefore not be an effective remedy to mandate parallel erasure of the data by the publisher, or to require erasure of the data from the publisher’s website first. There may also be situations where the data subject has the Right to be Forgotten against the search engine but not against the publisher (e.g., if the data is processed solely for journalistic purposes[56]).
(3) Scope of data subjects’ rights under the Data Protection Directive
The question referred to the Court was whether the data subject can exercise his Right to be Forgotten on the grounds that the data is prejudicial, or that he wishes the data to be deleted after a reasonable time.
Google submitted that an individual should be allowed to exercise the Right to be Forgotten only where the processing violates the Data Protection Directive or on compelling legitimate grounds particular to the data subject’s situation.
The Court held that data that was lawfully collected initially may, in the course of time, become irrelevant, inaccurate, inadequate or excessive with respect to the purpose for which it was collected.[57] The Court also stated that it is not necessary that the data sought to be erased be prejudicial to the data subject.[58]
Google v. Equustek[59] (2017)
In 2011, Equustek Solutions Inc., a Canadian company, filed a lawsuit against its distributor, Datalink Technologies, claiming that Datalink illegally obtained Equustek’s trade secrets and other confidential information. Thereafter, Datalink allegedly began to pass off Equustek’s products as its own by re-labelling them, and also started selling competing products using Equustek’s trade secrets. In response, Equustek procured several interlocutory injunctions against Datalink. However, Datalink disregarded the orders, moved its operations to another jurisdiction and continued its business.
In 2012, Equustek requested Google to de-index Datalink’s websites from appearing on Google’s search results. As a result, Google voluntarily blocked more than three hundred web pages from Google Canada but refused to do the same on an international scale.
The matter came up before the British Columbia Supreme Court, which ruled that Google had to remove all of Datalink’s web domains from its global search index. This was essentially a global takedown order. Google appealed the order to the Supreme Court of Canada, contending that it was against the right to freedom of speech and expression. In a landmark 7-2 ruling, the Supreme Court upheld the lower court’s worldwide takedown order requiring Google to delist Datalink’s websites and domains from its global search index.
The ruling has received widespread criticism from various civil rights organizations and Internet advocates for violating the free speech rights of Internet users. It also raised the question of whether a country can enforce its laws in other countries to limit speech and access to information.
Google, Inc v. Commission nationale de l’informatique et des libertés (CNIL)[60] (2018)
Google was once again involved in a long legal wrangle, this time with the French data protection authority, the Commission nationale de l’informatique et des libertés, commonly referred to as CNIL.
In this case, CNIL had ordered Google to delist certain items from its search results. Google complied with the order and delisted the concerned articles from its domains in the European Union (google.fr, google.de, etc.). The delisted results, however, were still available on the “.com” and other non-European extensions. Subsequently, in May 2015, a formal injunction was issued against Google by the CNIL chair, ordering the search engine to extend the delisting to all “Google Search” extensions within a period of fifteen days.[61] On failing to comply with the injunction, Google was asked to pay a fine of EUR 10,000.
Google appealed the order before France’s highest administrative court, the Conseil d’État, contending that a right to censor web results globally would seriously impair freedom of speech and expression and the right to access information. It was also argued that French authorities have no right to enforce their order worldwide, and that doing so would set a dangerous precedent for other countries.
The French court referred the case to Europe’s highest court, the Court of Justice of the European Union (CJEU), for answers to certain legal questions and a preliminary ruling before deciding the case itself. Arguments were heard in September 2018 and judgment is awaited.
The Court published the Advocate General’s opinion in January, which stated that de-referencing search results on a global basis would undermine freedom of speech and expression:[62]
“(T)here is a danger that the Union will prevent people in third countries from accessing information. If an authority within the Union could order a global dereferencing, a fatal signal would be sent to third countries, which could also order a dereferencing under their own laws. … There is a real risk of reducing freedom of expression to the lowest common denominator across Europe and the world.”
Google v. CNIL highlights the incompatibility between principles of territorial jurisdiction and global data flows.[63]
FAKE NEWS AND SOCIAL MEDIA: WHO IS RESPONSIBLE?
Social media and messaging platforms provide the perfect conditions for the creation of cascades of information of all kinds. The power of these platforms has been leveraged to create social movements like the Black Lives Matter, MeToo and TimesUp campaigns. This power has also been exploited to sow discord and manipulate elections.
The emergence of social media saw shifts in the media ecosystem, with Facebook and Twitter becoming important tools for relaying information to the public. Anyone with a smartphone can be a broadcaster of information. Political parties are investing millions of dollars in the research, development and implementation of psychological operations to create their own computational propaganda campaigns.[64] The use of automated bots to spread disinformation with the objective of moulding public opinion is a growing threat to the public sphere in countries around the world.[65] This raises new concerns about the vulnerability of democratic societies to fake news and the public’s limited ability to contain it.[66]
Fake news[67] is not a recent phenomenon. The issue of disinformation has existed since time immemorial in both traditional print and broadcast media. The advent of the Internet in the 1990s opened the doors to a vast repository of information, and the Internet’s unimaginable growth over a few years also made it a host for a plethora of false and unwanted information. The World Economic Forum warned in 2013 that “digital wildfires”, i.e., unreliable information going viral, would be one of the biggest threats faced by society and democracy: “The global risk of massive digital misinformation sits at the centre of a constellation of technological and geopolitical risks ranging from terrorism to cyber attacks and the failure of global governance”.[68]
In the 2016 United States presidential election[69] and the 2018 Brazilian presidential election,[70] the power of social media and messaging platforms was leveraged to sway the vote in favour of particular candidates. These incidents sparked a global debate on tackling fake news and on whether tech platforms are complicit in the problem. In the wake of these controversial elections, there has been mounting pressure on online platforms such as Facebook and Twitter to actively regulate their platforms.
Governments around the world have been grappling with the question of how existing laws that limit free speech for reasons such as incitement to violence can be applied in the digital sphere.[71] Increasing calls for the platforms to take a more proactive role in weeding out disinformation and hate speech have raised fears that they might become the ultimate arbiters of what constitutes unacceptable content.[72]
India has been reeling from the consequences of fake news circulating on social media and messaging platforms, especially WhatsApp, which has more than 200 million active Indian users.[73] Rumours related to the possession of beef and to child kidnapping have led to the deaths of thirty-three innocent people.[74] The BBC conducted research in India on the causes and motivations behind the viral dissemination of fake news. The study found that a rising tide of nationalism, along with distrust of mainstream media, has pushed people to spread information from alternative sources without attempting to verify it, in the belief that they were helping to spread a real story.[75]
Following the spate of mob lynchings, the Indian Government asked WhatsApp to devise ways to trace the origin of fake messages circulated on its platform.[76] The government cautioned WhatsApp that it cannot evade responsibility if its services are being used to spread disinformation and will be treated as an “abettor” for failing to take any action.[77]
In India, as mentioned earlier, the Draft Information Technology [Intermediaries Guidelines (Amendment) Rules], 2018 (“Draft Rules”) have been proposed by the government to fight ‘fake news’, terrorist content and obscene content, among others. They place obligations on intermediaries to proactively monitor content uploaded on their platforms and to enable traceability to determine the originator of information.
The Election Commission of India announced that all candidates contesting the 2019 general elections will have to submit details of their social media accounts, and that all political advertisements on social media will require prior certification.[78] All expenditure on social media campaigning is to be included in candidates’ election expenditure disclosures.[79]
The growing pressure worldwide on intermediaries to implement gatekeeping policies led Germany to pass the “Netzwerkdurchsetzungsgesetz” (NetzDG), also known as the Network Enforcement Act, which requires social networks with more than 2 million users to take down content that is “obviously illegal” within 24 hours of being notified.[80] The law imposes fines of up to EUR 50 million on social media companies that fail to remove unlawful content from their websites.
In its latest transparency report[81] on removals under the NetzDG, Google stated that it received 465,784 requests in 2018 from users and reporting agencies to remove undesirable content from YouTube. The reasons provided for the complaints included privacy, defamation, hate speech, political extremism, sexual content, terrorism-related content and unconstitutional content, amongst others. In response to the removal requests, 112,941 items were removed by Google. Facebook, in its NetzDG transparency report, mentioned that it received 1,386 removal requests identifying a total of 2,752 pieces of content between January and December 2018.[82]
In 2018, the French Parliament passed a controversial law that empowers judges to order the immediate removal of “fake news” during election campaigns. The law gives the French national broadcasting agency the authority to suspend television channels “controlled by a foreign state or under the influence” of that state if they “deliberately disseminate false information likely to affect the sincerity of the ballot.”[83]
The European Commission and four major social media platforms, Facebook, Twitter, YouTube and Microsoft, announced a Code of Conduct on countering illegal online hate speech.[84] The Code met with opposition from a number of rights groups, like Index on Censorship[85] and the EFF, for being in violation of the fundamental right to freedom of expression.[86] The Code of Conduct is part of a trend in which states pressure private corporations to censor content without any independent adjudication of its legality.[87]
After the Cambridge Analytica debacle, the Honest Ads Act was introduced in the United States Senate, which would hold social media and other online platforms to the same political advertising transparency requirements that bind cable and broadcast systems.[88] The bill would require companies to disclose how advertisements were targeted as well as how much they cost.[89]
While governments are struggling to frame regulations that address the significant challenge of combating the rising instances of fake news without jeopardising the right to free expression, difficult questions arise: To what extent should limits on free speech online be imposed so that the utility of the Internet is not compromised? Does today’s digital capitalism make it profitable for tech companies to circulate click-worthy narratives?[90] Would regulating intermediaries, without addressing the deeper structural issues of lack of user education and media literacy, be enough to solve the problem?
In 2017, in a ‘Joint Declaration on Freedom of Expression and “Fake News”, Disinformation and Propaganda’, the United Nations Special Rapporteur on Freedom of Opinion and Expression, David Kaye, stated that “General prohibitions on the dissemination of information based on vague and ambiguous ideas, including ‘false news’ or ‘non-objective information’, are incompatible with international standards for restrictions on freedom of expression, and should be abolished.”[91]
The UK House of Commons Digital, Culture, Media and Sport Committee, in its final report on disinformation and fake news, recommended that digital literacy be the fourth pillar of education, alongside reading, writing and maths. An educational levy could be raised on social media companies to finance a comprehensive educational framework, developed by charities, NGOs and the regulators themselves, and based online.[92]
It was also recommended that social media companies be more transparent about their sites and how they work. Instead of hiding behind complex agreements, they should inform users about how their sites operate, including curation functions and the way in which algorithms are used to prioritise certain stories, news and videos depending on each user’s profile.[93] The Committee advised the enactment of a compulsory code of ethics, overseen by an independent regulator with statutory powers to monitor tech companies.[94] On advertisements related to political campaigning, the Committee was of the view that the government should define ‘digital campaigning’, including online political advertising, and that paid political advertising should be publicly accessible, clear and easily recognisable.[95]
In January 2018, the European Commission set up a high-level group of experts to advise on policy initiatives to counter fake news and disinformation spread online.[96] The group recommended enhancing transparency, promoting media and information literacy, developing tools for empowering users and journalists, safeguarding the diversity and sustainability of the news media ecosystem, and promoting continued research on the impact of disinformation.[97]
Oliver Sylvain, writing in the Connecticut Law Review, proposes that courts should scrutinize the manner in which each website elicits user content and the extent to which it exploits that data in secondary or ancillary markets. On that basis, the level of protection under the intermediary liability legal regime should be decided, depending on whether a particular provider qualifies as an active or passive intermediary.[98]
Governments should enact a regulatory framework that ensures the accountability and transparency of digital platforms without curbing free speech and innovation. The answer to bad speech should not be censorship. Such a regulatory framework should be developed through multi-stakeholder consultations involving the government, the legal community, tech companies, civil society and regular users of social media.
Multi-stakeholder Perspectives on Combating Fake News
SFLC.in conducted a series of discussions on fake news and intermediary liability across India in early 2019, including in New Delhi (January 11, January 18 and February 13), Bengaluru (January 15), Mumbai (January 16), Kochi (January 30) and Hyderabad (February 12).[99]
Some of the key findings from the discussions are:
· The definition of ‘fake news’ is vague and ambiguous and has to be deconstructed. There is no real agreement as to what the expression means. It is being used in an elastic manner and brandished as an all-purpose slogan to describe everything from errors to deliberate falsehoods. World leaders have been seen weaponizing the term against news organizations and journalists whose coverage they find disagreeable. It was agreed that the best way to understand the term is to deconstruct it into three concepts: misinformation, disinformation and mal-information. Misinformation was construed as the circulation of incorrect information without any bad intention; mal-information as the spread of real information to cause harm to a person, organization or society; and disinformation as false narratives deliberately spread to inflict harm.
· Users’ information diet is increasingly shaped by algorithms on social media platforms, and there is a real problem of filter bubbles on these platforms. It is therefore important to think about algorithmic transparency and algorithmic accountability.
· Regarding the deployment of artificial intelligence, industry experts dealing with AI on a regular basis stated that AI was nowhere near ready for the task of solving human and political problems.
· Fact-checking must be the foundation of journalism. There are very few independent fact checkers in India. After verifying the facts of a particular story, the next step must be to put the fact-checked story back on the platform it emanated from and make it as viral as the fake news. It has to be packaged in a manner similar to the fake news, with catchy/clickbait headlines. The government must encourage and empower independent journalism, which is the backbone of a democratic setup.
· Vernacular media sources are witnessing higher viewership compared to English media. Navbharat Times, one of the most widely circulated Hindi newspapers, is moving towards having the highest number of online subscribers. However, fact-checking is limited to English media only, and the advertisement-based business models of online media groups offer little incentive for fact-checking.
· Social media giants should scale up their efforts to fact-check and down-rank dubious information proliferating on their platforms by collaborating with third-party fact checkers.
· There is a problem in the education system. Apart from digital literacy, there is a need to teach critical thinking skills to young people. A culture of questioning and skepticism should be encouraged.
· Decentralization of technology is important to break information monopolies.
· Other suggested solutions included providing incentives to startups that do fact-checking and giving tax breaks to small organizations that bring truth back as an important value in the digital realm.
· The proposed Draft Rules can act like a minesweeper and have the potential to be misused. Regulation should be such that it aids in the growth of a free Internet instead of restricting it.
While digital and media literacy is indispensable in ensuring that consumers of information on social media do not fall prey to disinformation, we cannot dismiss the role that tech companies should play in addressing the issue by ramping up their efforts to keep their platforms clean. Platforms should expand their endeavours to work jointly with third-party fact checkers, and invest in educating users and developing tools to help them distinguish between news that comes from a reliable source and stories coming from outlets regarded as unreliable. WhatsApp recently limited the forwarding of messages to five chats at a time to contain the virality of messages on its platform.[100] The messaging platform launched TV and radio campaigns to spread awareness[101] and partnered with local NGOs to educate users about the need to verify information.[102]
Facebook is working with its community and third-party fact-checking organizations to identify false/fake news and limit its spread. Ahead of the 2019 general elections, Facebook has partnered with seven third-party fact-checkers, namely BOOM Live, AFP, India Today Group, Vishvas News, Factly, Newsmobile and Fact Crescendo, covering six languages, to review and rate the correctness of stories on Facebook.[103] The platform is also in the process of setting up an operations centre in Delhi that will be responsible for monitoring election content 24x7. To achieve this, the centre will coordinate with global Facebook offices located at Menlo Park (California), Dublin and Singapore.[104]
Facebook has also devised new features to bring more transparency to advertisements on its platform in India.[105] The platform will allow its users to view the publishers and sponsors of the advertisements they are shown.[106] It has rolled out a searchable ad library for its viewers to analyze political ads. The information provided by this ad library includes the range of impressions, the expenditure on the ads and the demographics of who saw them.[107]
Any efforts to label and identify questionable stories or sources should be consistent across platforms.[108] Voters should be able to identify untrustworthy content across platforms and trust that all platforms use the same standards to classify it.[109]
Transparency about algorithms, content moderation techniques and political advertising will go a long way in countering the problem.[110] Large social media platforms are generally founded on the economic model of surveillance capitalism, rooted in delivering advertisements based on data collection.[111] Decentralized, user-owned, free and open-source platforms that do not rely on widespread data collection can potentially limit the spread of fake news.[112]
While it is short-sighted to think that laws can completely fix the problem, it is nevertheless necessary to have a discussion about a regulatory framework that ensures the accountability and transparency of digital platforms without curbing free speech and innovation. The answer to bad speech should not be censorship. Such a regulatory framework should be developed through multi-stakeholder consultations that involve the government, the legal community, tech companies, civil society and regular users of social media.
The objective of the Draft Information Technology [Intermediaries Guidelines (Amendment) Rules], 2018 (“the Draft Rules”) seemed to be to counter disinformation and fake news on social media and messaging platforms, but that purpose would not have been served by such arbitrary and sweeping provisions. The Draft Rules appeared to violate the fundamental rights to free speech and privacy as well as the dictum of the Supreme Court’s judgment in Shreya Singhal v Union of India. While the transparency and accountability of platforms is the need of the hour, the government should adopt less invasive and proportionate means of regulating the Internet.
The Intermediary Guidelines 2021 at a glance[113]
Salient Features
Guidelines Related to Social Media to Be Administered by Ministry of Electronics and IT:
Due Diligence To Be Followed By Intermediaries: The Rules prescribe due diligence that must be followed by intermediaries, including social media intermediaries. If an intermediary fails to observe this due diligence, the safe harbour provisions will not apply to it.
Grievance Redressal Mechanism: The Rules seek to empower users by mandating intermediaries, including social media intermediaries, to establish a grievance redressal mechanism for receiving and resolving complaints from users or victims. Intermediaries shall appoint a Grievance Officer to deal with such complaints and share the name and contact details of the officer. The Grievance Officer shall acknowledge a complaint within twenty-four hours and resolve it within fifteen days of its receipt.
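As a worked example of these timelines, the following sketch (a hypothetical helper, not part of the Rules) computes the acknowledgment and resolution deadlines from the moment a complaint is received.

    # Hypothetical illustration of the Rules' grievance timelines:
    # acknowledge within 24 hours, resolve within 15 days of receipt.
    from datetime import datetime, timedelta

    def grievance_deadlines(received_at: datetime) -> dict:
        return {
            "acknowledge_by": received_at + timedelta(hours=24),
            "resolve_by": received_at + timedelta(days=15),
        }

    # Example: a complaint received on 1 March 2021 at 10:00
    deadlines = grievance_deadlines(datetime(2021, 3, 1, 10, 0))
    print(deadlines["acknowledge_by"])  # 2021-03-02 10:00:00
    print(deadlines["resolve_by"])      # 2021-03-16 10:00:00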
Ensuring Online Safety and Dignity of Users, Especially Women Users: Intermediaries shall remove or disable access, within 24 hours of receipt of a complaint, to content that exposes the private areas of individuals, shows such individuals in full or partial nudity or in a sexual act, or is in the nature of impersonation, including morphed images. Such a complaint can be filed either by the individual concerned or by any other person on his or her behalf.
Two Categories of Social Media Intermediaries: To encourage innovation and enable the growth of new social media intermediaries without subjecting smaller platforms to significant compliance requirements, the Rules distinguish between social media intermediaries and significant social media intermediaries. The distinction is based on the number of users on the platform: the Government is empowered to notify the threshold of user base that separates the two categories. The Rules require significant social media intermediaries to follow certain additional due diligence.
Additional Due Diligence to Be Followed by Significant Social Media Intermediary:
Appoint a Chief Compliance Officer who shall be responsible for ensuring compliance with the Act and the Rules. Such a person shall be a resident in India.
Appoint a Nodal Contact Person for 24x7 coordination with law enforcement agencies. Such a person shall be a resident in India.
Appoint a Resident Grievance Officer who shall perform the functions mentioned under Grievance Redressal Mechanism. Such a person shall be a resident in India.
Publish a monthly compliance report mentioning the details of complaints received and action taken on the complaints, as well as details of content removed proactively by the significant social media intermediary.
Significant social media intermediaries providing services primarily in the nature of messaging shall enable identification of the first originator of information, required only for the purposes of prevention, detection, investigation, prosecution or punishment of an offence related to the sovereignty and integrity of India, the security of the State, friendly relations with foreign States, or public order, or of incitement to an offence relating to the above, or in relation to rape, sexually explicit material or child sexual abuse material punishable with imprisonment for a term of not less than five years. The intermediary shall not be required to disclose the contents of any message or any other information related to the first originator. (A simplified illustration of originator tracing follows this list.)
A significant social media intermediary shall have a physical contact address in India published on its website or mobile app or both.
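The simplified sketch below illustrates the idea of first-originator identification on a toy forwarding chain. It is purely conceptual: the message identifiers and records are invented for illustration, and how (or whether) such tracing can be performed on an end-to-end encrypted service is a separate, contested technical question that the Rules' text does not resolve.

    # Simplified, hypothetical illustration of "first originator" tracing:
    # given forwarding records, walk back the chain to the first sender.
    # All identifiers below are invented; this is not a platform's actual design.
    forwards = {
        # message_id -> (sender, message_id it was forwarded from, or None)
        "m3": ("user_c", "m2"),
        "m2": ("user_b", "m1"),
        "m1": ("user_a", None),  # the original message
    }

    def first_originator(message_id: str) -> str:
        sender, parent = forwards[message_id]
        while parent is not None:
            sender, parent = forwards[parent]
        return sender

    print(first_originator("m3"))  # user_a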
Voluntary User Verification Mechanism: Users who wish to verify their accounts voluntarily shall be provided an appropriate mechanism to do so, along with a demonstrable and visible mark of verification.
Giving Users an Opportunity to Be Heard: Where a significant social media intermediary removes or disables access to any information of its own accord, prior intimation shall be communicated to the user who shared that information, with a notice explaining the grounds and reasons for the action. Users must be given an adequate and reasonable opportunity to dispute the action taken by the intermediary.
Removal of Unlawful Information: An intermediary, upon receiving actual knowledge in the form of a court order, or upon being notified by the Appropriate Government or its agencies through an authorized officer, shall not host or publish any information prohibited under any law in relation to the interest of the sovereignty and integrity of India, public order, friendly relations with foreign countries, etc.
The Rules come into effect from the date of their publication in the gazette, except for the additional due diligence for significant social media intermediaries, which comes into effect three months after publication of the Rules.
Digital Media Ethics Code Relating to Digital Media and OTT Platforms to Be Administered by Ministry of Information and Broadcasting:
There have been widespread concerns about issues relating to digital content on both digital media and OTT platforms. Civil society, film makers and political leaders, including Chief Ministers, as well as trade organizations and associations, have voiced their concerns and highlighted the imperative need for an appropriate institutional mechanism. The Government also received many complaints from civil society and parents requesting intervention. There have been many court proceedings in the Supreme Court and High Courts, where the courts also urged the Government to take suitable measures.
Since the matter relates to digital platforms, a conscious decision was taken that issues relating to digital media, OTT and other creative programmes on the Internet would be administered by the Ministry of Information and Broadcasting, while the overall architecture would remain under the Information Technology Act, which governs digital platforms.
Consultations:
The Ministry of Information and Broadcasting held consultations in Delhi, Mumbai and Chennai over the last one and a half years, wherein OTT players were urged to develop a self-regulatory mechanism. The Government also studied the models of other countries, including Singapore, Australia, the EU and the UK, and gathered that most of them either have an institutional mechanism to regulate digital content or are in the process of setting one up.
The Rules establish a soft-touch self-regulatory architecture, with a Code of Ethics and a three-tier grievance redressal mechanism for news publishers, OTT platforms and digital media.
Notified under Section 87 of the Information Technology Act, these Rules empower the Ministry of Information and Broadcasting to implement Part III of the Rules, which prescribes the following:
Code of Ethics for online news, OTT platforms and digital media: This Code of Ethics prescribes the guidelines to be followed by OTT platforms and by online news and digital media entities.
Self-Classification of Content: OTT platforms, referred to in the Rules as publishers of online curated content, would self-classify content into five age-based categories: U (Universal), U/A 7+, U/A 13+, U/A 16+ and A (Adult). Platforms would be required to implement parental locks for content classified as U/A 13+ or higher, and reliable age verification mechanisms for content classified as “A”. The publisher of online curated content shall prominently display the classification rating specific to each content or programme, together with a content descriptor informing the user about the nature of the content and advising on viewer discretion (if applicable), at the beginning of every programme, enabling the user to make an informed decision before watching.
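Purely as an illustration of how these ratings map onto the access-control obligations described above, the sketch below encodes the five categories and the controls each triggers; the function and data structure are hypothetical, not drawn from the Rules.

    # Hypothetical mapping of the Rules' five age-based ratings to the
    # access controls they trigger; not an actual platform implementation.
    RATINGS = ["U", "U/A 7+", "U/A 13+", "U/A 16+", "A"]

    def required_controls(rating: str) -> dict:
        if rating not in RATINGS:
            raise ValueError(f"unknown rating: {rating}")
        return {
            # Parental locks apply to content classified U/A 13+ or higher.
            "parental_lock": RATINGS.index(rating) >= RATINGS.index("U/A 13+"),
            # Reliable age verification applies to "A"-rated content.
            "age_verification": rating == "A",
        }

    print(required_controls("U/A 13+"))  # {'parental_lock': True, 'age_verification': False}
    print(required_controls("A"))        # {'parental_lock': True, 'age_verification': True}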
Publishers of news on digital media would be required to observe the Norms of Journalistic Conduct of the Press Council of India and the Programme Code under the Cable Television Networks (Regulation) Act, thereby providing a level playing field between offline (print, TV) and digital media.
A three-tier grievance redressal mechanism has been established under the Rules, with different levels of self-regulation:
Level-I: Self-regulation by the publishers;
Level-II: Self-regulation by the self-regulating bodies of the publishers;
Level-III: Oversight mechanism.
Self-regulation by the Publisher: The publisher shall appoint a Grievance Redressal Officer based in India who shall be responsible for the redressal of grievances it receives. The officer shall take a decision on every grievance received within 15 days.
Self-Regulatory Body: There may be one or more self-regulatory bodies of publishers. Such a body shall be headed by a retired judge of the Supreme Court or a High Court, or an independent eminent person, and shall have not more than six members. Such a body will have to register with the Ministry of Information and Broadcasting. It will oversee the publishers’ adherence to the Code of Ethics and address grievances that have not been resolved by a publisher within 15 days.
Oversight Mechanism: The Ministry of Information and Broadcasting shall formulate an oversight mechanism. It shall publish a charter for self-regulating bodies, including Codes of Practices, and shall establish an Inter-Departmental Committee for hearing grievances.
[1] Manila Principles on Intermediary Liability, MANILA PRINCIPLES, https://www.manilaprinciples.org/, also accessible at Appendix [.]
[2] Ibid
[3] IP Effect - Distinguishing Actual Knowledge from Shreya Singhal, discussed previously
[4] The US Copyright Modernization Act 2015, also accessible at Appendix [.]
[5] Office of Consumer Affairs (OCA), Notice and Notice Regime, INNOVATION, SCIENCE AND ECONOMIC DEVELOPMENT CANADA (Mar 9, 2019, 1:48 PM), https://ic.gc.ca/eic/site/oca-bc.nsf/eng/ca02920.html, also accessible at Appendix [.]
[6] The South Korea Copyright Act 1957 § 103, also accessible at Appendix [.]
[7] Digital Millennium Copyright Act 1998 § 512(c), also accessible at Appendix [.]
[8] Christian Ahlert, Chris Marsden and Chester Yung, How Liberty Disappeared from Cyberspace: The Mystery Shopper Tests Internet Content Self Regulation, THE PROGRAMME IN COMPARATIVE MEDIA LAW AND POLICY, UNIVERSITY OF OXFORD, http://pcmlp.socleg.ox.ac.uk/wp-content/uploads/2014/12/liberty.pdf, also accessible at Appendix [.]
[9] Ibid
[10] U.S. Copyright Office, Section 512 Study, COPYRIGHT.GOV, https://www.copyright.gov/policy/section512/, also accessible at Appendix [.]
[11] Adam Holland, Chris Bavitz, Jeff Hermes, Andy Sellars, Ryan Budish, Michael Lambert and Nick Decoster, Berkman Center for Internet & Society at Harvard University, Intermediary Liability in the United States, PUBLIXPHERE, https://publixphere.net/i/noc/page/OI_Case_Study_Intermediary_Liability_in_the_United_States, also accessible at Appendix [.]
[12] [665 F. Supp. 2d 961], accessible at Appendix [.]
[13] Thomas Dart, Sheriff of Cook County v. Craigslist, Inc, 665 F. Supp. 2d 961, accessible at Appendix [.]
[14] [No. 07 Civ. 2103, 2010 WL 2532404 (S.D.N.Y. 2010)], accessible at Appendix [.]
[15] Viacom v. YouTube, ELECTRONIC FRONTIER FOUNDATION, https://www.eff.org/cases/viacom-v-youtube, also accessible at Appendix [.]
[16] Miguel Helft, Judge Sides With Google in Viacom Suit Over Videos, NEW YORK TIMES, http://www.nytimes.com/2010/06/24/technology/24google.html?_r=0, also accessible at Appendix [.]
[17] Jonathan Stempel, Google, Viacom settle landmark YouTube lawsuit, REUTERS, http://www.reuters.com/article/us-google-viacom-lawsuit-idUSBREA2H11220140318, also accessible at Appendix [.]
[18] 17-CV-932 (VEC), also accessible at Appendix [.]
[19] Andy Greenberg, Spoofed Grindr Accounts Turned One Man’s Life Into a ‘Living Hell’, WIRED, https://www.wired.com/2017/01/grinder-lawsuit-spoofed-accounts/, also accessible at Appendix [.]
[20] Trevor Cook, Online Intermediary Liability in the European Union, 17, JOURNAL OF INTELLECTUAL PROPERTY RIGHTS, 157-159 (2012), also accessible at Appendix [.]
[21] The History of the General Data Protection Regulation, EUROPEAN DATA PROTECTION SUPERVISOR, https://edps.europa.eu/data-protection/data-protection/legislation/history-general-data-protection-regulation_en
[22] A copy of the GDPR can be downloaded from https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32016R0679, also accessible at Appendix [.]
[23] Hayleigh Bosher, Keeping up with the Copyright Directive, IPKITTEN, https://ipkitten.blogspot.com/2019/02/keeping-up-with-copyright-directive.html, also accessible at Appendix [.]
[24] Proposal For A Directive Of The European Parliament And Of The Council On Copyright In The Digital Single Market, JULIA REDA, https://juliareda.eu/wp-content/uploads/2019/02/Copyright_Final_compromise.pdf, also accessible at Appendix [.]
[25] Ibid
[26] Matt Reynolds, What Is Article 13? The EU’s Divisive New Copyright Plan Explained, WIRED, https://www.wired.co.uk/article/what-is-article-13-article-11-european-directive-on-copyright-explained-meme-ban, also accessible at Appendix [.]
[27] Christiane Stuetzle and Patricia C. Ernst, European Union: The EU Copyright Directive Hits The Homestretch, MONDAQ (9 Mar, 2019, 2:26 PM), http://www.mondaq.com/unitedstates/x/786366/Copyright/The+EU+Copyright+Directive+Hits+The+Homestretch, also accessible at Appendix [.]
[28] This includes online intermediaries who store and give access to a large amount of copyright-protected content or other protected content uploaded by their users. It specifically excludes non-profit online encyclopedias, non-profit educational and scientific repositories, open-source software developing and sharing platforms, online marketplaces, B2B cloud services and cloud services for users. Kindly refer to Article 2(5) of the proposed copyright directive.
[29] Christiane Stuetzle and Patricia C. Ernst, European Union: The EU Copyright Directive Hits The Homestretch, MONDAQ, http://www.mondaq.com/unitedstates/x/786366/Copyright/The+EU+Copyright+Directive+Hits+The+Homestretch, also accessible at Appendix [.]
[30] Cory Doctorow, Artists Against Article 13: When Big Tech and Big Content Make a Meal of Creators, It Doesn’t Matter Who Gets the Bigger Piece, ELECTRONIC FRONTIER FOUNDATION (9 Mar, 2019, 2:26 PM), https://www.eff.org/deeplinks/2019/02/artists-against-article-13-when-big-tech-and-big-content-make-meal-creators-it, also accessible at Appendix [.]
[31] Ibid
[32] Ibid
[33] European Commission, Proposal For A Regulation Of The European Parliament And Of The Council On Preventing The Dissemination Of Terrorist Content Online, EUROPEAN COMMISSION (9 Mar, 2019, 2:35 PM), https://ec.europa.eu/commission/sites/beta-political/files/soteu2018-preventing-terrorist-content-online-regulation-640_en.pdf, also accessible at Appendix [.]
[34] Ibid
[35] Ibid
[36] Ibid
[37] Delfi v. Estonia, 64569/09, ECtHR (2015), also accessible at Appendix [.]
[38] Council of Europe, European Convention on Human Rights, EUROPEAN COURT OF HUMAN RIGHTS, https://www.echr.coe.int/Documents/Convention_ENG.pdf, also accessible at Appendix [.]
[39] Giancarlo Frosio, The European Court Of Human Rights Holds Delfi Liable For Anonymous Defamation, CENTRE FOR INTERNET AND SOCIETY, STANFORD LAW SCHOOL, http://cyberlaw.stanford.edu/blog/2013/10/european-court-human-rights-holds-delfiee-liable-anonymous-defamation, also accessible at Appendix [.]
[40] HUDOC - European Court of Human Rights, HUDOC (Feb 10, 2019, 6:25 PM), http://hudoc.echr.coe.int/eng?i=001-126635, also accessible at Appendix [.]
[41] Application no. 22947/13, also accessible at Appendix [.]
[42] Daphne Keller, New Intermediary Liability Cases from the European Court of Human Rights: What Will They Mean in the Real World?, STANFORD LAW SCHOOL CENTER FOR INTERNET AND SOCIETY, http://cyberlaw.stanford.edu/blog/2016/04/new-intermediary-liability-cases-european-court-human-rights-what-will-they-mean-real, also accessible at Appendix [.]
[43] Council of Europe, European Convention on Human Rights, EUROPEAN COURT OF HUMAN RIGHTS, https://www.echr.coe.int/Documents/Convention_ENG.pdf, also accessible at Appendix [.]
[44] The European Union General Data Protection Regulation, http://www.eugdpr.org/
[45] European Commission, Article 21 - Right to Object, EUROPEAN COMMISSION (9 Mar, 2019, 2:55 PM), http://ec.europa.eu/justice/data-protection/reform/files/regulation_oj_en.pdf, accessible at Appendix [.]
[46] European Commission, Article 8 - Conditions Applicable to Child’s Consent in Relation to Information Society Services, EUROPEAN COMMISSION (9 Mar, 2019, 2:58 PM), http://ec.europa.eu/justice/data-protection/reform/files/regulation_oj_en.pdf, accessible at Appendix [.]
[47] Search Removals under European Privacy Law, Google Transparency Report, GOOGLE, https://transparencyreport.google.com/eu-privacy/overview?hl=en, also accessible at Appendix [.]
[48] Daphne Keller, The Right Tools: Europe’s Intermediary Liability Laws and the 2016 General Data Protection Regulation, STANFORD LAW SCHOOL CENTER FOR INTERNET AND SOCIETY, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2914684, also accessible at Appendix [.]
[49] Electronic Commerce Directive 2000/31/EC, also accessible at Appendix [.]
[50] [C-131/12], also accessible at Appendix [.]
[51] Para 22 of the Google Spain v AEPD and Mario Costeja Gonzalez decision
[52] Ibid. at Paras 28, 29
[53] Ibid. at Para 37
[54] Ibid. at Paras 39, 40
[55] Ibid. at Para 81
[56] Ibid. at Paras 84, 85
[57] Ibid. at Para 93
[58] Ibid. at Para 99
[59] 2017 SCC 34, also accessible at Appendix [.]
[60] [C-507/17], also accessible at Appendix [.]
[61] Right to be delisted: the CNIL Restricted Committee imposes a €100,000 fine on Google, CNIL, https://www.cnil.fr/en/right-be-delisted-cnil-restricted-committee-imposes-eu100000-fine-google, also accessible at Appendix [.]
[62] ‘Right to be forgotten’ by Google should apply only in EU, says court opinion, THE GUARDIAN, https://www.theguardian.com/technology/2019/jan/10/right-to-be-forgotten-by-google-should-apply-only-in-eu-says-court
[63] Michele Finck, Google v CNIL: Defining the Territorial Scope of European Data Protection Law, THE GUARDIAN (Feb 7, 2019, 2:00 PM), https://www.theguardian.com/technology/2019/jan/10/right-to-be-forgotten-by-google-should-apply-only-in-eu-says-court
[64] Philip N. Howard, Samantha Bradshaw, The Global Organization of Social Media Disinformation Campaigns, COLUMBIA JOURNAL OF INTERNATIONAL AFFAIRS, https://jia.sipa.columbia.edu/global-organization-social-media-disinformation-campaigns, also accessible at Appendix [.]
[65] Dean Jackson, How Disinformation Impacts Politics and Publics, NATIONAL ENDOWMENT FOR DEMOCRACY, https://www.ned.org/issue-brief-how-disinformation-impacts-politics-and-publics/, also accessible at Appendix [.]
[66] David Lazer, Matthew Baum, Nir Grinberg, Lisa Friedland, Kenneth Joseph, Will Hobbs, Carolina Mattsson, Combating Fake News: An Agenda for Research and Action, HARVARD KENNEDY SCHOOL, SHORENSTEIN CENTER, https://shorensteincenter.org/combating-fake-news-agenda-for-research/, also accessible at Appendix [.]
[67] The term “Fake News” refers to news that is deliberately and verifiably false and created with the intention to mislead readers.
[68] Lee Howell, Global Risks, Insight Report, WEF, 23 (2013), also accessible at Appendix [.]
[69] Richard Gunther, Paul A. Beck, Erik C. Nisbet, Fake News Did Have a Significant Impact on the Vote in the 2016 Election, OHIO STATE UNIVERSITY, https://cpb-us-w2.wpmucdn.com/u.osu.edu/dist/d/12059/files/2015/03/Fake-News-Piece-for-The-Conversation-with-methodological-appendix-11d0ni9.pdf, also accessible at Appendix [.]
[70] Anthony Boadle, Explainer: Facebook’s WhatsApp flooded with fake news in Brazil election, REUTERS, https://in.reuters.com/article/brazil-election-whatsapp/explainer-facebooks-whatsapp-flooded-with-fake-news-in-brazil-election-idINKCN1MV04J, also accessible at Appendix [.]
[71] Lee Howell, Global Risks, Insight Report, WEF, 23 (2013), also accessible at Appendix [.]
[72] Platform responsibility, The London School of Economics and Political Science, Department of Media and Communications, LSE, http://www.lse.ac.uk/media-and-communications/truth-trust-and-technology-commission/platform-responsibility, also accessible at Appendix [.]
[73] WhatsApp now has 1.5 billion monthly active users, 200 million users in India, FINANCIAL EXPRESS, https://www.financialexpress.com/industry/technology/whatsapp-now-has-1-5-billion-monthly-active-users-200-million-users-in-india/1044468/, also accessible at Appendix [.]
[74] Alison Saldanha, Pranav Rajput, Jay Hazare, Child-Lifting Rumours: 33 Killed In 69 Mob Attacks Since Jan 2017. Before That Only 1 Attack In 2012, INDIASPEND, https://www.indiaspend.com/child-lifting-rumours-33-killed-in-69-mob-attacks-since-jan-2017-before-that-only-1-attack-in-2012-2012/, also accessible at Appendix [.]
[75] Santanu Chakrabarti, Lucile Stengel, Sapna Solanki, Duty, Identity, Credibility: Fake News and the Ordinary Citizen in India, BBC, http://downloads.bbc.co.uk/mediacentre/duty-identity-credibility.pdf, also accessible at Appendix [.]
[76] PTI, Mob Lynchings: WhatsApp At Risk Of Being Labelled “Abettor”, BLOOMBERG QUINT, https://www.bloombergquint.com/law-and-policy/mob-lynchings-whatsapp-at-risk-of-being-labelled-abettor#gs.UdkfqXqo, also accessible at Appendix [.]
[77] Ibid
[78] Scroll Staff, Lok Sabha polls: All political ads on social media will need prior certification, says ECI, SCROLL, https://scroll.in/latest/916091/lok-sabha-polls-all-political-ads-on-social-media-will-need-prior-certification-says-eci, also accessible at Appendix [.]
[79] Nikhil Pahwa, Key takeaways from Election Commission’s 2019 India’s 2019 Elections announcement: On Fake News, Online Political Advertising and Model Code of Conduct, MEDIANAMA, https://www.medianama.com/2019/03/223-key-takeways-from-election-commissions-2019-indias-2019-elections-announcement-on-fake-news-on-line-political-advertising-and-model-code-of-conduct/, also accessible at Appendix [.]
[80] BBC, Germany starts enforcing hate speech law, BBC, https://www.bbc.com/news/technology-42510868, also accessible at Appendix [.]
[81] Removals under the Network Enforcement Law, Google Transparency Report, GOOGLE, https://transparencyreport.google.com/netzdg/youtube?hl=en, also accessible at Appendix [.]
[82] NetzDG Transparency Report, FACEBOOK, https://fbnewsroomus.files.wordpress.com/2018/07/facebook_netzdg_july_2018_english-1.pdf, also accessible at Appendix [.]
[83] Michael-Ross Florentino, France passes controversial ‘fake news’ law, EURONEWS, https://www.euronews.com/2018/11/22/france-passes-controversial-fake-news-law, also accessible at Appendix [.]
[84] Code of Conduct on countering online hate speech – results of evaluation show important progress, EUROPEAN COMMISSION, https://ec.europa.eu/newsroom/just/item-detail.cfm?item_id=71674, also accessible at Appendix [.]
[85] EU agreement with tech firms on hate speech guaranteed to stifle free expression, INDEX ON CENSORSHIP, https://www.indexoncensorship.org/2016/05/eu-agreement-tech-firms-hate-speech-guaranteed-stifle-free-expression/, also accessible at Appendix [.]
[86] Jillian York, European Commission’s Hate Speech Deal With Companies Will Chill Speech, EFF, https://www.eff.org/deeplinks/2016/06/european-commissions-hate-speech-deal-companies-will-chill-speech, also accessible at Appendix [.]
[87] Responding to ‘hate speech’: Comparative overview of six EU countries, ARTICLE 19, https://www.article19.org/wp-content/uploads/2018/03/ECA-hate-speech-compilation-report_March-2018.pdf, also accessible at Appendix [.]
[88] Ellen P. Goodman, Lyndsay Wajert, The Honest Ads Act Won’t End Social Media Disinformation, but It’s a Start, SSRN (2017), also accessible at Appendix [.]
[89] Jack Nicas, Facebook to require verified identities for future political ads, NYTIMES, https://www.nytimes.com/2018/04/06/business/facebook-verification-ads.html, also accessible at Appendix [.]
[90] Evgeny Morozov, Moral panic over fake news hides the real enemy – the digital giants, THE GUARDIAN, https://www.theguardian.com/commentisfree/2017/jan/08/blaming-fake-news-not-the-answer-democracy-crisis, also accessible at Appendix [.]
[91] David Kaye, Freedom of Expression Monitors Issue Joint Declaration on ‘Fake News’, Disinformation and Propaganda, OHCHR, https://www.ohchr.org/EN/NewsEvents/Pages/DisplayNews.aspx?NewsID=21287&LangID=E, also accessible at Appendix [.]
[92] Disinformation and ‘fake news’: Final Report, House of Commons Digital, Culture, Media and Sport Committee, UK PARLIAMENT, https://publications.parliament.uk/pa/cm201719/cmselect/cmcumeds/1791/179102.htm, also accessible at Appendix [.]
[93] Ibid
[94] Ibid
[95] Ibid
[96] A multi-dimensional approach to disinformation, Report of the independent High Level Group on fake news and online disinformation, EUROPEAN COMMISSION, https://ec.europa.eu/digital-single-market/en/news/final-report-high-level-expert-group-fake-news-and-online-disinformation, also accessible at Appendix [.]
[97] Ibid
[98] Olivier Sylvain, Intermediary Design Duties, 50, CONNECTICUT LAW REVIEW, 1 (2018), also accessible at Appendix [.]
[99] Blue Paper: Misinformation and Intermediary Liability, SFLC.IN, https://sflc.in/blue-paper-misinformation-and-draft-intermediary-guidelines/, also accessible at Appendix [.]
[100] WhatsApp Blog, More changes to forwarding, WHATSAPP, https://blog.whatsapp.com/10000647/More-changes-to-forwarding, also accessible at Appendix [.]
[101] PTI, WhatsApp rolls out TV campaign in India to tackle fake news, LIVEMINT, https://www.livemint.com/Companies/QU7LWGcHf0m49uiBqDRzlN/WhatsApp-rolls-out-TV-campaign-in-India-to-tackle-fake-news.html, also accessible at Appendix [.]
[102] WhatsApp embarks on user-education drive to curb fake messages, HINDU BUSINESS LINE, https://www.thehindubusinessline.com/news/whatsapp-embarks-on-user-education-drive-to-curb-fake-messages/article24812353.ece, also accessible at Appendix [.]
[103] Nandita Mathur, Facebook planning a ‘war room’ in Delhi to monitor Elections 2019, LIVEMINT, https://www.livemint.com/elections/lok-sabha-elections/facebook-wants-to-set-up-a-war-room-in-delhi-to-monitor-elections-2019-1552246631884.html, also accessible at Appendix [.]
[104] Ibid
[105] Shivnath Thukral, Bringing More Transparency to Political Ads in India, https://newsroom.fb.com/news/2018/12/ad-transparency-in-india/, also accessible at Appendix [.]
[106] Ibid
[107] Ibid
[108] Abby K. Wood, Ann M. Ravel, Fool Me Once: Regulating “Fake News” and other Online Advertising, 1227, SOUTHERN CALIFORNIA LAW REVIEW, 55 (2018), also accessible at Appendix [.]
[109] Ibid
[110] Matteo Monti, Perspectives on the Regulation of Search Engine Algorithms and Social Networks: The Necessity of Protecting the Freedom of Information, 1, OPINIO JURIS IN COMPARATIONE, 10 (2017), also accessible at Appendix [.]
[111] Natasha Singer, The Week in Tech: How Google and Facebook Spawned Surveillance Capitalism, NYTIMES, https://www.nytimes.com/2019/01/18/technology/google-facebook-surveillance-capitalism.html, also accessible at Appendix [.]
[112] Mark Verstraete, Derek E. Bambauer, Jane R. Bambauer, Identifying and Countering Fake News, Discussion Paper 17-15, ARIZONA LAW JOURNAL, 25 (2017), also accessible at Appendix [.]
[113] Accessible at Appendix [.] or https://pib.gov.in/PressReleseDetailm.aspx?PRID=1700749.