In an age where technology intertwines seamlessly with our daily lives, safeguarding personal data has become a paramount concern. Recently, Zoom, a prominent player in the virtual communication realm, found itself at the heart of a controversy that shed light on the delicate balance between AI advancement and customer data privacy. The company’s policy changes related to AI training on customer data sent shockwaves through the tech community, prompting a swift reversal and a renewed commitment to protecting user information.
In March 2023, Zoom introduced amendments to its terms and conditions that seemingly granted the company extensive latitude in utilising customer data for training artificial intelligence (AI) models. The amendments went largely unnoticed until early August, when they came to public attention and set off a storm of concern and scrutiny. Reports from various media outlets questioned the potential ramifications of these policy shifts on user privacy and the ethics of data usage.
The uproar sparked by the policy changes compelled Zoom to respond swiftly and decisively. On 7 August 2023, the company published a blog post outlining its stance, which it subsequently edited on 11 August 2023. In the post, Zoom clarified that it had no intention of exercising the sweeping rights granted by the revised terms. The company went further, asserting its commitment to customer data privacy and its respect for user concerns.
The company’s subsequent policy update explicitly stated that AI models would not be trained using customer video, audio, or chats without obtaining consent from the customers themselves. This commitment to obtaining explicit permission before utilising personal data for AI training purposes marked a significant step toward safeguarding user information.
Zoom’s experience serves as a poignant reminder of the growing tension between technological advancement and individual privacy rights. The incident has broader implications for the tech industry as a whole. It highlights the importance of transparent communication, robust privacy policies, and a proactive approach to addressing user concerns in the face of evolving technologies.
Find out more from Ann-Maree Blake and our Data Protection and Privacy service.
Love or hate the idea (and many people fall into the latter category), AI language and text-to-image models have arrived. Now anyone can create prose, programs, and pictures in mere seconds simply by entering a few instructions on a website. You may be thinking “wonderful, no more dull report and contract writing”. However, there are serious concerns around the accuracy of the information ChatGPT is producing. In addition, the lawsuits brought by artists, engineers, and other creatives against the developers of AI language and art models are mounting. There are also potential legal issues for users of ChatGPT, such as copyright infringement and defamation.
Before exploring these legal challenges, it is useful to explain what AI language and art models are. For ease of reference, I will refer to the most well-known, ChatGPT, but the basic principles apply to most other chatbots, such as Meta’s Llama and Google’s Bard.
ChatGPT, which stands for “Chat Generative Pre-trained Transformer”, was created by OpenAI and launched in November 2022. Some commentators consider it the most significant technological development since the launch of the Apple iPhone in 2007. It can produce human-like responses to a vast range of questions and is often (but not always) accurate.
ChatGPT works by predicting the next word in a series of words. It is underpinned by an enormous language model, which OpenAI created by feeding it some 300 billion words systematically scraped from the internet in the form of books, articles, websites, and blog posts. ChatGPT used this data to learn how to predict the next word. Eventually, it became sufficiently trained to produce human-like responses to tasks given to it via its chat interface.
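To make the idea of next-word prediction concrete, here is a deliberately simplified sketch. It is not OpenAI’s actual method (ChatGPT uses a neural network trained on billions of words, not simple counts), but it illustrates the same underlying principle: learn from a corpus which word tends to follow which, then predict the most likely continuation.

```python
import random
from collections import defaultdict

# Toy illustration of next-word prediction (not OpenAI's actual method):
# count which word follows which in a tiny corpus, then return the most
# likely continuation.
corpus = "the model predicts the next word and the next word again".split()

# Build bigram counts: word -> {following word: count}
bigrams = defaultdict(lambda: defaultdict(int))
for current, following in zip(corpus, corpus[1:]):
    bigrams[current][following] += 1

def predict_next(word):
    """Return the most frequent word seen after `word` in the corpus."""
    followers = bigrams[word]
    if not followers:
        return None  # word never appeared mid-corpus, so no prediction
    return max(followers, key=followers.get)

print(predict_next("the"))  # "next" follows "the" more often than "model"
```

A real large language model replaces the bigram counts with a transformer network that weighs the entire preceding context, which is why its output reads as fluent prose rather than word-pair statistics.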
Preston Gralla provided a brilliant analogy in a recent article for how AI language and text-to-image models operate:
“To do its work, AI needs to constantly ingest data, lots of it. Think of it as the monster plant Audrey II in Little Shop of Horrors, constantly crying out ‘Feed me!’”
OpenAI and other developers of AI text and image-generating models did not seek permission to use third-party words and art to feed their creations. This fact forms the basis of several class actions currently underway around the world.
The legal claims against ChatGPT and other language and image-generating models fall into several categories:
Although ChatGPT and its offshoots may seem like a productivity dream come true, caution must be taken when using them to produce written text and images for business purposes. There may be issues concerning copyright and breach of the GDPR and Data Protection Act 2018. In addition, as demonstrated by the defamation lawsuit brought by the mayor of Hepburn Shire, there may be serious legal consequences for organisations if ChatGPT makes mistakes or demonstrates bias, both of which it can do. To avoid potential claims, businesses and individuals must undertake a risk assessment before utilising ChatGPT for particular projects and establish robust due diligence checks on the accuracy and impartiality of the content it produces.
ChatGPT represents an exciting and unknown future for businesses and people alike. To discuss any of the points raised in this article, including undertaking risk assessments, please contact Ann-Maree Blake.
The General Data Protection Regulation (GDPR) is a comprehensive privacy law that was implemented by the European Union (EU) in 2018. Its purpose is to protect the personal data of EU citizens by establishing strict rules for the collection, processing, and storage of personal information by organisations.
The GDPR applies not only to organisations based in the EU but also to any organisation that processes the personal data of EU citizens, regardless of where the organisation is located. Non-compliance with GDPR can result in significant fines and penalties.
According to recent research, supervising authorities across Europe have markedly increased the level of fines issued to companies found in breach of the GDPR. Latest figures show:
These figures show that GDPR enforcement is here to stay, with regulators increasing both the number of investigated cases and penalty levels year on year. No business can afford to be complacent when it comes to implementing GDPR policies and procedures.
Find out more in our post “Five Ways to Protect Your Company from a GDPR Fine”.
The following sectors received the highest number of GDPR fines:
It is important to note that this does not mean these sectors are necessarily shirking their data protection and privacy compliance obligations; rather, it is an indication that these industries are the most exposed in terms of GDPR-related risk. Although the average fines levied in the Transportation and Energy sectors were high, the number of fines issued was relatively low. This signifies that although breaches in these sectors are relatively rare, when they occur they are serious and thus attract large penalties.
The top areas of GDPR non-compliance leading to fines were:
This shows that many companies are still unsure of what constitutes a lawful basis for processing personal data. The lawful foundations for processing data are set out in Article 6 of the GDPR and at least one of the following must be present whenever personal data is processed:
If none of the above apply to your reason for processing personal data, the processing is unlawful and therefore a breach of Article 6.
The data is clear: all companies, especially those in high-risk sectors such as advertising, technology, telecommunications, and general communications (for example, direct marketing), need to implement consistent, proactive training programmes to ensure all employees understand what is required for GDPR compliance. As supervising authorities become more confident in enforcing data protection and privacy regulations, the scope for fines and for reputational damage leading to a loss of consumer trust will continue to increase.
To find out how we can assist you on all matters relating to GDPR and data protection law, please contact Ann-Maree Blake to make an appointment.