Zoom's recent reversal of changes to its terms of service illustrates the data security and privacy minefields particular to the growth of generative AI.
Previously, the terms of service of the popular videoconferencing platform stated that it would treat users' non-public information as confidential. On March 31, Zoom quietly amended those terms, including by giving itself the right to preserve, process, and disclose "Customer Content" for a range of purposes, including machine learning and artificial intelligence. "Customer Content" included any data or materials originating from one of Zoom's users. The amendments drew widespread public scrutiny after being picked up by a technology blog. A few days later, Zoom further amended its terms of service, adding a new specification that Zoom "does not use any of your audio, video, chat, screen sharing, attachments or other communications-like Customer Content (such as poll results, whiteboard and reactions) to train Zoom or third-party artificial intelligence models."
Zoom is not the only popular tool whose terms of service have raised concerns by referencing AI. Microsoft is reportedly also planning changes to its terms of use that would permit it to process certain user data to train AI technologies, while Amazon Web Services already employs user content, though not personal data, in its machine-learning algorithms. And ChatGPT, with its minimal privacy policy, has been banned internally by some companies, including major banks and tech giants. Those companies may be concerned that employees who use ChatGPT could inadvertently divulge sensitive information, such as customer data or proprietary code, which the technology collects and treats as training data by default.
These instances reflect what may be a growing trend of businesses looking to pull ahead in the generative AI revolution, a pattern that continues to grab headlines. Generative AI can create new digital content using complex computer models trained on vast amounts of data, often sourced from users. Businesses and developers are, no doubt, eager to explore possible applications of the new technology, which requires both obtaining data, potentially from their customer base, and securing that base's permission to use it. That permission is commonly obtained through the terms of service agreed to by users, which is why some companies are seeking to amend those terms.
But the amendments also raise familiar questions for businesses trying to protect sensitive user information while achieving mission-critical objectives. Businesses may not have a reasonable expectation of privacy over information provided to third-party technologies with less rigorous terms of service, which could matter if regulators (or, in some cases, consumers) come knocking. Those businesses will need to carefully vet new technologies, their terms of use, and any other privacy policies before authorizing employee use of those technologies, to prevent the inadvertent collection or use of private or otherwise confidential information. They should also monitor changes in the terms of use of existing technologies to ensure that they are not surrendering important privacy protections through those changes.
We will continue to monitor and report on the data security implications of generative AI as they develop.
Zoom Reverses Course on Contemplated Use of Customer Content ... - JD Supra