Biggest risks of using gen AI like ChatGPT, Google Gemini, Microsoft Copilot, and Apple Intelligence in your private life

Technology
Friday, July 12th, 2024 2:43 pm EDT

Key Points

  • Awareness and Understanding of Privacy Policies: Many consumers are unaware of how their data is used and retained by generative AI tools like ChatGPT, Gemini, Microsoft Copilot, and Apple Intelligence. It is crucial for users to read and understand privacy policies, ask pertinent questions, and know their options for data control and opt-outs.
  • Caution with Sensitive Data: Consumers should avoid entering sensitive or confidential information into generative AI models. This caution extends to personal, work-related, and non-public data to prevent potential misuse or unintended access. Corporations often implement custom AI tools to safeguard proprietary information.
  • Utilizing Opt-Outs and Privacy Controls: Each generative AI tool has specific privacy settings and opt-out options. Users should take advantage of these features to control data usage and retention. For example, ChatGPT allows users to disable data-sharing for model training, and Microsoft Copilot offers opt-in options with the ability to withdraw consent. Setting short retention periods and deleting chats can further protect privacy.

As generative AI tools like OpenAI’s ChatGPT, Google’s Gemini, Microsoft Copilot, and Apple Intelligence become increasingly accessible to consumers, privacy concerns have grown alongside them. Despite the widespread use of these tools, many users are unaware of how their data is used and retained, since privacy policies and data-sharing practices vary among AI services. Jodi Daniels, CEO and privacy consultant at Red Clover Advisors, emphasizes the importance of being an informed consumer and understanding the specific controls available for each tool, as there is no universal opt-out option. The integration of AI into everyday devices and applications, such as Microsoft’s Surface PCs with a dedicated Copilot button and Apple’s on-device AI models, raises pertinent privacy questions. Before choosing an AI tool, consumers should ask how their information will be used, whether data-sharing can be turned off, and whether their data can be deleted. If these questions cannot be readily answered, that should be a red flag.

Additionally, consumers should be cautious about entering sensitive data into AI models. Andrew Frost Moroz, founder of Aloha Browser, advises against using generative AI for sensitive or confidential information, as the data could be misused or improperly accessed. Corporations are similarly concerned about employees using AI models for work-related tasks, fearing that proprietary information could be compromised. To mitigate these risks, companies often deploy custom versions of AI tools with strict data separation. Consumers should likewise avoid feeding non-public information into AI tools and be mindful of the potential consequences of sharing sensitive data with them.

Each generative AI tool has its own privacy policies and opt-out options. For instance, ChatGPT allows users to opt out of having their data used for model training by disabling the “Improve the model for everyone” setting. Jacob Hoffman-Andrews, a senior staff technologist at the Electronic Frontier Foundation, argues that there is no significant benefit for consumers in allowing AI to train on their data, while the risks are still being studied. And although consumers can often remove personal data from the web, untraining an AI model is far more complex and remains an area of ongoing research.

For tools like Microsoft Copilot, which integrates AI into everyday applications like Word, Excel, and PowerPoint, users have the option to opt in for enhanced functionality. However, privacy professionals caution that opting in means relinquishing some control over data use. Microsoft states that it does not share consumer data with third parties without permission and does not use customer data to train AI without consent. Users can opt in via the Power Platform admin center and can withdraw consent at any time.

When using generative AI for search purposes, consumers should set a short retention period and delete chats once they have retrieved the information they need. This practice minimizes the risk of sensitive data becoming part of model training and reduces the chances of third-party access. Overall, staying vigilant about privacy settings and understanding the specific policies of each AI tool are crucial steps in protecting personal information in the age of generative AI.

For the full original article on CNBC, please click here: https://www.cnbc.com/2024/07/12/biggest-risks-of-gen-ai-in-your-private-life-chatgpt-gemini-copilot.html