LinkedIn Trains Generative AI Models on User Data


LinkedIn, the professional networking platform, has recently come under scrutiny for its decision to train generative AI models on user data without explicitly obtaining consent. The platform automatically enrolled user accounts in this training process, leaving many users uneasy about the privacy implications of the move.

This became public knowledge when LinkedIn rolled out a new privacy setting and opt-out form before updating its privacy policy. The updated policy now states clearly that data from the platform is being used to train generative AI models. The change has raised eyebrows among privacy activists and LinkedIn users alike, as the platform appeared to make it without adequate notice to all of its users.


The Purpose of Generative AI Models on LinkedIn

According to LinkedIn, the generative AI models are intended to improve and develop its products and services and to personalize the user experience. The company asserts that training these models on user data allows it to provide users with more relevant solutions.

LinkedIn has stated that it uses generative AI models for features such as writing assistants, which aim to help users craft more effective and engaging content on the platform. However, the process by which users were opted in was not transparent, fueling worries about personal data being used to train AI.


Opting Out of Generative AI Model Training

For users who wish to revoke permission for their data to be used in training generative AI models, LinkedIn provides an opt-out. Under the Data privacy tab in account settings, users can find the “Data for Generative AI Improvement” toggle; setting it to “off” prevents their data from being used for future AI training.

However, it is important to note that opting out only prevents future use of personal data for training generative AI models. Any data that has already been used for training purposes prior to opting out will not be affected by this change.


Privacy Enhancing Technologies and Geographic Restrictions

LinkedIn has sought to reassure users, stating that it uses privacy-preserving techniques to blur or delete users’ data from its training sets. The company also claims that it does not train its generative AI models on data belonging to users who reside in the European Union, the European Economic Area, or Switzerland.

These measures may offer some temporary comfort to privacy-conscious users, but the fact that LinkedIn opted its users in without their consent has sparked major concern about how individuals’ data is handled on the platform.


Additional Opt-Out for Other Machine Learning Tools

In addition to the opt-out process for generative AI models, LinkedIn also lets users exclude their data from being used to train other machine learning tools. These tools are used for personalization and moderation; unlike generative AI models, they do not create content.


To object to the use of their data in these other machine learning tools, however, users must submit a separate Objection to LinkedIn Data Processing. Some have criticized this two-step opt-out as too complicated and potentially misleading to users.


The Broader Context of AI and Privacy

LinkedIn’s decision to enroll users in generative AI model training comes amid a broader conversation about the use of personal data in developing artificial intelligence. Days before LinkedIn’s change came to light, Meta revealed that it had been harvesting non-private user data for model training since 2007.

Such revelations have only fueled a larger debate about using people’s personal data to train artificial intelligence systems without their permission. As artificial intelligence becomes an ever more pervasive part of society, corporations will need to be more transparent about how they collect and use their users’ personal information.
