How LinkedIn Uses Your Data to Train Its AI Systems

LinkedIn is set to use member data to train its artificial intelligence systems, raising questions about which specific information will be included in the process. The move highlights ongoing concerns around privacy and the handling of personal details on professional networking platforms.
TL;DR
- LinkedIn now uses members' public data for AI training in the EU and several other regions.
- Minors and private messages are excluded from data collection.
- Users can opt out through privacy settings.
LinkedIn’s Expansion of AI Training Sparks Privacy Debate
After quietly introducing similar measures across the United States, LinkedIn, the professional networking platform owned by Microsoft, is now extending its use of members’ public data to train generative AI models in the European Union and several additional regions. This development, effective from Monday, signals a new era in how user-generated content is leveraged—and not without controversy.
The Scope and Limits of Data Collection
According to details shared on the company’s blog, a range of publicly available information will be subject to algorithmic analysis. This includes users’ profiles, posts, articles, comments, and resumes uploaded for job applications. However, there are explicit boundaries: neither private messages nor salary information will be included in this initiative.
The collected data fuels new advances in artificial intelligence, particularly those developed using Microsoft Azure OpenAI technology. The intent? To enhance generative capabilities that support recruitment tools and other business-focused applications across the platform.
A Global Rollout with Notable Exclusions
Following its introduction in North America, LinkedIn’s policy now applies to users in the United Kingdom, Switzerland, Canada, Hong Kong and beyond. Importantly—perhaps as a response to growing concerns around child safety online—minors are categorically exempt from this data harvesting process. This exclusion holds even if a young person’s privacy settings would otherwise permit it.
User Autonomy and Broader Industry Trends
For those unsettled by these changes, opting out remains an option. Users may review their preferences within LinkedIn’s privacy settings to prevent their public data from being used for AI training purposes. This move aligns with actions taken by other digital giants: since late May, Meta, parent company of Facebook and Instagram, has also begun utilizing user content to refine its own generative AI—unless individuals formally object via a dedicated form.
Several factors explain this trend among leading tech firms:
- The competitive race to build more sophisticated AI systems.
- An abundance of user-generated content as training material.
- The increasing scrutiny over digital privacy rights worldwide.
As debates around personal data usage intensify in tandem with rapid advancements in artificial intelligence, such policies highlight the ongoing tension between innovation and individual privacy rights on major digital platforms. The choices that users make today could well define their relationship with these networks—and their data—for years to come.