
LinkedIn Develops New Recommendation Techniques with Smaller Models
TL;DR
LinkedIn is optimizing its job-recommendation platform by moving away from prompting and instead distilling large language models into much smaller, more efficient ones.
LinkedIn Innovates in Job Recommendation Systems
LinkedIn, a leader in AI-based recommendation systems, is applying new techniques to optimize its job platform. The company has moved away from the traditional **prompting** approach in favor of distilling smaller models, aiming to increase both accuracy and efficiency. The change was driven by the need to meet users' evolving demands.
Erran Berger, Vice President of Product Engineering at LinkedIn, explained on a podcast that the company concluded the prompting approach was not viable. "There would be no way to achieve our goals through this approach, so we did not consider it for the new recommendation systems," he states.
As a result, the team drafted a detailed product policy document and used it to fine-tune an initial model with 7 billion parameters, which was then distilled into smaller models with hundreds of millions of parameters. The technique also produced a reusable playbook for the company's AI products.
Innovation Through Multi-Teacher Distillation
The team's goal was to create a large language model (LLM) capable of interpreting job queries and candidate profiles in real time. Working with the product management team, they developed a document defining how to score pairs of job descriptions and candidate profiles.
"We went through several iterations," recounts Berger. The document was combined with a dataset consisting of thousands of pairs of queries and profiles. This material was used to train the 7 billion parameter model.
However, the model needed more than product policy alone; click prediction and personalization are essential to an effective recommendation system. The team therefore built a second model focused on click prediction before distilling an even smaller model from both.
This distillation technique allowed the team to stay faithful to the original product policy while improving click prediction. The approach also modularizes the training process into separate components, making each element easier to adjust independently.
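One common way to combine two teachers in distillation, which matches the setup described above, is to train the student against a weighted mix of each teacher's soft labels. The sketch below is a minimal illustration of that idea in plain Python; the KL-based objective, the `alpha` weighting, and all function names are assumptions for illustration, not LinkedIn's published method:

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two discrete probability distributions."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def multi_teacher_loss(student_probs, policy_probs, click_probs, alpha=0.5):
    """Distillation loss against two teachers: the policy-tuned model
    and the click-prediction model, mixed by weight alpha."""
    return (alpha * kl_divergence(policy_probs, student_probs)
            + (1 - alpha) * kl_divergence(click_probs, student_probs))

# Example: the student agrees with the policy teacher but not the click teacher,
# so only the click term contributes to the loss.
student = [0.7, 0.3]
policy = [0.7, 0.3]
click = [0.4, 0.6]
loss = multi_teacher_loss(student, policy, click, alpha=0.5)
```

Tuning `alpha` lets the team trade off fidelity to the product policy against click-prediction accuracy, which is one way the componentized training described above becomes easy to adjust.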
The Collaboration Between Teams in the Age of AI
Berger highlights the importance of alignment between product managers and machine learning engineers: writing a solid product policy translates the managers' expertise into a single unified document the models can be trained against.
Traditionally, these teams worked in separate areas, but they now collaborate directly to build aligned models. "The way product managers interact with machine learning engineers has changed radically," he comments.
For more information, listen to the full podcast and learn about the optimization of the research and development process, the importance of experimentation pipelines, and traditional debugging practices in engineering.


