# supervised-finetuning

Here are 35 public repositories matching this topic...

This repository collects papers for "A Survey on Knowledge Distillation of Large Language Models". The survey breaks KD down into Knowledge Elicitation and Distillation Algorithms, and explores the Skill and Vertical Distillation of LLMs.

  • Updated Oct 22, 2024
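As background for the distillation-algorithms framing above, here is a minimal, self-contained sketch of the classic soft-label distillation objective (KL divergence between temperature-softened teacher and student distributions). The function names and the temperature value are illustrative assumptions, not taken from the survey or the repository.

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax: a higher temperature softens the
    # distribution, exposing the teacher's "dark knowledge" about
    # relative class similarities.
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL(teacher || student) over the softened distributions: zero when
    # the student matches the teacher exactly, positive otherwise.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# A matching student incurs (near-)zero loss; a diverging one does not.
assert distillation_loss([2.0, 1.0, 0.1], [2.0, 1.0, 0.1]) < 1e-12
assert distillation_loss([2.0, 1.0, 0.1], [0.1, 1.0, 2.0]) > 0.0
```

In practice this soft-label term is usually combined with the standard supervised cross-entropy loss on ground-truth labels, weighted by a mixing coefficient.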
