Author: Enrique Saiz Oubiña
Advisor: Manuel Antonio Sánchez-Montañés Isla
Date: June 2024
All the notebooks in this repository can be run in a Google Colab session with a T4 GPU.
A collection of resources I found useful while working on this thesis. They are separate from the main document's bibliography. There is something to learn from each one; I recommend taking a look.
- Question Answering
- PEFT: Parameter-Efficient Fine-Tuning
- Intro to BitsAndBytes and Quantization
- BitsAndBytes: Quantization and matrix multiplication
- Phi-2 fine-tuning with adapters
- Generation with LLMs: strategies and pitfalls
- Performance and scalability: gradient accumulation, FP16, ...
- About training in half precision (BF16 & FP16)
- Half-precision in T5
- About batch sizes (Nvidia)
- About learning rate schedulers
- Methods and tools for efficient training on a single GPU
- Fine-tuning on consumer hardware (PyTorch)
- Large-scale study of training parameters (Stable Diffusion)
- Evaluation considerations
- The compute_metrics function
- Further evaluation: BERT Score
- Further evaluation: Semantic Answer Similarity
- Further evaluation: Eval Harness
- The illustrated transformer
- Some motivation
- SQuAD 2 and the Null Response
- CO2 Impact calculator
- AI yearly report