Generative AI Engineering and Fine-Tuning Transformers
The thirteenth module of my IBM course!
💡
What I learned!
- Delving into the practical aspects of working with large language models (LLMs) using industry-standard tools like Hugging Face and PyTorch;
- Exploring the distinctions between these frameworks, learning how to load and perform inference with pretrained models, and understanding the processes of pretraining and fine-tuning LLMs;
- Exploring cutting-edge methods for fine-tuning LLMs with parameter-efficient fine-tuning (PEFT) techniques: adapters, low-rank adaptation (LoRA), and quantization, along with practical applications of the PyTorch and Hugging Face libraries.
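To make the LoRA idea from the list above concrete, here is a minimal PyTorch sketch (my own illustration, not code from the course): a frozen pretrained linear layer is augmented with a trainable low-rank update `B·A`, so only a small fraction of the parameters are trained. The layer sizes and rank are arbitrary example values.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update:
    y = W x + (alpha / r) * B A x, with A (r x in) and B (out x r)."""
    def __init__(self, base: nn.Linear, r: int = 4, alpha: float = 8.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # pretrained weights stay frozen
        # A gets a small random init; B starts at zero, so at initialization
        # the LoRA branch contributes nothing and the layer behaves like base.
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.T) @ self.B.T

layer = LoRALinear(nn.Linear(768, 768), r=4)
x = torch.randn(2, 768)
out = layer(x)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(out.shape, trainable, total)
```

With these example sizes, only the two low-rank matrices (a few thousand parameters) are trainable, versus the ~590k frozen weights of the base layer; that ratio is the whole point of PEFT. The `peft` library from Hugging Face wraps this same pattern around full transformer models.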
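Quantization, the third PEFT-adjacent technique mentioned above, can be sketched in a few lines (again my own toy illustration, not course code): weights are mapped to 8-bit integers with a per-tensor scale, then dequantized for compute, trading a small round-off error for a 4x smaller weight footprint than float32.

```python
import torch

w = torch.randn(64, 64)               # stand-in for a float32 weight matrix
scale = w.abs().max() / 127           # per-tensor symmetric scale
q = torch.clamp((w / scale).round(), -127, 127).to(torch.int8)  # int8 storage
deq = q.float() * scale               # dequantize before matmul
max_err = (w - deq).abs().max().item()
print(q.dtype, max_err)
```

The maximum error is bounded by half a quantization step (`scale / 2`), which is why int8 weight quantization usually costs little accuracy while cutting memory.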