Course Outline

Introduction to Open-Source LLMs

  • What open-weight models are and why they matter
  • Overview of LLaMA, Mistral, Qwen, and other community models
  • Use cases for private, on-premise, or secure deployments

Environment Setup and Tools

  • Installing and configuring Transformers, Datasets, and PEFT libraries
  • Choosing appropriate hardware for fine-tuning
  • Loading pre-trained models from Hugging Face or other repositories
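As a quick illustration of the loading step above, the sketch below pulls a checkpoint from the Hugging Face Hub with the Transformers `Auto*` classes. A tiny stand-in checkpoint (`sshleifer/tiny-gpt2`) is used so the snippet runs in seconds; in practice you would substitute a real model ID such as a LLaMA, Mistral, or Qwen checkpoint.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Tiny stand-in checkpoint so the example downloads quickly;
# swap in a real LLaMA / Mistral / Qwen model ID for actual work.
model_id = "sshleifer/tiny-gpt2"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Greedy generation of a few tokens just to confirm the model is wired up.
inputs = tokenizer("Open-weight models", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=5, do_sample=False)
print(tokenizer.decode(output[0]))
```

The same two `from_pretrained` calls also accept a local directory path, which matters later for on-premise deployments with no Hub access.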

Data Preparation and Preprocessing

  • Dataset formats (instruction tuning, chat data, text-only)
  • Tokenization and sequence management
  • Creating custom datasets and data loaders
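To make the sequence-management point concrete, here is a minimal pure-Python sketch (the function name and block size are illustrative, not from any library) of the common trick of concatenating tokenized examples and slicing them into fixed-length training blocks:

```python
def pack_sequences(tokenized_examples, block_size=8):
    """Concatenate tokenized examples into one stream and cut it into
    fixed-length blocks, dropping the incomplete tail -- the layout
    typically used for text-only (causal LM) fine-tuning data."""
    flat = [tok for seq in tokenized_examples for tok in seq]
    usable = (len(flat) // block_size) * block_size
    return [flat[i:i + block_size] for i in range(0, usable, block_size)]

# Three short "tokenized" examples packed into blocks of 8 token IDs;
# IDs 17 and 18 fall in the incomplete tail and are dropped.
examples = [[1, 2, 3, 4, 5], [6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18]]
print(pack_sequences(examples, block_size=8))
# -> [[1, 2, 3, 4, 5, 6, 7, 8], [9, 10, 11, 12, 13, 14, 15, 16]]
```

Real block sizes are the model's context length (e.g. 2048 or 4096 tokens); 8 is used here only so the output is readable.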

Fine-Tuning Techniques

  • Standard full fine-tuning vs. parameter-efficient methods
  • Applying LoRA and QLoRA for efficient fine-tuning
  • Using the Trainer API for quick experimentation

Model Evaluation and Optimization

  • Assessing fine-tuned models with generation and accuracy metrics
  • Managing overfitting, generalization, and validation sets
  • Performance tuning tips and logging
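One of the generation metrics covered in this section, perplexity, reduces to simple arithmetic over per-token log-probabilities. A minimal sketch (the helper name is illustrative):

```python
import math

def perplexity(token_log_probs):
    """Perplexity = exp(average negative log-likelihood per token).
    Lower is better; a model that guesses uniformly over V tokens
    scores a perplexity of exactly V."""
    nll = -sum(token_log_probs) / len(token_log_probs)
    return math.exp(nll)

# A model assigning probability 1/4 to every observed token has perplexity 4.
log_probs = [math.log(0.25)] * 10
print(perplexity(log_probs))  # ~4.0, up to floating-point rounding
```

In practice the per-token log-probabilities come from a forward pass over a held-out validation set, which is also where overfitting shows up first.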

Deployment and Private Use

  • Saving and loading models for inference
  • Deploying fine-tuned models in secure enterprise environments
  • On-premise vs. cloud deployment strategies
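A sketch of the save/reload round trip that precedes any deployment, again using a tiny stand-in checkpoint; in practice the exported directory is what gets shipped to the secure or on-premise target environment:

```python
import tempfile
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "sshleifer/tiny-gpt2"  # stand-in for a fine-tuned checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

with tempfile.TemporaryDirectory() as export_dir:
    # save_pretrained writes weights and config (and tokenizer files) to disk.
    model.save_pretrained(export_dir)
    tokenizer.save_pretrained(export_dir)

    # The directory alone is enough to reload for inference -- no Hub access.
    reloaded = AutoModelForCausalLM.from_pretrained(export_dir)
    print(type(reloaded).__name__)
```

For LoRA-trained models, an alternative is to export only the small adapter weights and merge them into the base model at load time.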

Case Studies and Use Cases

  • Examples of enterprise use of LLaMA, Mistral, and Qwen
  • Handling multilingual and domain-specific fine-tuning
  • Discussion: Trade-offs between open and closed models

Summary and Next Steps

Requirements

  • An understanding of large language models (LLMs) and their architecture
  • Experience with Python and PyTorch
  • Basic familiarity with the Hugging Face ecosystem

Audience

  • ML practitioners
  • AI developers

Duration

  14 Hours
