hands-on-llms

🦖 Learn about LLMs, LLMOps, and vector DBs for free by designing, training, and deploying a real-time financial advisor LLM system ~ source code + video & re

open-source · agent-frameworks
3.4k Stars · +284 Stars/month · 0 Releases (6m)

Overview

Hands-on LLMs is an educational open-source course that teaches practitioners how to build production-ready LLM systems through a real-world financial advisor project. It covers the complete MLOps lifecycle, from training through deployment to real-time inference, using modern tools such as QLoRA for fine-tuning, vector databases, and serverless GPU infrastructure. Students implement a 3-pipeline architecture:

  • Training pipeline: fine-tunes an open-source LLM on a proprietary Q&A dataset
  • Streaming real-time pipeline: processes incoming data and keeps the vector DB up to date
  • Inference pipeline: serves the fine-tuned model

The course emphasizes practical LLMOps, including experiment tracking with Comet ML, model registry management, and serverless deployment using Beam. Note that this original course has been archived and replaced with a new 'LLM Twin' course for an improved learning experience.
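The handoff between the three pipelines can be hard to picture from prose alone. Below is a deliberately simplified, stdlib-only Python sketch (all names are hypothetical and bear no relation to the course's actual code) showing the core idea: the streaming pipeline writes embeddings into a shared vector store, and the inference pipeline retrieves context from it at query time.

```python
import math
from dataclasses import dataclass, field

def embed(text: str, dim: int = 8) -> list[float]:
    """Toy embedding: hash character trigrams into a fixed-size vector.
    A real system would use an embedding model instead."""
    vec = [0.0] * dim
    for i in range(len(text) - 2):
        vec[hash(text[i:i + 3]) % dim] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

@dataclass
class VectorStore:
    """Stand-in for a vector DB like Qdrant: stores vectors, cosine search."""
    rows: list = field(default_factory=list)

    def upsert(self, text: str) -> None:
        self.rows.append((embed(text), text))

    def search(self, query: str, k: int = 2) -> list[str]:
        q = embed(query)
        scored = sorted(self.rows,
                        key=lambda r: -sum(a * b for a, b in zip(q, r[0])))
        return [text for _, text in scored[:k]]

def streaming_pipeline(store: VectorStore, news: list[str]) -> None:
    # Real version: consume a live feed, embed, and upsert continuously.
    for item in news:
        store.upsert(item)

def inference_pipeline(store: VectorStore, question: str) -> str:
    # Real version: retrieve context, then prompt the fine-tuned LLM.
    context = store.search(question)
    return f"Q: {question}\nContext: {context}"

store = VectorStore()
streaming_pipeline(store, ["Fed raises rates", "Tech stocks rally",
                           "Oil prices drop"])
print(inference_pipeline(store, "What did the Fed do?"))
```

The training pipeline is omitted here; its role is to produce the fine-tuned model weights that the inference pipeline loads before answering.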

Pros

  • + Complete end-to-end LLM system architecture with real production deployment examples using modern MLOps tools
  • + Hands-on approach with practical financial advisor use case that demonstrates real-world application patterns
  • + Comprehensive coverage of LLMOps including experiment tracking, model registry, and serverless GPU infrastructure deployment

Cons

  • - Requires significant hardware resources (10GB VRAM, CUDA GPU) for local training, though cloud alternatives are provided
  • - Course has been archived in favor of a newer 'LLM Twin' course, potentially indicating outdated content or approaches

Getting Started

1. Set up external services: Alpaca for data, Qdrant for vector storage, Comet ML for experiment tracking, and Beam for GPU deployment.
2. Clone the repository and install dependencies following the setup instructions in the modules/training_pipeline directory.
3. Run the training pipeline to fine-tune your first LLM using the provided financial Q&A dataset.
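Step 1 typically boils down to collecting credentials for each service. A minimal sketch of what such an environment file might look like (the variable names below are hypothetical placeholders, not the course's actual configuration keys; check each module's setup docs for the real ones):

```
# .env — hypothetical placeholder names; fill in values from each provider's dashboard
ALPACA_API_KEY=<your-alpaca-key>
ALPACA_API_SECRET=<your-alpaca-secret>
QDRANT_URL=<your-qdrant-cluster-url>
QDRANT_API_KEY=<your-qdrant-key>
COMET_API_KEY=<your-comet-key>
COMET_WORKSPACE=<your-comet-workspace>
BEAM_API_KEY=<your-beam-key>
```

Keeping credentials in an env file (and out of version control) is standard practice for the kind of multi-service setup this course requires.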