Class 2: Understanding Large Language Models (LLMs)
Summary: Deep dive into the architecture, capabilities, and limitations of large language models that power generative AI applications.
Learning Objectives:
- Comprehend transformer architecture and attention mechanisms
- Understand training processes including pre-training and fine-tuning
- Recognize capabilities and constraints of current LLM technology
Key Topics:
- Transformer architecture fundamentals and self-attention mechanisms (sketched in code after this list)
- Token processing, embeddings, and context windows (token-counting example below)
- Popular LLM families: GPT, Claude, Gemini, and LLaMA architectures
- Model parameters, temperature settings, and generation controls (temperature sketch below)
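Self-attention, the first topic above, reduces to a few lines of linear algebra: Attention(Q, K, V) = softmax(QKᵀ/√d_k)V. Below is a minimal single-head NumPy sketch; the random matrices stand in for learned weights and are for illustration only.

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the row max for numerical stability before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # X: (seq_len, d_model) token embeddings; Wq/Wk/Wv: learned projections.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    # Each token scores every token in the sequence, then mixes their values.
    scores = Q @ K.T / np.sqrt(d_k)      # (seq_len, seq_len)
    weights = softmax(scores, axis=-1)   # each row sums to 1
    return weights @ V                   # (seq_len, d_k)

rng = np.random.default_rng(0)
seq_len, d_model, d_k = 4, 8, 8
X = rng.normal(size=(seq_len, d_model))          # toy "embeddings"
Wq, Wk, Wv = (rng.normal(size=(d_model, d_k)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)       # (4, 8)
```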
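Context windows are measured in tokens, not characters, so counting tokens is a useful first exercise. This sketch uses the open-source tiktoken library; cl100k_base is one of its built-in encodings, and other model families use different tokenizers.

```python
import tiktoken  # OpenAI's open-source tokenizer library

enc = tiktoken.get_encoding("cl100k_base")  # encoding used by GPT-4-era models

text = "Large language models process text as tokens, not characters."
tokens = enc.encode(text)
print(tokens)              # list of integer token IDs
print(len(tokens))         # token count -- what a context window actually limits
print(enc.decode(tokens))  # round-trips back to the original string
```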
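Temperature, the most common generation control, simply rescales the model's output logits before sampling. A toy sketch (the four-entry logit vector is invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_next_token(logits, temperature):
    # Temperature rescales logits before softmax: values < 1 sharpen the
    # distribution (more deterministic); values > 1 flatten it (more varied).
    scaled = np.asarray(logits, dtype=float) / max(temperature, 1e-6)
    probs = np.exp(scaled - scaled.max())  # numerically stable softmax
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

logits = [2.0, 1.0, 0.5, 0.1]  # toy scores over a 4-token vocabulary
for t in (0.2, 1.0, 2.0):
    draws = [sample_next_token(logits, t) for _ in range(1000)]
    print(t, np.bincount(draws, minlength=4) / 1000)  # empirical distribution
```

At temperature 0.2 nearly all mass lands on the highest-scoring token; at 2.0 the draws spread across the vocabulary, which is exactly the determinism-versus-variety trade-off students will observe in the platforms themselves.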
Activities:
- Hands-on exploration of multiple LLM platforms
- Comparative analysis of different model responses to identical prompts (starter script after this list)
- Discussion on model hallucinations and reliability considerations
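A minimal starter script for the comparative-analysis activity: send one prompt to two providers and print the replies side by side. It assumes the official openai and anthropic Python SDKs are installed with API keys set as environment variables; the model names are placeholders, so substitute whatever models your accounts can access.

```python
from openai import OpenAI
import anthropic

PROMPT = "Explain what a context window is, in two sentences."

openai_client = OpenAI()  # reads OPENAI_API_KEY from the environment
gpt_reply = openai_client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{"role": "user", "content": PROMPT}],
).choices[0].message.content

anthropic_client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY
claude_reply = anthropic_client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder model name
    max_tokens=200,
    messages=[{"role": "user", "content": PROMPT}],
).content[0].text

for name, reply in [("GPT", gpt_reply), ("Claude", claude_reply)]:
    print(f"--- {name} ---\n{reply}\n")
```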

