SPR 2025 – Tuning/Constraining LLMs

I developed this course for the Master of Science in Computational Linguistics (CLMS) program at the University of Washington.

With the recent explosion of generative AI as a consumer-facing product, it is clear that Large Language Models (LLMs) are a powerful tool for building systems with greatly improved natural language understanding (NLU). They still suffer from a variety of shortcomings, however, particularly when used for specialized applications.

This course was designed to survey the current (as of Spring 2025) challenges of implementing LLM-based systems and the techniques used to address them.

Course Content

Week 1: Motivation & Background

  • Course Content & Structure
  • Topic Overview

Week 2: Evaluation

  • Importance of Evaluation
  • Evaluation Methods (Similarity, Language Quality, Answer Quality, Safety & Reliability…)
  • Quantitative vs. Qualitative
  • Human Evaluation & Annotator Agreement
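
As a concrete illustration of annotator agreement, here is a minimal sketch of Cohen's kappa for two annotators over the same items (pure Python; the function name is my own, not from the course materials):

```python
from collections import Counter

def cohens_kappa(ann_a, ann_b):
    """Chance-corrected agreement between two annotators over the same items."""
    assert len(ann_a) == len(ann_b) and ann_a
    n = len(ann_a)
    # Observed agreement: fraction of items both annotators labeled identically.
    p_o = sum(a == b for a, b in zip(ann_a, ann_b)) / n
    # Expected agreement under independence, from each annotator's label distribution.
    freq_a, freq_b = Counter(ann_a), Counter(ann_b)
    p_e = sum(freq_a[label] * freq_b[label] for label in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)
```

For example, labels `["y", "y", "n", "n"]` vs. `["y", "n", "n", "n"]` give observed agreement 0.75 against expected chance agreement 0.5, for a kappa of 0.5.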

Week 3: Prompt Engineering & In-Context Learning

  • CLEAR Method (Concise, Logical, Explicit, Adaptive, Reflective)
  • Zero-shot vs. Few-shot Methods
  • Auto-ICL Methods
  • Optimizing ICL
    • Selection Strategy
    • Ordering Strategies
    • Example Generation Comparisons
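
The selection and ordering strategies above can be sketched in a few lines: pick the k pool examples with the highest word overlap with the query, place the most similar last (closest to the query), and assemble a few-shot prompt. A toy illustration, assuming a word-overlap scorer in place of a real embedding model (function names and the Input:/Output: template are my own):

```python
def select_examples(pool, query, k=2):
    """Selection + ordering: keep the k examples with the highest word overlap
    with the query, sorted so the most similar example appears last."""
    q = set(query.lower().split())
    scored = sorted(pool, key=lambda ex: len(q & set(ex[0].lower().split())))
    return scored[-k:]

def build_few_shot_prompt(instruction, examples, query):
    """Assemble a few-shot prompt: instruction, worked examples, then the new input."""
    parts = [instruction, ""]
    for inp, out in examples:
        parts += [f"Input: {inp}", f"Output: {out}", ""]
    parts += [f"Input: {query}", "Output:"]
    return "\n".join(parts)
```

With an empty `examples` list the same function degrades gracefully to a zero-shot prompt.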

Week 4: Retrieval Augmented Generation (RAG)
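
A minimal RAG sketch, assuming a toy bag-of-words retriever in place of a real embedding index: rank documents by cosine similarity to the question and prepend the top hit as grounding context. Function names and the prompt template are illustrative only:

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(corpus, query, k=1):
    """Rank documents by similarity to the query; a stand-in for a vector store."""
    qv = Counter(query.lower().split())
    ranked = sorted(corpus, key=lambda d: cosine(Counter(d.lower().split()), qv),
                    reverse=True)
    return ranked[:k]

def rag_prompt(corpus, question):
    """Prepend retrieved context so the LLM can ground its answer in it."""
    context = "\n".join(retrieve(corpus, question, k=1))
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
```

A real system would swap the word-overlap scorer for dense embeddings and chunk long documents before indexing, but the retrieve-then-prompt shape stays the same.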

Week 5: Guardrails

  • Guardrails By Implementation Point
    • e.g. Pre-training vs. Inference Time
  • Guardrails By Implementation Type
    • Classification vs. Judge LLM
  • Guardrails By Content Type
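
One of the simplest inference-time, classification-style guardrails is an input screen that rejects a request before it ever reaches the model. A toy sketch (the blocklist phrases and function name are hypothetical; production systems typically use a trained classifier or a judge LLM instead of keyword matching):

```python
# Hypothetical policy terms; a real guardrail would use a classifier or judge LLM.
BLOCKLIST = {"credit card number", "social security"}

def input_guardrail(user_message):
    """Inference-time guardrail: screen the request before the model sees it.
    Returns (allowed, reason)."""
    lowered = user_message.lower()
    for phrase in BLOCKLIST:
        if phrase in lowered:
            return False, f"blocked: request mentions '{phrase}'"
    return True, "ok"
```

The same check can be mirrored on the output side, screening the model's response before it reaches the user.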

Week 6: Parameter-Efficient Fine-Tuning

  • LLM Architecture
  • PEFT Methods (Prefix-Tuning, LoRA, Adapters)
    • Comparison
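
The core LoRA idea can be shown with plain arithmetic: freeze the pretrained weight W and learn a low-rank update B·A (scaled by alpha/r), so only r·(d_in + d_out) parameters train instead of d_in·d_out. A hand-sized sketch with illustrative numbers, not real model weights:

```python
def matmul(A, B):
    """Plain-Python matrix multiply for the tiny example below."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

# Frozen pretrained weight W (d_out x d_in) stays untouched during fine-tuning.
W = [[1.0, 0.0],
     [0.0, 1.0]]
# LoRA trains only the low-rank factors: A (r x d_in) and B (d_out x r), r=1.
A = [[0.5, -0.5]]
B = [[2.0],
     [0.0]]
scaling = 1.0  # alpha / r

delta = matmul(B, A)  # rank-1 update B @ A
W_adapted = [[w + scaling * d for w, d in zip(w_row, d_row)]
             for w_row, d_row in zip(W, delta)]
```

At inference time the update can be merged into W as above, so the adapted model adds no extra latency; at training time only A and B receive gradients.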

Week 7: Chains, Agents, Tool Use, Planning & Constrained Output
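
Tool use and constrained output meet in one common pattern: constrain the model to emit a structured (e.g. JSON) tool call, then have the agent loop parse and dispatch it. A minimal sketch (the tool registry, schema, and function name are all hypothetical):

```python
import json

# Hypothetical tool registry: the agent routes model-emitted calls to functions.
TOOLS = {
    "add": lambda args: args["a"] + args["b"],
    "upper": lambda args: args["text"].upper(),
}

def run_tool_call(model_output):
    """Parse a model's tool call, constrained to a JSON schema like
    {"tool": ..., "arguments": {...}}, and dispatch it to the registry."""
    call = json.loads(model_output)
    return TOOLS[call["tool"]](call["arguments"])
```

In a full agent loop, the tool's return value would be fed back to the model as an observation, and invalid JSON or an unknown tool name would trigger a retry rather than a crash.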

Week 8: Code Examples & Implementation

Week 9: Watermarks, Adversarial Attacks, Intellectual Property Protection

  • Adversarial challenges for LLMs
  • Steganography techniques
  • Linguistic steganography methods
  • Watermarking approaches
  • Data poisoning and backdoor attacks
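
A simplified sketch of a green-list watermarking scheme: use the previous token to seed a pseudo-random partition of the vocabulary into "green" and "red" halves, bias generation toward green tokens, and detect the watermark by measuring the green fraction of a text (the hash choice and function names here are my own):

```python
import hashlib

def is_green(prev_token, token):
    """Pseudo-randomly assign ~half the vocabulary to the 'green list',
    seeded by the previous token, so the partition changes at every step."""
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(tokens):
    """Detection side: watermarked text shows an unusually high green fraction;
    unwatermarked text hovers near 0.5."""
    pairs = list(zip(tokens, tokens[1:]))
    if not pairs:
        return 0.0
    return sum(is_green(p, t) for p, t in pairs) / len(pairs)
```

The generation side (not shown) would add a small logit bonus to green tokens at each decoding step, which is what pushes the green fraction detectably above chance.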

Week 10: Recap, What’s Next? Ethical & Legal Issues

Course Projects

For their final project, students chose to either:

  1. Write a term paper with 6+ sources, either surveying a task approached with various LLM techniques or taking a deep dive into one method (e.g. LoRA) and its various aspects.
  2. Implement a pilot system using one of the covered techniques and produce a short writeup.

See the students’ projects here.