North Coast Synthesis Ltd.

Tag: training

Memorization and generalization

2026-01-19 Video memorization text theory training How much arbitrary information, like random bits, can a language model memorize during training? This paper suggests the answer is 3.6 bits per parameter. Access: Free account (logged in)
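A quick back-of-envelope conversion shows what a figure like 3.6 bits per parameter would imply in familiar units, assuming it applies uniformly across the model; the helper name and constant below are illustrative, not from the paper.

```python
# Back-of-envelope: total memorization capacity implied by a
# bits-per-parameter figure (3.6 is the value the paper suggests).
BITS_PER_PARAM = 3.6

def capacity_megabytes(n_params: float) -> float:
    """Approximate raw memorized data, in megabytes, for a model of n_params."""
    total_bits = n_params * BITS_PER_PARAM
    return total_bits / 8 / 1e6   # bits -> bytes -> megabytes

# A 1-billion-parameter model: 3.6e9 bits, about 450 MB of raw data.
print(capacity_megabytes(1e9))
```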

Cross-entropy

2026-01-12 Video basics math theory training Entropy is the negative logarithm of probability, averaged over all outcomes. Cross-entropy is a similar calculation, taking the negative log probabilities from one distribution but averaging them over a different distribution. These concepts form an excuse for reading Claude Shannon's classic paper A Mathematical Theory of Communication, and cross-entropy in particular is the most popular loss function for language model training. Access: $$$ Pro
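The definitions in this blurb translate directly into code. A minimal sketch, using base-2 logarithms so the answers come out in bits as in Shannon's paper; the function names and example distributions are mine:

```python
import math

def entropy(p):
    """Shannon entropy in bits: -log2 p(x), averaged over p itself."""
    return -sum(px * math.log2(px) for px in p if px > 0)

def cross_entropy(p, q):
    """-log2 q(x), averaged over the *different* distribution p.
    Equals entropy(p) when q == p, and is larger otherwise."""
    return -sum(px * math.log2(qx) for px, qx in zip(p, q) if px > 0)

p = [0.5, 0.25, 0.25]
print(entropy(p))                          # 1.5 bits
print(cross_entropy(p, p))                 # same: 1.5 bits
print(cross_entropy(p, [1/3, 1/3, 1/3]))   # larger: ~1.585 bits
```

The gap between the last two numbers is the penalty for modelling p with the wrong distribution q, which is exactly what makes cross-entropy useful as a training loss.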

Optimization with Adam

2025-12-15 Video basics math theory training Training consists of finding the parameters for a model that will give the lowest possible value of the loss function. How do we actually do that, and do it efficiently? The Adam algorithm, published in 2015, is one way to do it, and it remains popular today. Access: $$$ Pro
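A minimal sketch of the Adam update on a one-variable problem, following the moment estimates and bias correction from the 2015 paper; the 1/sqrt(t) learning-rate decay and the toy quadratic are my choices for the demonstration, not part of the algorithm itself.

```python
import math

def adam_minimize(grad, x, steps=2000, lr=0.5,
                  beta1=0.9, beta2=0.999, eps=1e-8):
    """Minimal Adam loop: exponential moving averages of the gradient (m)
    and its square (v), bias-corrected, scale each update step."""
    m = v = 0.0
    for t in range(1, steps + 1):
        g = grad(x)
        m = beta1 * m + (1 - beta1) * g        # first-moment estimate
        v = beta2 * v + (1 - beta2) * g * g    # second-moment estimate
        m_hat = m / (1 - beta1 ** t)           # bias correction
        v_hat = v / (1 - beta2 ** t)
        # 1/sqrt(t) decay, as in the paper's convergence analysis
        x -= (lr / math.sqrt(t)) * m_hat / (math.sqrt(v_hat) + eps)
    return x

# Loss f(x) = (x - 3)^2 has gradient 2(x - 3); minimum at x = 3.
print(adam_minimize(lambda x: 2 * (x - 3), x=0.0))   # settles near 3.0
```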

Automatic differentiation

2025-11-03 Video basics math theory training Training a machine learning model is one case of the larger class of "optimization" problems; to solve it, you need to calculate how the output (i.e. the loss) changes in relation to inputs (such as weights). I introduce the calculus topic of the derivative, and discuss how to calculate the derivative of a piece of software by augmenting the compiler or interpreter to do it during execution. Access: $ Basic
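The "augment the interpreter" idea can be demonstrated in a few lines with dual numbers (forward-mode differentiation): each value carries its derivative alongside it, and overloaded arithmetic applies the chain rule as the program runs. A sketch of the technique, not any particular library's implementation:

```python
class Dual:
    """A (value, derivative) pair: arithmetic on Duals applies the
    chain rule automatically as the program executes (forward mode)."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,                     # product rule
                    self.val * other.dot + self.dot * other.val)
    __rmul__ = __mul__

def f(x):
    return 3 * x * x + 2 * x + 1   # ordinary-looking code, no changes needed

x = Dual(2.0, 1.0)    # dot=1 means "differentiate with respect to x"
y = f(x)
print(y.val, y.dot)   # f(2) = 17, and f'(x) = 6x + 2 gives f'(2) = 14
```

Reverse-mode differentiation (backpropagation), which training actually uses, records the same chain-rule information but replays it from output to input.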

Goldfish loss

2025-10-27 Video training theory text Apertus copyright memorization It may be a problem for text models to generate exact quotes from training data. This paper looks at a simple modification to the training loss function intended to prevent models from reproducing their training data verbatim. The technique was adopted by the recent Apertus models in their pursuit of "compliance." Access: Free account (logged in)
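A sketch of the core idea as I understand it: drop a pseudorandom fraction of token positions from the loss, chosen by hashing nearby tokens so the same passage always masks the same positions, leaving the model unable to memorize a complete verbatim sequence. The hashing scheme, names, and parameters below are illustrative, not the paper's exact implementation.

```python
import hashlib

K = 4         # drop roughly 1 token position in K from the loss
CONTEXT = 3   # hash this many preceding tokens (illustrative choice)

def goldfish_mask(tokens):
    """Return per-position loss weights: 1 = token contributes to the
    training loss, 0 = dropped.  The choice is a deterministic hash of
    the preceding tokens, so a repeated passage always hides the same
    positions, wherever in the corpus it appears."""
    mask = []
    for i in range(len(tokens)):
        ctx = ",".join(map(str, tokens[max(0, i - CONTEXT):i]))
        h = int(hashlib.sha256(ctx.encode()).hexdigest(), 16)
        mask.append(0 if h % K == 0 else 1)
    return mask

tokens = [17, 4, 99, 23, 5, 88, 12, 7]
print(goldfish_mask(tokens))   # roughly 1/K of the positions come out 0
```

Because a masked token never receives a gradient, the model cannot learn to predict it from that context, breaking any chain of exact quotation at that point.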

Quis custodiet reward models

2025-09-29 Video alignment training text LLaMA Gemma Large language models are "aligned" using smaller, specially trained reward models. These are often secret, and poorly studied even if public. This paper opens the door to exploring reward models by asking them about their values. Access: Free account (logged in)

Data for testing logical inference

2025-09-08 Video training tools text logic This short paper introduces a dataset, or software for generating such, to test language models' handling of chains of logical inference. Access: $ Basic

Dog-whistle GANs

2025-05-21 Video basics training security theory image GAN An introduction to Generative Adversarial Nets and their implications for watermarking generated text. Access: Public