Watermarking LLM output
2025-11-17
Tags: copyright, sampling, security, text

If we're running an LLM service, maybe we don't want users to be able to pass off the model's output as human-written. A simple modification to the search that generates the text can make it easily recognizable as LLM output, without much disrupting its content or (legitimate) usefulness. But will the watermark withstand an intelligent attack?
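
To make the idea concrete, here is a minimal sketch of one well-known scheme of this kind, the "green list" watermark of Kirchenbauer et al. (2023); the function names and the `gamma`/`delta` parameters below are illustrative, not necessarily the scheme this post develops. The previous token seeds a hash that marks a fraction `gamma` of the vocabulary as green, sampling is softly biased by `delta` toward green tokens, and a detector flags text whose green-token count is improbably high under the null hypothesis of unwatermarked text.

```python
import hashlib
import math
import random

def green_ids(prev_token: int, vocab_size: int, gamma: float = 0.5) -> set[int]:
    """Seed an RNG with the previous token and mark a fixed
    fraction gamma of the vocabulary as 'green'."""
    seed = int(hashlib.sha256(str(prev_token).encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(range(vocab_size), int(gamma * vocab_size)))

def watermarked_sample(logits: list[float], prev_token: int,
                       delta: float = 2.0, gamma: float = 0.5,
                       rng: random.Random | None = None) -> int:
    """Add a bias delta to green-list logits before softmax sampling,
    so the model prefers (but is not forced to use) green tokens."""
    rng = rng or random.Random()
    green = green_ids(prev_token, len(logits), gamma)
    biased = [l + (delta if i in green else 0.0) for i, l in enumerate(logits)]
    m = max(biased)  # subtract the max for numerical stability
    probs = [math.exp(l - m) for l in biased]
    return rng.choices(range(len(logits)), weights=probs, k=1)[0]

def detect(tokens: list[int], vocab_size: int, gamma: float = 0.5) -> float:
    """z-score against the null hypothesis 'unwatermarked text':
    under H0 each token lands in the green list with probability gamma."""
    hits = sum(1 for prev, tok in zip(tokens, tokens[1:])
               if tok in green_ids(prev, vocab_size, gamma))
    n = len(tokens) - 1
    return (hits - gamma * n) / math.sqrt(n * gamma * (1 - gamma))

if __name__ == "__main__":
    rng = random.Random(0)
    vocab = 1000
    toks = [0]
    for _ in range(200):
        logits = [rng.gauss(0, 1) for _ in range(vocab)]  # stand-in for model logits
        toks.append(watermarked_sample(logits, toks[-1], rng=rng))
    print(f"z-score on watermarked text: {detect(toks, vocab):.1f}")
```

With these (arbitrary) settings, a couple hundred tokens already yield a z-score far above any sensible threshold (z > 4 corresponds to a false-positive rate around 3e-5), while the bias is soft enough that a sufficiently high-probability token can still be emitted even when it falls outside the green list.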
