Researchers have identified and formalized sources of nondeterminism in large language models that persist even when using deterministic decoding settings. The team introduces "background temperature" to quantify this hidden randomness caused by implementation-level factors like floating-point arithmetic and kernel behavior.
Key Takeaways
- LLMs exhibit nondeterministic behavior even at temperature T=0 due to implementation-level factors
- Sources include batch-size variation, kernel non-invariance, and floating-point non-associativity (see the sketch below this list)
- The "background temperature" concept formalizes and characterizes this previously unexplained hidden randomness
LLMs produce different outputs even at temperature zero due to hidden computational randomness.
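As a rough illustration of the floating-point source, the pure-Python sketch below (with made-up values, not the paper's actual setup) shows that summing the same numbers in a different order or in fixed-size chunks, as a kernel might when batch size changes its reduction strategy, yields slightly different totals:

```python
# Hypothetical illustration: floating-point addition is not associative,
# so changing the accumulation order (as kernels may do when the batch
# size or reduction strategy changes) perturbs the result slightly.
import random

random.seed(0)
# Values spanning many orders of magnitude make rounding differences visible.
values = [random.uniform(-1.0, 1.0) * 10.0 ** random.randint(-8, 8)
          for _ in range(100_000)]

sequential = sum(values)                  # left-to-right accumulation

shuffled = values[:]
random.shuffle(shuffled)
reordered = sum(shuffled)                 # same numbers, different order

# "Batched" reduction: sum fixed-size chunks first, then combine them.
chunked = sum(sum(values[i:i + 256]) for i in range(0, len(values), 256))

print(f"sequential: {sequential!r}")
print(f"reordered:  {reordered!r}")
print(f"chunked:    {chunked!r}")
print("max abs diff:", max(abs(sequential - reordered), abs(sequential - chunked)))
```

The three totals are nearly equal but not bit-identical; at temperature zero, an analogous drift in logits can flip an argmax and change the generated token.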
Why It Matters
Understanding hidden sources of randomness in LLMs is critical for reliability, reproducibility, and deployment in high-stakes applications. This research provides a framework for quantifying, and potentially controlling, nondeterminism that practitioners currently cannot fully predict or manage, which could improve model robustness and consistency across computational environments.