ai-for-less-suffering.com


descriptive claim

Under the assumption that a generative model's output entropy is at most the true entropy of its training distribution, the relative mutual information between training data and model outputs (the mutual information normalized by the entropy of the training distribution) can be lower-bounded by 1 - H(model outputs) / H(training distribution). This implies that lower-entropy model outputs carry proportionally more information from the training dataset. A rendering of the claimed bound in equation form follows below.
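
A minimal LaTeX sketch of the claimed bound. The symbols are my shorthand, not the claim's own notation: T is the training distribution, M the model's output distribution, H Shannon entropy, and I mutual information.

% Claimed bound: mutual information between training data (T) and
% model outputs (M), normalized by the training entropy H(T).
% Assumption stated in the claim: H(M) <= H(T).
\[
  \frac{I(T;\,M)}{H(T)} \;\ge\; 1 - \frac{H(M)}{H(T)}
  \;=\; \frac{H(T) - H(M)}{H(T)} .
\]
% The assumption H(M) <= H(T) is what keeps the right-hand side
% non-negative, i.e. non-vacuous. As H(M) decreases, the bound
% rises toward 1, which is the claim's stated implication that
% lower-entropy outputs carry proportionally more training-set
% information.

Note that the inequality itself is asserted by the claim, not derived here; the confidence score below reflects the claim database's own assessment.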

desc_model_output_entropy_training_info_bound

confidence
0.70

Evidence (1)

supports (1)

Camps holding this claim (3)