Why bigger training sets must yield diminishing returns: An information-theoretic speed limit for AI

This is a preprint and may not have undergone formal peer review

Stone, J. (2025) Why bigger training sets must yield diminishing returns: An information-theoretic speed limit for AI. [Preprint submitted to Neural Computation]

Abstract

Metadata

Item Type: Preprint
Authors/Creators:
  • Stone, J.
Copyright, Publisher and Additional Information:

© 2025 The Author(s). This is an author-produced version of a paper submitted for publication in Neural Computation. Uploaded in accordance with the publisher's self-archiving policy.

Dates:
  • Submitted: 25 April 2025
Institution: The University of Sheffield
Academic Units: The University of Sheffield > Faculty of Science (Sheffield) > Department of Psychology (Sheffield)
Depositing User: Symplectic Sheffield
Date Deposited: 09 May 2025 15:21
Last Modified: 09 May 2025 15:21
Status: Submitted
Publisher: Massachusetts Institute of Technology Press