Kim, J orcid.org/0000-0002-3456-6614 (2019) Sequential training algorithm for neural networks. [Preprint - arXiv]
Abstract
A sequential training method for large-scale feedforward neural networks is presented. Each layer of the network is decoupled and trained separately; once training is completed for every layer, the layers are combined back together. The resulting performance is sub-optimal compared with full-network training when the latter attains its optimal solution. However, achieving the optimal solution for the full network is often infeasible or requires long computing times. The proposed sequential approach significantly reduces the required computing resources and tends to converge better, since only a single layer is optimised at each optimisation step. The modifications to existing algorithms needed to implement sequential training are minimal. The performance is verified on a simple example.
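The abstract does not give the algorithm's details, but the idea it describes — decoupling the layers and optimising one layer at a time rather than training the whole network jointly — can be illustrated with a minimal sketch. The toy problem (fitting `sin(x)` with a two-layer network), the layer widths, the learning rate, and the alternating gradient-descent sweeps below are all illustrative assumptions, not the paper's actual method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: fit y = sin(x) on a 1-D grid (illustrative only).
X = np.linspace(-2.0, 2.0, 64).reshape(-1, 1)
Y = np.sin(X)

H = 8  # hidden-layer width (arbitrary choice for the sketch)
W1 = rng.normal(scale=0.5, size=(1, H))
b1 = np.zeros(H)
W2 = rng.normal(scale=0.5, size=(H, 1))

def mse(W1, b1, W2):
    """Mean-squared error of the two-layer network tanh(X W1 + b1) W2."""
    return float(np.mean((np.tanh(X @ W1 + b1) @ W2 - Y) ** 2))

lr = 0.02
loss_before = mse(W1, b1, W2)

# Sequential (layer-wise) training: each sweep optimises one layer
# while the other layer is held fixed, instead of a joint update
# over all parameters at once.
for sweep in range(10):
    # Step 1: train the output layer with the hidden layer frozen.
    for _ in range(100):
        A = np.tanh(X @ W1 + b1)            # fixed hidden features
        err = A @ W2 - Y
        W2 -= lr * 2.0 * A.T @ err / len(X)
    # Step 2: train the hidden layer with the output layer frozen.
    for _ in range(100):
        Z = X @ W1 + b1
        A = np.tanh(Z)
        err = A @ W2 - Y
        dA = 2.0 * err @ W2.T / len(X)
        dZ = dA * (1.0 - A ** 2)            # derivative of tanh
        W1 -= lr * X.T @ dZ
        b1 -= lr * dZ.sum(axis=0)

loss_after = mse(W1, b1, W2)
```

Each inner loop is an ordinary gradient-descent problem over a single layer's parameters, which is the source of the claimed resource savings: the per-step optimisation touches far fewer variables than a joint update over the full network.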
Metadata
Item Type: | Preprint
---|---
Authors/Creators: | Kim, J
Dates: | 2019
Institution: | The University of Leeds
Academic Units: | The University of Leeds > Faculty of Engineering & Physical Sciences (Leeds) > School of Mechanical Engineering (Leeds) > Institute of Engineering Systems and Design (iESD) (Leeds)
Depositing User: | Symplectic Publications
Date Deposited: | 08 Nov 2024 11:29
Last Modified: | 08 Nov 2024 11:29
Identification Number (DOI): | 10.48550/arXiv.1905.07490
Open Archives Initiative ID (OAI ID): | oai:eprints.whiterose.ac.uk:163423