Abstract
A novel batch gradient descent algorithm for parameterized quantum circuits is proposed that significantly reduces the time complexity of training quantum neural networks with respect to batch size. Batch data arranged in a quantum random access memory (qRAM) structure are mapped onto a single circuit that estimates the average loss. Because the number of circuits decreases, quantum amplitude estimation can be applied over a wider range, yielding a quadratic speedup in batch size.
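The quadratic speedup claimed in the abstract concerns query counts: estimating the average loss over a batch of N samples classically takes on the order of N circuit evaluations, whereas amplitude estimation over a qRAM-loaded superposition of the batch takes on the order of √N oracle queries. The following is a minimal back-of-the-envelope sketch of that scaling only; the function names are illustrative and constants are omitted, so it is not an implementation of the paper's algorithm.

```python
import math

def classical_queries(batch_size: int) -> int:
    # One circuit evaluation per sample: O(N) loss evaluations per batch.
    return batch_size

def qae_queries(batch_size: int) -> int:
    # Hypothetical count: with the batch in a qRAM superposition, amplitude
    # estimation reads the average loss with O(sqrt(N)) oracle queries
    # (constant factors and precision dependence omitted).
    return math.ceil(math.sqrt(batch_size))

for n in (16, 256, 4096):
    print(f"N={n}: classical ~{classical_queries(n)}, QAE ~{qae_queries(n)}")
```

The gap between the two counts grows as √N, which is the sense in which batching more data per circuit widens the advantage.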
| Original language | English |
|---|---|
| Article number | e70162 |
| Journal | Electronics Letters |
| Volume | 61 |
| Issue number | 1 |
| DOIs | |
| Publication status | Published - 2025 Jan 1 |
Bibliographical note
Publisher Copyright:© 2025 The Author(s). Electronics Letters published by John Wiley & Sons Ltd on behalf of The Institution of Engineering and Technology.
Keywords
- computational complexity
- gradient methods
- learning (artificial intelligence)
- quantum computing
ASJC Scopus subject areas
- Electrical and Electronic Engineering