[1] E. Bullmore, O. Sporns. The economy of brain network organization.
[2] D.D. Cox, T. Dean. Neural networks and neuroscience-inspired computer vision.
[3] C.E. Leiserson, et al. There’s plenty of room at the top: what will drive computer performance after Moore’s law?
[4] M.Z. Alom, et al. A state-of-the-art survey on deep learning theory and architectures.
[5] Y. LeCun, et al. Backpropagation applied to handwritten zip code recognition.
[6] A. Krizhevsky, I. Sutskever, G.E. Hinton. ImageNet classification with deep convolutional neural networks.
[7] Mahardi, I.-H. Wang, K.-C. Lee, S.-L. Chang. Images classification of dogs and cats using fine-tuned VGG models.
[8] K. He, X. Zhang, S. Ren, J. Sun. Deep residual learning for image recognition.
[9] Y. Goldberg.
[10] B. Hu, Z. Lu, H. Li, Q. Chen. Convolutional neural network architectures for matching natural language sentences. (2015).
[11] G. Hinton, et al. Deep neural networks for acoustic modeling in speech recognition: the shared views of four research groups.
[12] T. Wu, et al. A brief overview of ChatGPT: the history, status quo and potential future development.
[14] D. Patterson, et al. Carbon emissions and large neural network training. (2021).
[15] M. Horowitz. 1.1 Computing's energy problem (and what we can do about it).
[16] Z. Yan, J. Zhou, W.-F. Wong. Energy efficient ECG classification with spiking neural network.
[17] E. Sadovsky, R. Jarina, R. Orjesek. Image recognition using spiking neural networks.
[18] F. Martinelli, G. Dellaferrera, P. Mainar, M. Cernak. Spiking neural networks trained with backpropagation for low power neuromorphic implementation of voice activity detection.
[19] G. Foderaro, C. Henriquez, S. Ferrari. Indirect training of a spiking neural network for flight control via spike-timing-dependent synaptic plasticity.
[20] V. Vanhoucke, A. Senior, M.Z. Mao. Improving the speed of neural networks on CPUs. Deep Learning and Unsupervised Feature Learning Workshop (NIPS), MIT Press (2011), pp. 1-8.
[21] L. Wang, et al. Superneurons: dynamic GPU memory management for training deep neural networks.
[22] S. Mittal. A survey of FPGA-based accelerators for convolutional neural networks.
[23] E. Nurvitadhi, et al. Accelerating recurrent neural networks in analytics servers: comparison of FPGA, CPU, GPU, and ASIC.
[24] X. Ju, B. Fang, R. Yan, X. Xu, H. Tang. An FPGA implementation of deep spiking neural networks for low-power and fast classification.
[25] L.P. Maguire, et al. Challenges for large-scale implementations of spiking neural networks on FPGAs.
[26] P.A. Merolla, et al. A million spiking-neuron integrated circuit with a scalable communication network and interface.
[27] B. Govoreanu, et al. 10×10 nm² Hf/HfOx crossbar resistive RAM with excellent performance, reliability and low-energy operation.
[28] B.J. Choi, et al. High-speed and low-energy nitride memristors. Adv. Funct. Mater., 26 (2016), pp. 5290-5296.
[29] J. Zhou, et al. Very low-programming-current RRAM with self-rectifying characteristics.
[30] M.-J. Lee, et al. A fast, high-endurance and scalable non-volatile memory device made from asymmetric Ta2O(5-x)/TaO(2-x) bilayer structures.
[31] J.R. Brink, C.R. Haden. The computer and the brain.
[32] W. Jeon, et al. Chapter Six - Deep Learning with GPUs.
[33] M. Capra, et al. Hardware and software optimizations for accelerating deep neural networks: survey of current trends, challenges, and the road ahead.
[34] A. Sebastian, M. Le Gallo, R. Khaddam-Aljameh, E. Eleftheriou. Memory devices and applications for in-memory computing.
[35] J. Ahn, S. Yoo, O. Mutlu, K. Choi. PIM-enabled instructions: a low-overhead, locality-aware processing-in-memory architecture.
[36] M. Prezioso, et al. Training and operation of an integrated neuromorphic network based on metal-oxide memristors.
[37] G.W. Burr, et al. Neuromorphic computing using non-volatile memory.
[38] X. Zhu, W.D. Lu. Optogenetics-inspired tunable synaptic functions in memristors.
[39] O. Krestinskaya, A.P. James, L.O. Chua. Neuromemristive circuits for edge computing: a review.
[40] W. Maass. Networks of spiking neurons: the third generation of neural network models.
[41] W.S. McCulloch, W. Pitts. A logical calculus of the ideas immanent in nervous activity.
[42] V. Nair, G.E. Hinton. Rectified linear units improve restricted Boltzmann machines. Proceedings of the 27th International Conference on Machine Learning (ICML-10), Omnipress (2010), pp. 807-814.
[43] D.E. Rumelhart, G.E. Hinton, R.J. Williams. Learning representations by back-propagating errors.
[44] K. Roy, A. Jaiswal, P. Panda. Towards spike-based machine intelligence with neuromorphic computing.
[45] A. Taherkhani, et al. A review of learning in biologically plausible spiking neural networks.
[46] Y. Cao, Y. Chen, D. Khosla. Spiking deep convolutional neural networks for energy-efficient object recognition.
[47] P.U. Diehl, et al. Fast-classifying, high-accuracy spiking deep networks through weight and threshold balancing.
[48] A.L. Hodgkin, A.F. Huxley. A quantitative description of membrane current and its application to conduction and excitation in nerve.
[49] W. Gerstner, W.M. Kistler. Spiking neuron models: single neurons, populations, plasticity.
[50] E.M. Izhikevich. Simple model of spiking neurons.
[51] B. Segee. Methods in neuronal modeling: from ions to networks, 2nd Edition.
[52] W. Fang, et al. Incorporating learnable membrane time constant to enhance learning of spiking neural networks.
[53] Q. Duan, et al. Spiking neurons with spatiotemporal dynamics and gain modulation for monolithically integrated memristive neural networks.
[54] R. Yang, H.-M. Huang, X. Guo. Memristive synapses and neurons for bioinspired computing.
[55] M. Xu, et al. Recent advances on neuromorphic devices based on chalcogenide phase-change materials.
[56] J.-N. Huang, T. Wang, H.-M. Huang, X. Guo. Adaptive SRM neuron based on NbO memristive device for neuromorphic computing.
[57] A. Shaban, S.S. Bezugam, M. Suri. An adaptive threshold neuron for recurrent spiking neural networks with nanodevice hardware implementation.
[58] S. Hochreiter, J. Schmidhuber. Long short-term memory.
[59] J. Tang, et al. Bridging biological and artificial neural networks with emerging neuromorphic devices: fundamentals, progress, and challenges.
[60] T. Masquelier, R. Guyonneau, S.J. Thorpe. Competitive STDP-based spike pattern learning.
[61] S.R. Kheradpisheh, M. Ganjtabesh, S.J. Thorpe, T. Masquelier. STDP-based spiking deep convolutional neural networks for object recognition.
[62] M. Mozafari, S.R. Kheradpisheh, T. Masquelier, A. Nowzari-Dalini, M. Ganjtabesh. First-spike-based visual categorization using reward-modulated STDP.
[63] M. Mozafari, M. Ganjtabesh, A. Nowzari-Dalini, S.J. Thorpe, T. Masquelier. Bio-inspired digit recognition using reward-modulated spike-timing-dependent plasticity in deep convolutional networks.
[64] P.U. Diehl, M. Cook. Unsupervised learning of digit recognition using spike-timing-dependent plasticity.
[65] A. Tavanaei, A. Maida. BP-STDP: approximating backpropagation using spike timing dependent plasticity.
[66] H. Zheng, Y. Wu, L. Deng, Y. Hu, G. Li. Going deeper with directly-trained larger spiking neural networks. Proceedings of the AAAI Conference on Artificial Intelligence, AAAI (2021), pp. 11062-11070.
[67] Y. Wu, et al. Direct training for spiking neural networks: faster, larger, better.
[68] P. Gu, R. Xiao, G. Pan, H. Tang. STCA: spatio-temporal credit assignment with delayed feedback in deep spiking neural networks. Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence (IJCAI-19), Morgan Kaufmann (2019), pp. 1366-1372.
[69] Y. Yan, et al. Graph-based spatio-temporal backpropagation for training spiking neural networks. 2021 IEEE 3rd International Conference on Artificial Intelligence Circuits and Systems (AICAS), IEEE (2021), pp. 1-4.
[70] L. Liang, et al. H2Learn: high-efficiency learning accelerator for high-accuracy spiking neural networks.
[71] J. Han, Z. Wang, J. Shen, H. Tang. Symmetric-threshold ReLU for fast and nearly lossless ANN-SNN conversion.
[72] J. Ding, Z. Yu, Y. Tian, T. Huang. Optimal ANN-SNN conversion for fast and accurate inference in deep spiking neural networks. Preprint.
[73] W. Fang, et al. SpikingJelly: an open-source machine learning infrastructure platform for spike-based intelligence.
[74] J.K. Eshraghian, et al. Training spiking neural networks using lessons from deep learning.
[75] M. Stimberg, R. Brette, D.F. Goodman. Brian 2, an intuitive and efficient neural simulator.
[76] D.B. Strukov, G.S. Snider, D.R. Stewart, R.S. Williams. The missing memristor found.
[77] Y.-T. Su, et al. A method to reduce forming voltage without degrading device performance in hafnium oxide-based 1T1R resistive random access memory.
[78] S.-X. Chen, S.-P. Chang, S.-J. Chang, W.-K. Hsieh, C.-H. Lin. Highly stable ultrathin TiO2 based resistive random access memory with low operation voltage.
[79] A. Prakash, D. Deleruyelle, J. Song, M. Bocquet, H. Hwang. Resistance controllability and variability improvement in a TaOx-based resistive memory for multilevel storage application.
[80] F.M. Simanjuntak, D. Panda, K.-H. Wei, T.-Y. Tseng. Status and prospects of ZnO-based resistive switching memory devices.
[81] W. Banerjee, et al. Occurrence of resistive switching and threshold switching in atomic layer deposited ultrathin (2 nm) aluminium oxide crossbar resistive random access memory.
[82] Y. Li, et al. Ultrafast synaptic events in a chalcogenide memristor.
[83] Y. Li, et al. Activity-dependent synaptic plasticity of a chalcogenide electronic synapse for neuromorphic systems.
[84] X. Xia, et al. 2D-material-based volatile and nonvolatile memristive devices for neuromorphic computing.
[85] G. Cao, et al. 2D material based synaptic devices for neuromorphic computing.
[86] Y. van de Burgt, A. Melianas, S.T. Keene, G. Malliaras, A. Salleo. Organic electronics for neuromorphic computing.
[87] L. Yuan, S. Liu, W. Chen, F. Fan, G. Liu. Organic memory and memristors: from mechanisms, materials to devices.
[88] R. Waser, R. Dittmann, G. Staikov, K. Szot. Redox-based resistive switching memories – nanoionic mechanisms, prospects, and challenges.
[89] F. Zahoor, T.Z. Azni Zulkifli, F.A. Khanday. Resistive random access memory (RRAM): an overview of materials, switching mechanism, performance, multilevel cell (MLC) storage, modeling, and applications. Nanoscale Res. Lett., 15 (2020), p. 90.
[90] M.A. Zidan, J.P. Strachan, W.D. Lu. The future of electronics based on memristive systems.
[91] R. Schmitt, et al. Accelerated ionic motion in amorphous memristor oxides for nonvolatile memories and neuromorphic computing.
[92] D.-H. Kwon, et al. Atomic structure of conducting nanofilaments in TiO2 resistive switching memory.
[93] I. Valov, R. Waser, J.R. Jameson, M.N. Kozicki. Electrochemical metallization memories—fundamentals, applications, prospects.
[94] F. Qin, Y. Zhang, H.W. Song, S. Lee. Enhancing memristor fundamentals through instrumental characterization and understanding reliability issues.
[95] I. Chakraborty, A. Jaiswal, A.K. Saha, S.K. Gupta, K. Roy. Pathways to efficient neuromorphic computing with non-volatile memory technologies.
[96] A.R. Patil, T.D. Dongale, R.K. Kamat, K.Y. Rajpure. Binary metal oxide-based resistive switching memory devices: a status review.
[97] Z. Wang, et al. Engineering incremental resistive switching in TaOx based memristors for brain-inspired computing.
[98] L. Wu, H. Liu, J. Li, S. Wang, X. Wang. A multi-level memristor based on Al-doped HfO2 thin film.
[99] W.S. Choi, et al. Influence of Al2O3 layer on InGaZnO memristor crossbar array for neuromorphic applications.
[100] Y. Xiao, et al. Improved artificial synapse performance of Pt/HfO2/BiFeO3/HfO2/TiN memristor through N2 annealing.
[101] Y.-L. Zhu, et al. Uniform and robust TiN/HfO2/Pt memristor through interfacial Al-doping engineering.
[102] C. Li, et al. Large memristor crossbars for analog computing.
[103] C. Li, et al. Analogue signal and image processing with large memristor crossbars.
[104] M. Hu, et al. Memristor-based analog computation and neural network classification with a dot product engine.
[105] C. Li, et al. Efficient and self-adaptive in-situ learning in multilayer memristor neural networks.
[106] Q. Liu, et al. 33.2 A fully integrated analog ReRAM based 78.4 TOPS/W compute-in-memory chip with fully parallel MAC computing. 2020 IEEE International Solid-State Circuits Conference (ISSCC), IEEE (2020), pp. 500-502.
[107] P. Yao, et al. Fully hardware-implemented memristor convolutional neural network.
[108] K. Jeon, et al. Purely self-rectifying memristor-based passive crossbar array for artificial neural network accelerators.
[109] S. Ambrogio, et al. Equivalent-accuracy accelerated neural-network training using analogue memory.
[110] F.M. Bayat, et al. Implementation of multilayer perceptron network with highly uniform passive memristive crossbar circuits.
[111] R. Wang, et al. Implementing in-situ self-organizing maps with memristor crossbar arrays for data mining and optimization.
[112] H. Zhao, et al. Energy-efficient high-fidelity image reconstruction with memristor arrays for medical diagnosis.
[113] F. Ye, F. Kiani, Y. Huang, Q. Xia. Diffusive memristors with uniform and tunable relaxation time for spike generation in event-based pattern recognition.
[114] S. Kumar, R.S. Williams, Z. Wang. Third-order nanocircuit elements for neuromorphic engineering.
[115] Z. Wang, et al. Fully memristive neural networks for pattern classification with unsupervised learning.
[116] Z. Wu, et al. A habituation sensory nervous system with memristors.
[117] R. Yuan, et al. A neuromorphic physiological signal processing system based on VO2 memristor for next-generation human-machine interface.
[118] H. Wang, Y. Xu, R. Yang, X. Miao. A LIF neuron with adaptive firing frequency based on the GaSe memristor.
[119] J. Zhao, et al. Memristors based on NdNiO3 nanocrystals film as sensory neurons for neuromorphic computing.
[120] M. Song, et al. Self-compliant threshold switching devices with high on/off ratio by control of quantized conductance in Ag filaments.
[121] Q. Hua, H. Wu, B. Gao, H. Qian. Enhanced performance of Ag-filament threshold switching selector by rapid thermal processing.
[122] S. Agarwal, et al. Resistive memory device requirements for a neural algorithm accelerator.
[123] T. Kim, et al. Spiking neural network (SNN) with memristor synapses having non-linear weight update.
[124] M. Rao, et al. Thousands of conductance levels in memristors integrated on CMOS.
[125] N. Rathi, P. Panda, K. Roy. STDP-based pruning of connections and weight quantization in spiking neural networks for energy-efficient recognition.
[126] W.C. Shen, et al. High-K metal gate contact RRAM (CRRAM) in pure 28 nm CMOS logic process.
[127] R. Zhao, et al. Accelerating binarized convolutional neural networks with software-programmable FPGAs. 2017 International Symposium on Field Programmable Gate Arrays (FPGA), ACM (2017), pp. 15-24.
[128] I. Hubara, M. Courbariaux, D. Soudry, R. El-Yaniv, Y. Bengio. Binarized neural networks. Proceedings of the 30th International Conference on Neural Information Processing Systems (NIPS), MIT Press (2016), pp. 4114-4122.
[129] E. Nurvitadhi, et al. Accelerating binarized neural networks: comparison of FPGA, CPU, GPU, and ASIC. 2016 International Conference on Field-Programmable Technology (FPT), pp. 77-84.
[130] S. Liang, S. Yin, L. Liu, W. Luk, S. Wei. FP-BNN: binarized neural network on FPGA.
[131] X. Sun, et al. XNOR-RRAM: a scalable and parallel resistive synaptic architecture for binary neural networks.
[132] T. Simons, D.-J. Lee. A review of binarized neural networks.
[133] G.C. Qiao, et al. Direct training of hardware-friendly weight binarized spiking neural network with surrogate gradient learning towards spatio-temporal event-based dynamic data recognition.
[134] V.-T. Nguyen, Q.-K. T, R. Zhang, Y. Nakashima. XNOR-BSNN: in-memory computing model for deep binarized spiking neural network. 2021 International Conference on High Performance Big Data and Intelligent Systems (HPBD&IS), IEEE (2021), pp. 17-21.
[135] M. Abu Lebdeh, H. Abunahla, B. Mohammad, M. Al-Qutayri. An efficient heterogeneous memristive XNOR for in-memory computing.
[136] X.-Y. Wang, et al.. High-density memristor-CMOS ternary logic family.
[137] W. Zhang, et al. Edge learning using a fully integrated neuro-inspired memristor chip.
[138] A. Balaji, et al. Mapping spiking neural networks to neuromorphic hardware.
[139] A. Ankit, A. Sengupta, P. Panda, K. Roy. RESPARC: a reconfigurable and energy-efficient architecture with memristive crossbars for deep spiking neural networks. 2017 Design Automation Conference (DAC), ACM (2017), pp. 1-6.
[140] G. Boquet, et al. Offline training for memristor-based neural networks. 2020 28th European Signal Processing Conference (EUSIPCO), IEEE (2021), pp. 1547-1551.
[141] P. Wijesinghe, A. Ankit, A. Sengupta, K. Roy. An all-memristor deep spiking neural computing system: a step toward realizing the low-power stochastic brain.
[142] W. Maass. Lower bounds for the computational power of networks of spiking neurons.
[143] M. Beyeler, N.D. Dutt, J.L. Krichmar. Categorization and decision-making in a neurobiologically plausible spiking network using a STDP-like learning rule.
[144] A. Tavanaei, T. Masquelier, A.S. Maida. Acquisition of visual features through probabilistic spike-timing-dependent plasticity. 2016 International Joint Conference on Neural Networks (IJCNN), IEEE (2016), pp. 307-314.
[145] A. Tavanaei, M. Ghodrati, S.R. Kheradpisheh, T. Masquelier, A. Maida. Deep learning in spiking neural networks.
[146] L. Deng, et al. Rethinking the performance comparison between SNNs and ANNs.
[147] G. Gallego, et al. Event-based vision: a survey.
[148] M. Abadi, et al. TensorFlow: large-scale machine learning on heterogeneous distributed systems. Preprint.
[149] A. Paszke, et al. Automatic differentiation in PyTorch. 31st Conference on Neural Information Processing Systems (NIPS), MIT Press (2017), pp. 1-4.
[150] M. Sivan, et al. All WSe2 1T1R resistive RAM cell for future monolithic 3D embedded memory integration.
[151] E.J. Merced-Grafals, N. Dávila, N. Ge, R.S. Williams, J.P. Strachan. Repeatable, accurate, and high speed multi-level programming of memristor 1T1R arrays for power efficient analog computing applications.
[152] J. Song, J. Woo, A. Prakash, D. Lee, H. Hwang. Threshold selector with high selectivity and steep slope for cross-point memory array.
[153] Y. Wang, Z. Zhang, H. Li, L. Shi. Realizing bidirectional threshold switching in Ag/Ta2O5/Pt diffusive devices for selector applications.
[154] K.M. Kim, et al. Low-power, self-rectifying, and forming-free memristor with an asymmetric programing voltage for a high-density crossbar application.
[155] J.J. Yang, D.B. Strukov, D.R. Stewart. Memristive devices for computing.
[156] A. Grossi, et al. Fundamental variability limits of filament-based RRAM.
[157] A. Chen. Utilizing the variability of resistive random access memory to implement reconfigurable physical unclonable functions.
[158] F. Pan, S. Gao, C. Chen, C. Song, F. Zeng. Recent progress in resistive random access memories: materials, switching mechanisms, and performance.