Specialized Hardware for Neural Networks: A Systematic Review

Authors

DOI:

https://doi.org/10.59801/tyn.v3i2.224

Keywords:

Hardware, Artificial Intelligence, Neural Networks

Abstract

Neural networks have made great advances thanks to the evolution of hardware over the years; however, as neural networks grow in complexity, so do their hardware requirements, along with the time and cost of deploying them. This paper presents a systematic review and discusses the use of specialized hardware for neural networks and its real-world applications. It establishes that specialized hardware for artificial intelligence and deep learning offers significant advantages over training and deploying such applications on general-purpose hardware. The most widely used architectures are found to be FPGAs, ASICs, and ReRAM. The main drawbacks identified are limited bandwidth and data flow; the main advantages are parallelism, lower energy consumption, reduced execution time, fault tolerance, and cost optimization.


Downloads

Published

2023-09-12

How to Cite

Gómez Cordioli, D. (2023). Hardware Especializado para Redes Neuronales: Una Revisión Sistemática. Revista Boaciencia. Negocios E Tecnología, 3(2), 01–19. https://doi.org/10.59801/tyn.v3i2.224

Issue

Section

Articles