Asynchronous recurrent neural networks with block splitting for distributed partitioned optimization

Jingxin Liu, Jun Peng, Amin Mansoori, Chaoran Zhan, Ye Huang, Huanbin Wang

Article ID: 8499
Vol 1, Issue 1, 2024


Abstract


This paper presents a class of novel recurrent neural network approaches for distributed partitioned optimization, in which the objective function is separable, strongly convex, and possibly nonsmooth, and the computation of each part of the solution is assigned to a vertex for execution. In the proposed algorithmic framework, a block splitting method partitions the solution among the vertices according to the separable structure of the problem, so that each neuron holds only a local memory of its own block of the decision variable rather than a copy of the entire decision variable. Each neuron is equipped with a local timer; when a neuron is triggered by its own timer and a neighbor's timer, it becomes activated and then updates and transmits its variable information. This time-driven asynchronous evolution strategy saves computational resources. The proposed algorithm is distributed and scalable, since the computation at a single neuron does not depend on the size of the vertex network, and its convergence is guaranteed.
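
To make the mechanism described above concrete, the following is a minimal, illustrative Python sketch of a timer-driven, block-split update. It is our own discrete-time approximation, not the paper's continuous-time neural dynamics: it assumes an objective of the form f(x) = sum_i f_i(x_i) + g_i(x_i) with a smooth f_i (here, least squares), a nonsmooth g_i (here, an l1 term handled by its proximal operator), a quadratic coupling between neighboring blocks, and independent exponential timers standing in for the local timers. The ring topology, the coupling strength rho, and all names are illustrative assumptions.

    import numpy as np

    # Illustrative setup (our assumption, not the paper's model): vertex i owns
    # one block x_i of the decision variable and minimizes a smooth least-squares
    # term plus a nonsmooth l1 term, coupled to neighbor blocks quadratically.
    rng = np.random.default_rng(0)
    n_nodes, block_dim = 4, 3
    A = [rng.standard_normal((5, block_dim)) for _ in range(n_nodes)]
    b = [rng.standard_normal(5) for _ in range(n_nodes)]
    lam, rho, step = 0.1, 0.5, 0.05

    # Ring topology: each vertex communicates only with its two neighbors.
    neighbors = {i: [(i - 1) % n_nodes, (i + 1) % n_nodes] for i in range(n_nodes)}

    # Local memory: each vertex stores its own block plus cached copies of its
    # neighbors' blocks, never the entire decision variable.
    x = [np.zeros(block_dim) for _ in range(n_nodes)]
    cache = {i: {j: np.zeros(block_dim) for j in neighbors[i]} for i in range(n_nodes)}

    def soft_threshold(v, t):
        # Proximal operator of t * ||.||_1, handling the nonsmooth term.
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    # Independent exponential ("Poisson clock") timers model the asynchrony:
    # the neuron whose timer fires next is the one that activates.
    timers = rng.exponential(1.0, size=n_nodes)

    for _ in range(2000):
        i = int(np.argmin(timers))           # neuron whose local timer fires
        timers[i] += rng.exponential(1.0)    # reset only that neuron's timer
        coupling = sum(x[i] - cache[i][j] for j in neighbors[i])
        grad = A[i].T @ (A[i] @ x[i] - b[i]) + rho * coupling
        x[i] = soft_threshold(x[i] - step * grad, step * lam)  # forward-backward step
        for j in neighbors[i]:               # transmit the updated block to neighbors
            cache[j][i] = x[i].copy()

    print("final block norms:", [round(float(np.linalg.norm(xi)), 3) for xi in x])

In this sketch, each activation touches only one block and its cached neighbor copies, so the per-neuron work is independent of the network size, mirroring the scalability claim in the abstract.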


Keywords


recurrent neural network; distributed partitioned optimization; block splitting; asynchronous evolution; local memory




DOI: https://doi.org/10.24294/pnmai8499



This site is licensed under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/).