Modelling and Control of the Human Arm with a Fuzzy-Genetic Muscle Model Based on Reinforcement Learning: The Muscle Activation Method
International Clinical Neuroscience Journal,
Vol. 7 No. 3 (2020),
21 June 2020
Background: The central nervous system (CNS) optimizes arm movements to minimize some cost function. Simulating parts of the nervous system is one way of obtaining accurate information about the neurology and treatment of neuromuscular diseases. The main purpose of this paper is to model and control the human arm in a reaching movement based on reinforcement learning (RL) theory.
Methods: First, Zajac's muscle model is improved with a fuzzy system. Second, the proposed muscle model is applied to the six muscles that actuate a two-link arm moving in the horizontal plane. Third, the model parameters are estimated with a genetic algorithm (GA). Experimental data were recorded from normal subjects to assess the approach. Finally, a reinforcement learning algorithm is used to guide the arm in a reaching task.
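The first and third steps can be illustrated with a minimal sketch: Zajac-type first-order activation dynamics (da/dt = (u − a)/τ, with separate activation and deactivation time constants) driven by a step excitation, and a toy genetic algorithm that evolves the two time constants to match a target activation trace. The function names, parameter values, and the GA operators (mean crossover plus Gaussian mutation) are illustrative assumptions, not the paper's actual fuzzy-genetic formulation; the fuzzy adjustment of the muscle model is omitted here.

```python
import random

def activation_dynamics(u, a, tau_act=0.01, tau_deact=0.04):
    """Zajac-style first-order activation dynamics: da/dt = (u - a) / tau."""
    tau = tau_act if u > a else tau_deact
    return (u - a) / tau

def simulate(excitation, tau_act, tau_deact, dt=0.001):
    """Euler-integrate activation over an excitation signal."""
    a, trace = 0.0, []
    for u in excitation:
        a += dt * activation_dynamics(u, a, tau_act, tau_deact)
        trace.append(a)
    return trace

def fit_time_constants(excitation, target, generations=40, pop_size=20, seed=0):
    """Toy GA: evolve (tau_act, tau_deact) to match a target activation trace."""
    rng = random.Random(seed)

    def cost(p):
        sim = simulate(excitation, *p)
        return sum((s - t) ** 2 for s, t in zip(sim, target))

    pop = [(rng.uniform(0.005, 0.05), rng.uniform(0.01, 0.1))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)                      # fittest (lowest cost) first
        elite = pop[: pop_size // 2]
        children = []
        for _ in range(pop_size - len(elite)):
            mom, dad = rng.sample(elite, 2)     # mean crossover + mutation
            children.append(tuple(max(1e-3, 0.5 * (m + d) + rng.gauss(0, 0.002))
                                  for m, d in zip(mom, dad)))
        pop = elite + children
    return min(pop, key=cost)

# Step excitation: muscle "on" for 0.3 s, then "off" for 0.3 s (dt = 1 ms).
excitation = [1.0] * 300 + [0.0] * 300
target = simulate(excitation, 0.015, 0.06)      # stands in for measured data
tau_act, tau_deact = fit_time_constants(excitation, target)
```

In this sketch the "experimental data" are simply a simulation with known time constants, so the GA's job is to recover them; in the paper the targets would come from the recorded EMG-derived activations.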
Results: The results show that: 1) the proposed system is temporally similar to real arm movement; 2) the reinforcement learning algorithm can generate the motor commands obtained from EMGs; 3) the activation function obtained from the system is similar to the activation function of the real data, which may support the possibility of reinforcement learning in the central nervous system (basal ganglia). Finally, the virtual reality environment of MATLAB is used to provide a graphical and effective representation of the arm model.
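The idea of an RL agent learning to issue motor commands for a reaching task can be sketched with tabular Q-learning on a discretized workspace. This is a deliberately simplified stand-in, assuming a grid world with four discrete "motor commands" instead of the paper's six-muscle continuous arm; the function names and reward values are illustrative.

```python
import random

def train_reaching_policy(grid=5, goal=(4, 4), episodes=500, alpha=0.5,
                          gamma=0.9, eps=0.2, seed=1):
    """Tabular Q-learning: learn to drive the 'hand' to a reaching target."""
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]   # 4 discrete motor commands
    rng = random.Random(seed)
    Q = {(x, y): [0.0] * 4 for x in range(grid) for y in range(grid)}
    for _ in range(episodes):
        s = (0, 0)
        while s != goal:
            # epsilon-greedy action selection
            a = (rng.randrange(4) if rng.random() < eps
                 else max(range(4), key=lambda i: Q[s][i]))
            s2 = (min(grid - 1, max(0, s[0] + moves[a][0])),
                  min(grid - 1, max(0, s[1] + moves[a][1])))
            r = 1.0 if s2 == goal else -0.01      # reward only at the target
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

def greedy_path(Q, grid=5, goal=(4, 4)):
    """Roll out the learned greedy policy from the start position."""
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    s, path = (0, 0), [(0, 0)]
    for _ in range(4 * grid):                     # cap steps to avoid loops
        if s == goal:
            break
        a = max(range(4), key=lambda i: Q[s][i])
        s = (min(grid - 1, max(0, s[0] + moves[a][0])),
             min(grid - 1, max(0, s[1] + moves[a][1])))
        path.append(s)
    return path

Q = train_reaching_policy()
path = greedy_path(Q)
```

After training, the greedy rollout reaches the target; in the paper the analogous output is a sequence of muscle activations rather than grid moves, compared against EMG-derived activations.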
Conclusions: Since the reinforcement learning method is representative of the brain's control function, it exhibits desirable features such as a shorter settling time, no peak overshoot, and robustness.
- Musculoskeletal model
- Reinforcement learning
- Upper limb
- Hill-type muscle model
- Virtual reality