PRE2017 3 Groep7
=== The human basics ===
==== The human hand ====
The human hand can be separated into three main parts: the forearm, the wrist and the fingers. For clarity we will look at each part separately.
The thumb is the only digit that differs slightly from this: its first joint allows one more degree of freedom, and an additional muscle in the hand enables movement in this direction.
The wrist consists of several bones that together function as a single condyloid joint. This allows it to flex, extend and deviate to both sides. The range of motion is about 60° for flexion and extension and 20-30° of deviation to each side.<ref>D. C. Boone, S. P. Azen. Normal Range of Motion of Joints in Male Subjects. ''The Journal of Bone and Joint Surgery'', 1979.</ref>
The forearm acts like a pivot joint, using its two bones to rotate the wrist. This gives the wrist about 180° of rotation. The forearm is also where almost all of the muscles controlling the hand are situated. This allows the muscles to become larger, and therefore stronger, than if they were confined within the hand.
==== Movement in the brain ====
Generating movement is very difficult. It starts with the brain, in the primary motor cortex. This has to decide not only which muscles to contract for the movement, but also how much power is needed. For example, to pick up a glass the brain has to detect where the hand is relative to the glass, so it knows how the hand should move towards it, and it has to estimate the weight of the glass in order to know the force needed to pick it up. If a limb needs more control, more space is reserved for it in the brain. Hands and wrists are quite complex and therefore need a lot of control, so they are largely represented in the motor cortex. When a signal is produced in the brain, it goes through the spinal cord to the muscles. Signals in the left half of the brain go to the right side of the body.<ref>S. Schwerin. The Anatomy of Movement. ''Brain Connection''. Retrieved 31 March 2018, from https://brainconnection.brainhq.com/2013/03/05/the-anatomy-of-movement/.</ref>
=== State of the Art ===
==== Robot hands ====
In prosthetic and robot arms, the hand and wrist are difficult to develop. Many prosthetics do not even have wrist motion, and their fingers cannot move separately, since these are difficult to mimic. Below, three state-of-the-art designs are discussed: the SoftHand Pro-H, a dexterous hand, and the wrist mechanism of the humanoid robot SARA.
==== Control of prosthetics ====
Several ways to control prosthetics have been used in the past. Some were mechanical, but lately electromyography and electroencephalography have also been used. Prosthetic arms working with the spinal cord have been developed, so the wearer only has to think about a movement to perform it. Many existing prosthetics work on the twitching of remaining muscles. Other research looks at gripping intuitively, so the user can grip something without consciously thinking about it. Working with the neural network could also allow sensory feedback, so the user can have ‘a sense of touch’. This way, the applied pressure can be better regulated, and it gives the user their feeling back.<ref>J. Edwards. Signal Processing Powers Next-Generation Prosthetics. ''IEEE Signal Processing Magazine'', 2018, p. 13-16</ref>
<ref>A. Delorme, S. Makeig. EEGLAB: an open source toolbox for analysis of single-trial EEG dynamics including independent component analysis. ''Journal of Neuroscience Methods'', 2004, p. 9-21</ref>
Since EEGs are measured by placing electrodes on the scalp, the method works the same for many paralysed people. Mostly, these people are paralysed because of a fault in the spinal cord. Movement already starts in the brain, so when controlling a prosthetic with EEG data, there is no blockade in the spine to overcome. Hence, paralysed people will be able to use it as well.
==== Feedback control for prosthetics research ====
Sensors for the measurement of applied force can be used in prosthetic hands to control the strength of the hand when grabbing objects. This technology is already implemented in 'soft' grippers in the form of a pneumatic soft sensor (PSS), consisting of a silicone body and a flexible pressure sensor, and has been tested successfully on rubber balls.<ref>H. Yang, Y. Chen, Y. Sun, L. Hao. A novel pneumatic soft sensor for measuring contact force and curvature of a soft gripper. ''Sensors and Actuators A: Physical'', 2017, p. 318-327</ref><ref>Y. Zhu, J. Li, H. Cai, Y. Wu, H. Ding, N. Pan, X. Wang. Highly sensitive and skin-like pressure sensor based on asymmetric double-layered structures of reduced graphite oxide. ''Sensors and Actuators B: Chemical'', 2018, p. 1262-1267</ref>
Force feedback turns out to be difficult for haptic devices, since they have limited force accuracy. New research investigates a variable motion mapping method, in which the motion mapping coefficient is regulated according to the object's stiffness. This method could lead to identifying objects without having to see them. However, the stiffness of the object has to be estimated beforehand. It could still be more efficient than current methods.<ref>L. Liu, Y. Zhang, G. Liu, W. Xu. Variable motion mapping to enhance stiffness discrimination and identification in robot hand teleoperation. ''Robotics and Computer-Integrated Manufacturing 51'', 2018, p. 202-208</ref>
=== Machine Learning ===
A machine learning algorithm is an algorithm that trains itself on patterns in the data it is given. There are different ways to train such an algorithm: supervised and unsupervised learning. In supervised learning, the algorithm is trained using a data set for which the result is already known. This means that the network can reach a result, check its own performance and optimize for this training data. This way it can become very good at predicting the results that were given. However, it will not recognise any results that were not provided in the training data.
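As a minimal illustration of supervised learning (this is a sketch for intuition, not the algorithm used in this project), consider a one-nearest-neighbour classifier: it is "trained" simply by storing labelled examples, and any new input is forced onto one of the known labels.

```python
# Minimal illustration of supervised learning: a 1-nearest-neighbour
# classifier trained on a labelled data set. New inputs are predicted
# from the known (input, label) pairs; inputs unlike anything in the
# training data will still be mapped onto a known label.

def predict(training_set, point):
    """Return the label of the training example closest to `point`."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    best_input, best_label = min(training_set, key=lambda ex: dist(ex[0], point))
    return best_label

# Labelled training data: inputs with known outcomes.
training = [((0.0, 0.0), "left"), ((1.0, 0.0), "right"), ((0.5, 1.0), "up")]
print(predict(training, (0.9, 0.1)))  # closest to (1.0, 0.0) -> "right"
```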
==== The random forest algorithm ====
The random forest algorithm is an ensemble approach. This means that the data is split up into many different smaller data sets. The algorithm tries to optimize each of these n sets. This way there are many small optimized splits. A vector addition of these results then forms the final result. Note that since n is very large, most of the data will be equally well represented.
This is better than just running the algorithm on the full data set, because there is always some noise and variance in the given data. If the full data set is used for training, the algorithm trains on all of the data, including the specific outliers of that data set, which leads to overfitting on these precise points.
In more detail, the data is first split into many different parts. Then a single data split is created by adding some of these parts together. On this set a single tree is trained. A tree splits the data in such a way that data with the same outcome falls into the same categories. This is done using the Gini index of the tree, which tells you how much data with the same outcome is in the same category. The tree continues to make split points until the data is grouped optimally, within the constraints of a minimal group size and a maximal tree depth. This way a decision tree is made; given a data point, it can use this decision tree to predict the outcome.
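The Gini index for a candidate split can be sketched as follows (a minimal hypothetical implementation; a given library may weight groups slightly differently). A pure split, where each group contains a single class, scores 0, while a 50/50 mix scores 0.5:

```python
def gini_index(groups, classes):
    """Weighted Gini impurity of a candidate split.

    `groups` is a list of groups of samples, each sample ending in its
    class label; a pure group (one class only) contributes 0.
    """
    n_total = sum(len(group) for group in groups)
    gini = 0.0
    for group in groups:
        if not group:
            continue
        score = 0.0
        for cls in classes:
            p = [row[-1] for row in group].count(cls) / len(group)
            score += p * p
        gini += (1.0 - score) * (len(group) / n_total)
    return gini

# A perfect split (each group pure) scores 0; a 50/50 mix scores 0.5.
print(gini_index([[[1, 'L'], [2, 'L']], [[3, 'R'], [4, 'R']]], ['L', 'R']))  # 0.0
print(gini_index([[[1, 'L'], [2, 'R']], [[3, 'L'], [4, 'R']]], ['L', 'R']))  # 0.5
```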
If one now feeds a data point to this trained forest, the random forest algorithm asks each tree for its outcome and takes the most represented answer.
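This final voting step can be sketched like so (a hypothetical helper, assuming each trained tree is callable and returns a class label; the stub "trees" below stand in for real decision trees):

```python
from collections import Counter

def forest_predict(trees, data_point):
    """Ask every tree for its outcome and return the most represented answer."""
    votes = [tree(data_point) for tree in trees]
    return Counter(votes).most_common(1)[0][0]

# Three stub "trees" standing in for trained decision trees.
trees = [lambda x: "left", lambda x: "left", lambda x: "right"]
print(forest_predict(trees, None))  # majority vote -> "left"
```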
==== Machine learning for prostheses ====
Machine learning has been studied as a way to make prostheses react to electric pulses in the brain. Research on this subject has been done with rhesus monkeys. The monkeys were first trained to perform a 3D reaching task and later implanted with chronic electrode arrays. After that, Principal Component Analysis (PCA) was used to reduce the number of neurons used in the analysis. Then a machine learning algorithm was employed to map the selected neurons to the direction of movement. The resulting model was tested: it could correctly predict when the hand was in motion 81% of the time, but the motion ended less than 3 cm from the correct endpoint in less than 50% of the cases, so there is room for improvement in this algorithm.<ref>R. E. Isaacs et al. Work toward real-time control of a cortical neural prosthesis. ''Ncbi.nlm.nih.gov''. Retrieved 18 February 2018, from https://www.ncbi.nlm.nih.gov/pubmed/10896185</ref>
There has been extensive research into how cells in the motor cortex of the brain are related to the direction of movement of a hand. This research shows that cells have a ‘preferred direction’ and depend only on the movement of the hand, not its position.<ref>A. B. Schwartz, A. P. Georgopoulos. Relations between the amplitude of 2-dimensional arm movements and single cell discharge in primate motor cortex. ''Soc. Neurosci. Abstr.'' 13: 244, 1987.</ref> These directions can be determined by measuring the discharge rate of these cells together with the actual activity of the hand. The total movement of the hand can then be predicted using a ‘population vector’: a sum over all cells of each cell's preferred direction times a weighting factor based on its discharge rate. This vector lies within the 95% directional cone more than 80% of the time using 475 cells, and more cells lead to higher accuracy.<ref>A. B. Schwartz, R. E. Kettner, A. P. Georgopoulos. Primate motor cortex and free arm movements to visual targets in three-dimensional space. I. Relations between single cell discharge and direction of movement. ''J. Neurosci.'' 8: 2913-2927, 1988.</ref> Unfortunately this will not be very useful for our further research, since we will not have the option to implant devices on human brains to measure discharge rates, for obvious reasons.
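The population vector idea can be sketched as a weighted sum of preferred-direction vectors (the numbers below are made up for illustration; in the actual research the weights are derived from measured discharge rates):

```python
def population_vector(cells):
    """Sum of each cell's preferred direction scaled by a weight derived
    from its discharge rate; the resulting vector predicts the movement
    direction of the hand."""
    x = sum(w * dx for (dx, dy), w in cells)
    y = sum(w * dy for (dx, dy), w in cells)
    return (x, y)

# Three hypothetical cells: a 2-D preferred direction and a weight each.
cells = [((1.0, 0.0), 0.75), ((0.0, 1.0), 0.5), ((-1.0, 0.0), 0.25)]
print(population_vector(cells))  # (0.5, 0.5): movement up and to the right
```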
== EEG-data ==
At first, the idea was to get proper EEG equipment to record data of the desired wrist movements. One or more of the group members would be the test person, and if possible Arie Rommers as well. The plan was to compare the results: the group members still have their hands and can perform the movements, but Rommers does not have his arm anymore and has not performed any wrist movement for a long time. This could cause a difference in the way the brain performs the movement. If the prosthesis were ever actually used on somebody, this could be important information.
However, it quickly turned out that getting the equipment was difficult. The university itself works with such equipment for epilepsy research, but did not answer when contacted. The hospital said that only academic hospitals lend their equipment to externals. Therefore, the Radboud University in Nijmegen was contacted, but using the state-of-the-art EEG equipment of the Donders Institute there was extremely expensive and not possible in the time available for the project. During one of the tutor meetings, this problem was discussed and the advice was to look online for available data. A site was quickly found with many external links to free EEG data.<ref>EEG / ERP data available for free public download. Retrieved 15 March 2018, from https://sccn.ucsd.edu/~arno/fam2data/publicly_available_EEG_data.html</ref> Sadly, there was barely any data of wrist movements, and what there was did not suit this project. Instead, data was chosen from the BCI Competition III.<ref>G. Schalk, D. Krusienski, J.R. Wolpaw, B. Blankertz, G. Dornhege, K. Müller, A. Schlögl, B. Graimann, G. Pfurtscheller, S. Chiappa, J.R. Millán, M. Schröder, T. Hinterberger, T. Navin Lal, G. Widman and N. Birbaumer. BCI Competition III. Retrieved 19 March 2018, from http://www.bbci.de/competition/iii/</ref> This data does not contain specific movements, but movements of specific limbs. Since the wrist has four directions to move in, a data set was chosen with four different limbs to move: the left hand, right hand, foot and tongue. The test was done with three people, but only the data of test person 1 was used to train the algorithm. It is not ideal to lack data of wrist movement specifically, but it is enough to prove that the idea of a brain-controlled robotic prosthetic might work.
When looking for the EEG data online, a Matlab toolbox to process EEG data was found as well, called EEGLAB (see Literature study, Control of prosthetics). It was expected that this might help with the algorithm, but it turned out not to be necessary. However, an extension of this toolbox was also found that predicts which limb was used from given data. Since the idea of this project was to predict the wrist movement with a machine learning algorithm, this extension could help with recognising patterns in the data. It was therefore looked into briefly, but not too extensively, since it is not desirable to copy it.
Had there been more time for this project, it might have been possible to obtain the desired data and train the algorithm on that. However, recognising specific movements of the wrist could be more difficult than recognising certain limbs. The control of each limb is placed in a certain region of the motor cortex, and within the area of a limb, its movements are generated. Since all wrist movement is located in the wrist section of the brain, the signals will be close together and therefore difficult for the algorithm to distinguish. The electrodes would have to be placed close together and can therefore pick up noise from each other. It is however shown that the algorithm can actually distinguish the signals; more about that in the next chapter.
== The Final Algorithm ==
The final algorithm we used was a random forest algorithm implemented in Python. We designed the algorithm so it could be trained on the EEG data that we found. It was implemented in such a way that it gives a discrete result, such as left, right, up or down, because those were the only things known about the data.
The algorithm was built in two Python files. The first file creates the random forest and exports it to a CSV file. It can be found here: [[Random Forest Builder]]
The second part uses the random forest created by the first part to make predictions, and it can be found here: [[Random Forest User]] | |||
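The split between the two files can be illustrated with a sketch of the export step. This is illustrative only: it assumes a hypothetical flattened-node layout (node id, feature index, threshold, children, leaf label) and is not the actual code from the linked pages.

```python
import csv, io

# Hypothetical sketch: flatten a trained tree into CSV rows so that a
# second script can read the file back and rebuild the forest to make
# predictions. Each row is (node, feature, threshold, left, right, label);
# leaves use feature -1 and carry the predicted label.
def export_tree(tree_rows, fileobj):
    writer = csv.writer(fileobj)
    writer.writerow(["node", "feature", "threshold", "left", "right", "label"])
    for row in tree_rows:
        writer.writerow(row)

buf = io.StringIO()
export_tree([[0, 2, 0.5, 1, 2, ""], [1, -1, "", -1, -1, "left"]], buf)
print(buf.getvalue().splitlines()[0])  # node,feature,threshold,left,right,label
```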
== Functional Model ==
=== Physical Model ===
Since the model is a prosthesis, we figured it would be best represented by a physical model, which would show real-world problems that could not be accurately represented in a digital model. The Dynamixel seemed ideal, since it is an all-in-one servo: its small size means the model would not be too large, and the driver, controller and reduction gear are all included in the Dynamixel.
The CST-lab had two of these Dynamixels that we could borrow for the duration of our project. These Dynamixel RX-28 servos were mounted on a few plastic plates with two different kinds of joints. Our project would also need these joints, so the idea was to use this configuration with only some minor changes of our own.
However, one thing was missing: a way to connect the Dynamixel to the PC. The engineering department did not have a connector that was not in use, so a new Titan USB-COMI-M was ordered. This has the same functionality as the original USB2Dynamixel, but additionally supports RS-232 next to the RS-485 that the Dynamixel uses. It arrived after several days.
The connection between the Dynamixel and the laptop was made, and using pre-made Python code we tried to address it. This prompted a UTF-8 decoding error on the first byte. After searching the web for a fix, the answer seemed to be to change the encoding to UTF-16, but after changing this it still gave the exact same error. Since this had solved the problem for others but not for us, we tried to find other differences between our code and theirs that would explain the problem. This however bore no fruit and did not result in a fix.
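A plausible explanation for the decoding error is that the Dynamixel speaks a binary protocol, not text: status packets in Dynamixel protocol 1.0 start with two 0xFF header bytes, which are not valid UTF-8, so decoding the serial stream as text fails on the very first byte. A sketch of handling the reply as raw bytes instead (framing per protocol 1.0; the example packet contents are made up, and checksum verification is omitted):

```python
def parse_status_packet(packet: bytes):
    """Extract servo ID, error byte and parameters from a Dynamixel
    protocol 1.0 status packet: 0xFF 0xFF ID LENGTH ERROR PARAMS CHECKSUM."""
    if packet[:2] != b"\xff\xff":
        raise ValueError("missing 0xFF 0xFF header")
    servo_id, length, error = packet[2], packet[3], packet[4]
    params = packet[5:5 + length - 2]  # LENGTH counts ERROR + PARAMS + CHECKSUM
    return servo_id, error, params

# Hypothetical reply from servo ID 1: no error, one parameter byte (0x20).
reply = bytes([0xFF, 0xFF, 0x01, 0x03, 0x00, 0x20, 0xDB])
print(parse_status_packet(reply))  # (1, 0, b' ')
```

Note that `reply.decode('utf-8')` would raise a `UnicodeDecodeError` on the first 0xFF byte, which matches the error we saw.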
The Dynamixel needed power, which was provided by a DC power supply connected to the Dynamixel with two cables. After consulting the internet we could not find a single conclusive way to do this, so we went back to where the Dynamixels came from and asked around there. This resulted in a configuration in which the power supply was connected to the same Dynamixel that was also connected to the laptop, but even this was not certain to be correct, because nobody there at that time had actually worked with the Dynamixel.
This made us suspect that the fault in the software (Python) might be due to a fault in the hardware (Dynamixel). We tested powering the Dynamixels on their own and found that the first Dynamixel, to which the power and laptop were connected, turned on normally, while the second one received power but signalled an error: its LED turned on and stayed on. We disconnected the second Dynamixel from the first and focused on the first one, which was reconnected to the power and the laptop and turned back on.
The problem therefore seemed to be in the code, so a MATLAB version was tried. This gave the exact same error plus a new C-library error. Since this was no improvement, and the machine learning algorithm was also written in Python, we switched back.
The code was checked part by part by inserting breaks to see how far it ran. This located the problem: the get-character part of the code. The problem was not the code itself, however, but Spyder and the console it uses. The solution was running the script through an external console, which solved the problems with the code. Now everything seemed to work: it would answer queries for the position and the ID of the current Dynamixel. But when given the command to move, it did not do anything. So other commands were tried, changing the Dynamixel ID and the perceived starting position. When doing this, it gave back our original values. The problem turned out to be the way it was connected to the laptop.
The Titan USB-COMI-M was connected according to the RS-485 protocol, but the echo setting turned out to be the problem: it simply returned the values that were put in, and the Dynamixel was not even involved in most of the commands we had given it. After correcting this, the commands were given again and the ID could be changed. This seemed to work, but movement was still a problem: not even the manufacturer's own Dynamixel software could get it to move.
Since the last problem was caused by the COMI-M, this problem might also be caused by it. The COMI-M has a set of jumpers that can be changed according to the protocol used. Almost every configuration that made sense was tried, with no progress.
Harrie van de Loo was contacted again, and this time there were people present who had worked with the Dynamixel before. However, they had only used the Dynamixel in combination with Linux. He guided us to a ROS-based tutorial<ref>Dynamixel Tutorial based in ROS. Retrieved 20 March 2018, from http://wiki.ros.org/dynamixel_controllers/Tutorials</ref> for the Dynamixel, but this would also only work on Linux. This seemed to be the last resort to get the Dynamixel to work within the time limit of the course.
One of the group members had a USB drive from which Linux could be booted. This was done and seemed to go well, until we realised that Linux needed an update to fully follow the tutorial. The update was downloaded, but then the USB drive crashed. At that moment we opted for a digital model, since that was the only thing that could still be completed within the time limit.
=== Virtual Model ===
After concluding that the physical model was probably not going to operate, a computer aided design (CAD) model was made representing the human wrist. Siemens NX10® was used to design and control the model. The mechatronic concept designer (MCD) is a toolbox within NX which is used to make dynamic models for machines and robotics. The first step was to create the assembly consisting of a hand and an arm connected by a condylar joint. Fingers and a thumb have also been added to aid the visual impression of the hand. Next, the model has been placed in the MCD environment within NX. To constrain the hand movement, a ball joint with additional angular constraints has been implemented which closely resembles the human wrist. | |||
Furthermore, to be able to control the motions of the hand, the algorithm has to communicate its output from the Python environment to the MCD environment. The output of the interpreter can be any number between zero and four, depending on the direction the algorithm predicts the user desires to move towards. The mechatronic concept designer toolbox has a built-in signal mapping tool to communicate with an OPC server. OPC stands for OLE for Process Control, a communication standard for industrial automation systems. This posed a new challenge: setting up a server on which variables can be stored effectively, without having to rent one.
To solve this issue, an online search revealed a company called Matrikon, which offers free server-hosting software for simulation purposes. This made it possible to host a server on a local network to sustain communication between the algorithm and the model. It uses labels to store variables at a maintained update frequency of 1000 Hz.
The next step was to write output from Python to the OPC server. To this end, a package called OpenOPC is used to set up commands for writing data to the server. However, this package requires two other packages: Pyro (Python Remote Objects) and Pywin64. In addition, it requires a different Python version (2.7 instead of the 3.5 used to create the interpreter). After satisfying these conditions, the OpenOPC commands were used to successfully transmit commands from the algorithm to the functional model. A demonstration of this whole mechanism can be seen in the next chapter.
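The write step can be sketched as follows. This is a minimal sketch, not the project's actual code: it assumes OpenOPC's `client()`/`connect()`/`write()` interface and a locally hosted Matrikon simulation server, and the tag name `FuMo.Direction` and the 0-4 value encoding are hypothetical.

```python
def direction_value(prediction):
    """Validate the interpreter output before writing it to the server.

    0 = rest, 1-4 = the four wrist directions (hypothetical encoding).
    """
    if prediction not in range(5):
        raise ValueError("interpreter output must be 0-4")
    return prediction

def send_to_model(opc, prediction, tag="FuMo.Direction"):
    # `opc` is an already-connected OpenOPC client, e.g. (under Python 2.7):
    #   import OpenOPC
    #   opc = OpenOPC.client()
    #   opc.connect('Matrikon.OPC.Simulation')
    # write() stores the value under the label, where MCD's signal
    # mapping picks it up at the server's update frequency.
    opc.write((tag, direction_value(prediction)))

print(direction_value(3))  # 3
```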
== Demonstration ==
To demonstrate the communication between the algorithm interpreting EEGs and the functional model executing the algorithm's output commands, a video has been made showcasing this behaviour in real time. The EEG data to interpret was the dataset of BCI Competition III as described earlier. The model is shown inside the MCD environment of NX10, and the command prompt on the left shows the execution of the interpreter, which reads the dataset in real time. The top-left box shows the labels created in the OPC server to store the algorithm output.
[[File:FuMo_Demostration_Real_Time.gif]] | |||
The frequency at which the algorithm outputs new signals can be lowered to give a clearer representation of the functionality of the components, as shown below:
[[File:FuMo_Demonstration_discrete.gif]] | |||
== Coaching questions ==
Latest revision as of 22:27, 2 April 2018
== Introduction ==
For the course Robots Everywhere, the goal is to do a project about robots. The groups can choose any topic, as long as it contains robots and can be finished in seven weeks. The course started by brainstorming on this topic, after which Group 7 decided to look at robot prosthetics. Since this is quite a broad topic, it was necessary to narrow it down. The group wanted to do something challenging and new, and therefore decided to look at wrists, because there are no robotic wrist prosthetics on the market yet. Since fine motor skills come from the wrists and fingers, this could be a valuable addition to current prosthetics. The idea was to control these prosthetics directly with the brain, so paralysed people can use them as well. To turn the EEG data of the brain into a movement, a machine learning algorithm was used.
On this wiki, the whole process of the project can be found. First, the problem statement and all objectives will be discussed, then the USE-aspects, deliverables and approach of the project, including the planning. After that, there is a literature study, to know what is already on the market. Finally, it will be discussed how the group worked towards a result and what this result looked like. This final part will contain three chapters: the EEG-data, the machine learning algorithm and the functional model.
Problem Statement and Objectives
In the U.S., it is estimated that one in every 190 people has suffered from limb loss.[1] This shows the importance of, and opportunity for, prosthetics. For all these people, part of their old functions can be regained with the use of a prosthetic limb. As technology evolves, prosthetics are able to perform more functions and are easier to handle. Where old prosthetics were mostly body-powered, they are now myoelectric or even robotic. These robotic prosthetics should eventually be able to mimic all functions of the lost limb.
One of the challenges in the field is the hand, and especially the wrist. Together with the forearm, this joint produces a large range of motion. Making a joint with as much movement as the wrist, and the same power, has proven to be a difficult task. How can one design a fully flexible wrist while also giving it enough strength to lift objects? That will be one of the main questions during this project.
Another big challenge is how to control the prosthetic. Some of the newer, robotic limbs can actually work with the nervous system or the brain. This still proves difficult, since the user needs to learn to use the prosthetic in a natural way, and some limbs, like the arm, have many degrees of freedom to take into account. However, machine learning might provide a way for the prosthetic and the user to meet halfway: the user adjusts to the prosthetic, and the prosthetic learns how the user behaves. How do these algorithms work and what could they mean for the industry? By making our own algorithm, we will try to find out whether this might be a breakthrough for prosthetic use.
USE
It is important to look at the user aspects of wrist prostheses. To this end, the end users, society and enterprise will be considered.
Users
The users of a hand prosthesis are people who have lost their hand or arm entirely. Their way of living would be improved by giving them the possibility to use their prosthesis to pick up objects, which would also help them with other basic uses of the hand. Further development could lead to even more possibilities, such as catching objects or typing. This would greatly improve the quality of life of these people.
The preference of the users would of course be a fully functioning arm and hand, perhaps even with additional functions. The question is whether this is desirable: if a prosthetic hand is more useful than a human one, it could incentivise people to get a prosthetic hand even if their own hand still functions.
Society
A hand prosthesis will allow people who lose their arm or hand to rehabilitate more easily into their normal lives and jobs. This can decrease the number of unemployed people, so they no longer have to depend on a government allowance. This can be beneficial for the economy.
However, easier and shorter rehabilitation could cost nurses their jobs. On the other hand, there is a lot of pressure on people in the health care sector right now, so it can also be seen as a reduction of work pressure.
Furthermore, if the prosthetics become too advanced, people might be inclined to cut off their own arm. This should be prevented, unless humanity turns into cyborgs in the future.
Moreover, the prosthetics could cause discrimination if they become so advanced that companies would rather hire a person with a prosthetic than one without. They might be able to work faster and more precisely than people who still have their own arm.
Enterprise
The enterprise will be able to sell prostheses. Developing this new technology will also create employment opportunities.
Deliverables and Approach
The results of this project will be presented in the form of a 3D Functional Model (FuMo) together with an algorithm. The model will consist of a design for a prosthetic hand and wrist in a 3D environment (probably NX10). Since the design is virtual for now, the two deliverables cannot be combined into a single result; the algorithm's functionality will therefore be proven in another way. If in the next couple of weeks it is concluded that a physical model can be made within the given time frame, that option will be considered.
Planning
{| class="wikitable"
|-
! Name !! Week 1 !! Week 2 !! Week 3 !! Week 4
|- style="vertical-align: top;"
! style="font-style: italic;" | Eva
| Search for useful sources and make a planning; Do research on robot hands || Relate research on human hands to the research on robotic hands; Do research on EEGs; Make an appointment with Spierings || Do research on EEGs and wrist prosthetics || Contact companies for EEG results/equipment; Test dynamixel
|- style="vertical-align: top;"
! style="font-style: italic;" | Jurre
| Finding sources; do research on human hands || Relate research on human hands to the research on robotic hands; Contact companies for EEG results/equipment || Do research on EEGs; Contact companies for EEG results/equipment || Contact companies for EEG results/equipment; Test dynamixel
|- style="vertical-align: top;"
! style="font-style: italic;" | Karsten
| Write about deliverables; Do research on control mechanism for prosthetic hand || Start on the design (FuMo); Contact Rommers || Get dynamixels and do research on them; Do research on RS485; Order USB2Dynamixel || Contact companies for EEG results/equipment; Test dynamixel
|- style="vertical-align: top;"
! style="font-style: italic;" | Steven
| Write about users; Do research on machine learning for prostheses || Start on machine learning algorithm || Work on machine learning algorithm || Work on machine learning algorithm
|- style="vertical-align: top;"
! style="font-style: italic;" | Thijs
| Do research on machine learning for prostheses || Start on machine learning algorithm || Work on machine learning algorithm || Work on machine learning algorithm
|}

{| class="wikitable"
|-
! Name !! Week 5 !! Week 6 !! Week 7 !! Week 8
|- style="vertical-align: top;"
! style="font-style: italic;" | Eva
| Test dynamixel || style="background: yellow;" | Finish wrist movement dynamixel/CAD; Work on presentation || style="background: yellow;" | Presentation and finish Wiki || -
|- style="vertical-align: top;"
! style="font-style: italic;" | Jurre
| Test dynamixel || style="background: yellow;" | Finish wrist movement dynamixel; Work on presentation || style="background: yellow;" | Presentation and finish Wiki || -
|- style="vertical-align: top;"
! style="font-style: italic;" | Karsten
| Work on EEG-results from internet; Test dynamixel || style="background: yellow;" | Finish wrist movement dynamixel/CAD; Work on presentation || style="background: yellow;" | Presentation and finish Wiki || -
|- style="vertical-align: top;"
! style="font-style: italic;" | Steven
| Improvements in efficiency/running time of algorithm || style="background: yellow;" | Finish algorithm; Work on presentation || style="background: yellow;" | Presentation and finish Wiki || -
|- style="vertical-align: top;"
! style="font-style: italic;" | Thijs
| Improvements in efficiency/running time of algorithm || style="background: yellow;" | Finish algorithm; Work on presentation || style="background: yellow;" | Presentation and finish Wiki || -
|}
The yellow fields are the milestones.
Literature study
The human basics
The human hand
The human hand can be separated into three main parts: the forearm, the wrist and the fingers. For clarity we will look at each part separately. The fingers consist of two hinge joints and a condyloid joint at the base of the digit. While they are separate joints, they cannot work independently. A tendon connects each digit with the associated muscles in the forearm. For each digit there is a pair of muscles, of which one extends and one curls the digit. A second group of muscles, also situated in the forearm, spreads the fingers apart.[2]
The thumb is the only digit that differs slightly from this: its first joint allows one more degree of freedom, and a muscle in the hand enables movement in this direction.
The wrist consists of several bones that together function as a single condyloid joint. This allows it to flex, extend and deviate to both sides. The range of motion of the joint is 60° for flexing and extending and 20-30° of deviation to both sides.[3]
The forearm acts like a pivot joint, using its two bones to rotate the wrist. This gives the wrist about 180° of rotation. The forearm is also where almost all of the muscles controlling the hand are situated. This allows the muscles to become larger, and therefore stronger, than if they were confined within the hand.
Movement in the brain
Generating movement is very difficult. It starts in the brain, in the primary motor cortex. This has to decide not only which muscles to contract for the movement, but also how much power is needed. For example, to pick up a glass, the brain has to detect where the hand is relative to the glass, so it knows how the hand should move towards it, and it has to estimate the weight of the glass in order to know the force needed to pick it up. If a limb needs more control, more space is reserved for it in the brain. Hands and wrists are quite complex and therefore need a lot of control, so they are represented by a large area of the motor cortex. When a signal is produced in the brain, it travels through the spinal cord to the muscles. Signals in the left half of the brain go to the right side of the body. [4]
State of the Art
Robot hands
In prosthetic and robot arms, the hand and wrist are difficult to develop. Many prosthetics do not even have wrist motion, and their fingers cannot move separately, since these are difficult to mimic. Below, three state-of-the-art robots are discussed: first the SoftHand Pro-H, then a dexterous hand, and finally the wrist mechanism of the humanoid robot SARA.
Prosthetics can have different kinds of hands. The older ones have fingers, but these cannot move separately from each other. Some newer ones can, like the SoftHand Pro-H. This is one of the newer prosthetic devices, which has nineteen degrees of freedom. The fingers can grasp and make a fist, and are soft yet robust, which allows them to also hold pencils or irregular shapes. It is designed in such a way that it can grip with a force of 40 N. [5]
Also, a dexterous prosthetic hand has been developed. This hand works with micro servos and is connected to the brain: electroencephalography controls the hand. All fingers can move separately, and it has many degrees of freedom. The finger bones are moved by servo motors installed at the base of the fingers. The fingers can not only bend, but also move sideways. The thumb is even controlled by two motors, giving it more degrees of freedom; it can touch all the other fingers. This already comes very close to an actual human hand, perfect for amputees. [6]
Wrists in humanoid robots are not the same as in humans. In robots, there are typically two types. The first consists of one or more joints with one degree of freedom each, while the second consists of one joint with either two degrees of freedom (a condylar joint) or three degrees of freedom (a ball-and-socket joint). Wrists of the first type are often rigid and have a high carrying capacity and reliability. They have high positioning accuracy, since they have low backlash, which is important for motion control. Wrists with only one joint and multiple degrees of freedom are actually more human-like and are combined with artificial muscles. [7]
Control of prosthetics
Several ways to control prosthetics have been used in the past. Some were mechanical, but lately electromyography and electroencephalography are used as well. Prosthetic arms working with the spinal cord have been developed, so the wearer only has to think about a movement to perform it. Many existing prosthetics work on the twitching of remaining muscles. Other research is looking at gripping intuitively, so the user can grip something without consciously thinking about it. Working with the nervous system could also allow sensory feedback, so the user can have ‘a sense of touch’. This way, the applied pressure can be better regulated, and it gives users their sense of feeling back. [8]
One of the methods used for prosthetics is surface electromyography (sEMG). It can be used to control myoelectric prosthetics. In order to know which sEMG signal corresponds to which wrist movement, the sEMG of people making certain wrist movements was recorded. Each movement was then classified by the system. This resulted in an accuracy of 84.93% in real-time classification, so the prosthetic would make the right movement in about 85% of the cases. [9]
Furthermore, EEGs can be used to control a prosthetic hand as well. However, EEGs of movement are not yet completely understood. Most of the time, only a linear model is used for them, which can explain only 10% of the variance in cortical response, while over 80% of the response is nonlinear. A new, dynamic model was recently designed to address this. This model could explain 46% of the variance, which is a significant increase. This research was done with wrist movement, so for a prosthetic wrist this could mean more precise control of the prosthesis. [10]
A study comparing EEG signals with cortical current source (CCS) signals found that EEG-CCS signals represented movement more accurately than EEG signals alone. EEG signals could cause oscillatory movements, while EEG-CCS signals displayed the muscle movement better. [11] Another study tried to fuse sEMG and EEG signals. This turned out to be significantly better than either sEMG or EEG signals alone: the classification accuracy increased by more than 14%. [12] Furthermore, there is EEGLAB, an open source toolbox used to analyse EEG dynamics. It runs under the MATLAB environment and processes single-trial EEG data from a number of channels. The data can be visualized and preprocessed, and an independent component analysis can be performed. New scripts can be made based on the EEGLAB script. [13]
Since EEGs are measured by putting electrodes on the scalp, this approach makes no difference for many paralysed people. Most of these people are paralysed because of a fault in the spinal cord. Movement already starts in the brain, so when controlling a prosthetic with EEG data, there is no blockade in the spine. Hence, paralysed people will be able to use such prosthetics as well.
Feedback control for prosthetics research
Sensors measuring applied force can be used in prosthetic hands to control the strength of the hand when grabbing objects. This technology is already implemented in 'soft' grippers in the form of a pneumatic soft sensor (PSS), consisting of a silicon body and a flexible pressure sensor, and has been tested successfully on rubber balls.[14][15] This could be advantageous for picking up fragile objects, raw eggs or fruit for example, without breaking or bruising them.
Since interacting with humans and fragile (biological) objects is a requirement for hand prosthetics, soft and flexible sensors and actuators are vital. Soft actuators work by integrating microscopic changes at the molecular level into a macroscopic deformation. This allows them to be 3D-printed at a reasonable price. [16] It also means these actuators can be embedded within the soft tissue of skin-like silicon if necessary.
A prosthetic hand behaves similarly to an exoskeleton when controlling forces and movement, and there has been a lot of research on controlling exoskeletons. The mechanism can be controlled by a model-, hierarchy-, parameter- or usage-based control system. Most exoskeletons nowadays use a combination of these control systems to optimise performance. [17] The sensory feedback can also be interpreted by the human wearer of the exoskeleton or prosthesis. By using discrete (event-driven) rather than continuous feedback, the traditional limitation of neural delays can be circumvented. [18]
Force feedback turns out to be difficult for haptic devices, since they have limited force accuracy. New research investigates a variable motion mapping method, in which the motion mapping coefficient is regulated according to the stiffness of the object. This method could make it possible to identify objects without having to see them. However, the stiffness of the object has to be estimated beforehand. It could still be more efficient than current methods. [19]
Machine Learning
A machine learning algorithm is an algorithm that trains itself on patterns in the data it is given. There are different ways to train such an algorithm: supervised and unsupervised learning. In supervised learning, the algorithm is trained using a data set for which the result is already known. This means that the network can produce a result, check its own performance and optimize for the training data. This way it can become very good at predicting the results that were given; however, it will not recognise any results that were not provided in the training data. In unsupervised learning, the algorithm tries to group the data or find patterns based on the differences between specific data points. Then there is also deep learning, which uses a neural network with many layers to group data; this can be done supervised or unsupervised. As we are only interested in the signals that stimulate wrist movement (and not in grouping all brain activity), it is logical to use supervised learning. For EEGs it was found that the random forest algorithm works very well, better than the other tested algorithms (including neural networks).[20] Thus, supervised learning with the random forest algorithm was chosen.
The random forest algorithm
The random forest algorithm is an ensemble approach. This means that the data is split up into many smaller data sets, and the algorithm tries to optimize each of these n sets. This way there are many small optimized splits, and combining these results then forms the final result. Note that since n is very large, most of the data will be equally well represented. This is better than just running the algorithm on the full data set. The reason is that there is always some noise and variance in the given data set. If the full data set is used for training, the algorithm will train on all of the data, including the specific outliers of that data set, which leads to overfitting on these precise points.
More in detail, the data is first split into many different parts. A single data split is then created by adding some of these parts together, and on this set a single tree is trained. A tree splits the data in such a way that data points with the same outcome fall into the same categories. This is done using the Gini index of the tree, which tells you how much data with the same outcome ends up in the same category. The tree continues to make split points until the data is grouped optimally, under the constraints of a minimal group size and a maximal tree depth. This way a decision tree is made: given a data point, it can use this decision tree to predict the outcome. This is done for n trees (with n a large integer), each given a different split of the data, so n decision trees are created, each of which can predict the outcome for a single data point. The algorithm is now trained. If one now feeds a data point to this trained forest, the random forest algorithm asks each of these trees for its outcome and takes the most represented answer.
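The Gini index and majority vote described above can be sketched in a few lines of Python (a simplified illustration rather than the project's actual code; the function names are our own):

```python
from collections import Counter

def gini(groups):
    """Gini impurity of a candidate split: 0 means each group holds
    only one outcome; higher values mean the groups are more mixed."""
    total = sum(len(g) for g in groups)
    score = 0.0
    for g in groups:
        if not g:
            continue
        counts = Counter(g)
        # 1 - sum of squared outcome probabilities within this group,
        # weighted by the relative size of the group
        impurity = 1.0 - sum((c / len(g)) ** 2 for c in counts.values())
        score += impurity * (len(g) / total)
    return score

def forest_predict(trees, sample):
    """Each trained tree votes; the most represented answer wins."""
    votes = Counter(tree(sample) for tree in trees)
    return votes.most_common(1)[0][0]
```

A perfect split such as `gini([["left", "left"], ["right", "right"]])` scores 0, while a fully mixed two-class split scores 0.5, so the tree-building step simply keeps the split point with the lowest score.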
Machine learning for prostheses
Machine learning has been studied as a way to make prostheses react to electric pulses in the brain. Research on this subject has been done with rhesus monkeys, which were first trained to perform a 3D outreaching task and later implanted with chronic electrode arrays. After that, Principal Component Analysis (PCA) was used to reduce the number of neurons used in the analysis. Then a machine learning algorithm was employed to map the selected neurons to the direction of movement. The model made in this way was tested: it could correctly predict when the hand was in motion 81% of the time, but the motion ended less than 3 cm from the correct endpoint in fewer than 50% of the cases, so there is room for improvement in this algorithm. [21]
There has been extensive research into how cells in the motor cortex of the brain are related to the direction of movement of a hand. This research shows that cells have a ‘preferred direction’ and depend only on the movement of the hand, not its position.[22] These directions can be determined by measuring the discharge rate of these cells and the real activity of the hand. The total movement of the hand can then be predicted using a ‘population vector’, calculated as a sum over all of the cells with a weighting factor based on the discharge rate of each cell. This vector lies within the 95% directional cone more than 80% of the time using 475 cells (and more cells lead to higher accuracy).[23] Unfortunately this will not be very useful for our further research, since, for obvious reasons, we will not have the option to implant devices on human brains to detect discharge rates.
EEG-data
At first, the idea was to get proper EEG equipment to obtain data of the relevant wrist movements. One or more of the group members would be test persons, and if possible Arie Rommers as well. The plan was to compare the results: the group members still have their hands and can perform the movements, while Rommers no longer has his arm and has not performed any wrist movement for a long time. This could cause a difference in the way the brain performs the movement. If the prosthesis were ever actually used by somebody, this could be important information.
However, it quickly turned out that getting the equipment was difficult. The university itself works with such equipment for epilepsy research, but did not answer when contacted. The hospital said that only academic hospitals lend their equipment to externals. Therefore, the Radboud University in Nijmegen was contacted, but using the state-of-the-art EEG equipment of the Donders Institute there was extremely expensive and not possible in the time available for the project. During one of the tutor meetings this problem was discussed, and the advice was to look online for available data. Quickly, a site was found with many external links to free EEG data. [24] Sadly, there was barely any data of wrist movements, and what there was did not match what was needed for this project. To solve this, other data was chosen, from BCI Competition III. [25] This data did not contain specific movements, but rather movements of specific limbs. Since the wrist has four directions to move in, a data set was chosen with four different limbs to move: the left hand, right hand, foot and tongue. This test was done with three people, but only the data of test person 1 was used to train the algorithm. It might not be ideal not to have data of wrist movement only, but it is enough to prove that the idea of a robotic prosthetic controlled by the brain might work.
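Since the dataset labels limbs rather than wrist movements, the interpreter needs a fixed mapping from predicted limb to wrist command. A minimal sketch of such a mapping (the particular limb-to-direction correspondence below is hypothetical, not the one documented by the project):

```python
# Hypothetical mapping from the four BCI Competition III classes to
# wrist command codes; 0 is reserved for "no movement detected".
LIMB_TO_COMMAND = {
    "left hand": 1,   # wrist left
    "right hand": 2,  # wrist right
    "foot": 3,        # wrist down
    "tongue": 4,      # wrist up
}

def to_command(predicted_limb):
    """Translate a predicted limb label into a wrist command code."""
    return LIMB_TO_COMMAND.get(predicted_limb, 0)
```

This keeps the algorithm itself unaware of wrists: it only has to separate the four limb classes, which is exactly what the chosen data set supports.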
When looking for the EEG data online, a MATLAB toolbox to process EEG data was found as well, called EEGLAB (see Literature study, Control of prosthetics). It was expected that this might help with the algorithm, but it turned out not to be necessary. However, an extension of this toolbox was also found that predicts which limb was used given certain data. Since the idea of this project was to predict the wrist movement with a machine learning algorithm, this extension could help with recognising patterns in the data. It was therefore looked into briefly, but not extensively, since copying it was not desirable.
Had there been more time for this project, it might have been possible to obtain the desired data and train the algorithm on that. However, specific movements of the wrist could be harder to recognise than movements of different limbs. The control of each limb is located in a certain place within the motor cortex, and within the area of a limb the movements are generated. Since all wrist movement is located in the wrist section of the brain, the signals will be close together and therefore difficult for the algorithm to distinguish. The electrodes would have to be placed closely together and could therefore pick up noise from each other. It has however been shown that the algorithm can actually distinguish the signals; more about that in the next chapter.
The Final Algorithm
The final algorithm we used was a random forest algorithm implemented in Python. We designed the algorithm such that it could be trained on the EEG data that we found. It was implemented in such a way that it gives a discrete result: left, right, up or down, because those were the only things known about the data.
The algorithm was built in two Python files. The first file creates the random forest and exports it to a CSV file. It can be found here: Random Forest Builder
The second part uses the random forest created by the first part to make predictions, and it can be found here: Random Forest User
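The division of labour between the two files can be sketched as follows, with trivial one-threshold stumps standing in for real decision trees and a hypothetical file name (a sketch of the save/load round trip, not the project's actual code):

```python
import csv

def save_forest(stumps, path):
    """Builder side: write one tree per CSV row.
    Each stump is (feature_index, threshold, label_below, label_above)."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        for feature, threshold, below, above in stumps:
            writer.writerow([feature, threshold, below, above])

def load_forest(path):
    """User side: read the CSV back and rebuild the stumps."""
    with open(path, newline="") as f:
        return [(int(r[0]), float(r[1]), r[2], r[3]) for r in csv.reader(f)]

def predict(stumps, sample):
    """Let every stump vote and return the most common answer."""
    votes = {}
    for feature, threshold, below, above in stumps:
        label = below if sample[feature] < threshold else above
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)
```

Storing the forest on disk means the (slow) training step only has to run once, while the second file can reload and reuse the result instantly.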
Functional Model
Physical Model
Since the model is a prosthesis, we figured it would be best represented by a physical model. This would reveal real-world problems that could not be accurately represented in a digital model. The Dynamixel seemed ideal, being an all-in-one servo: its size meant the model would not be too large, and the driver, controller and reduction gear are included in the Dynamixel as well.
The CST-lab had two of these Dynamixels that we could borrow for the duration of our project. These Dynamixel RX-28 servos were mounted on a few plastic plates with two different kinds of joints. Our project would also need these joints, so the idea was to use this configuration with only some minor changes of our own.
However, one thing was missing: a way to connect the Dynamixel to the PC. The engineering department did not have a connector that was not in use, so a new Titan USB-COMI-M was ordered. This has the same functionality as an original USB2Dynamixel, but additionally supports RS-232 next to the RS-485 that the Dynamixel uses. It arrived after several days.
The connection between the Dynamixel and the laptop was made, and using pre-made Python code we tried to communicate with it. This prompted a UTF-8 decoding error on the first byte. After searching the web for a fix, the answer seemed to be to change the encoding to UTF-16, but after changing this it still gave the exact same error. Since this solved the problem for others but not for us, we tried to find other differences between our code and theirs that could explain the problem. This, however, bore no fruit and did not result in a fix.
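In hindsight, the underlying issue is that the Dynamixel speaks a binary protocol, so its responses are raw bytes rather than UTF-8 text and should not be decoded as such. As an illustration, a Protocol 1.0 instruction packet (the protocol used by the RX-28) can be built by hand; this sketch follows the published packet layout but was not part of our working setup:

```python
def build_packet(servo_id, instruction, params):
    """Build a Dynamixel Protocol 1.0 instruction packet:
    0xFF 0xFF header, ID, length, instruction, parameters, checksum."""
    length = len(params) + 2          # instruction byte + checksum byte
    body = [servo_id, length, instruction] + list(params)
    checksum = (~sum(body)) & 0xFF    # inverted low byte of the sum
    return bytes([0xFF, 0xFF] + body + [checksum])

# Example: WRITE_DATA (0x03) of goal position 512 (0x0200, little-endian)
# to control-table address 0x1E of the servo with ID 1.
packet = build_packet(1, 0x03, [0x1E, 0x00, 0x02])
```

Such a packet is then written to the serial port as raw bytes (with pyserial, `ser.write(packet)`) and the status packet read back with `ser.read()`, without any text decoding.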
The Dynamixel needed power, which was provided by a DC power supply connected to the Dynamixel with two cables. Consulting the internet did not give a single conclusive way to do this, so we went back to where the Dynamixels came from and asked around. This resulted in a configuration in which the power supply was connected to the same Dynamixel that was also connected to the laptop, but even this was not certain to be correct, because nobody there at that time had actually worked with the Dynamixel.
This made us think that the fault in the software (Python) might be due to a fault in the hardware (Dynamixel). We tested just powering the Dynamixels and found that the first one, to which the power and laptop were connected, turned on normally, while the second one received power but signalled an error: its LED turned on and stayed on. We disconnected the second Dynamixel from the first and focused on the first one, which was reconnected to the power and the laptop and turned back on.
The problem therefore had to be in the code, and a MATLAB version was tried. This gave the exact same error, plus a new C-lib error. This was not an improvement, and since the machine learning algorithm was also made in Python, we switched back.
The code was checked part by part by inserting breaks to see up to where it ran. This located the problem: it seemed to be the get-character part of the code. The problem, however, was not the code itself but Spyder and the console that it uses; the solution was to run the code through an external console. Now everything seemed to work: it would answer queries for the position and the ID of the current Dynamixel. But when given the command to move, it did not do anything. So other commands were tried, such as changing the Dynamixel ID and changing the perceived starting position, but these just gave back our original values. The problem turned out to be the way it was connected to the laptop.
The Titan USB-COMI-M was connected according to the RS-485 protocol, but its echo mode turned out to be the problem: it simply returned the values originally put in, and the Dynamixel was not even involved in most of the commands we had given it. The commands were given again and the ID could be changed. This seemed to work, but movement was still a problem; not even the software made for the Dynamixel by the parent company could get it to move.
Since the last problem was caused by the COMI-M, this problem might also be caused by it. The COMI-M has a set of jumpers that can be changed according to the protocol used. Almost every configuration that made sense was tried, without any progress.
Harrie van de Loo was contacted again, and this time there were people present who had worked with the Dynamixel before. However, they had only used the Dynamixel in combination with Linux. He pointed us to a ROS-based tutorial [26] for the Dynamixel, but this too would only work under Linux. This seemed to be the last resort to get the Dynamixel working within the time limit of the course.
One of the group members had a USB drive from which Linux could be booted. This was done and seemed to go well, until we realized that Linux needed an update to fully use the tutorial. While the update was being downloaded, the USB drive crashed. At that moment we opted for a digital model, because that was the only thing that could still be completed within the time limit.
Virtual Model
After concluding that the physical model was probably not going to operate, a computer aided design (CAD) model was made representing the human wrist. Siemens NX10® was used to design and control the model. The mechatronic concept designer (MCD) is a toolbox within NX which is used to make dynamic models for machines and robotics. The first step was to create the assembly consisting of a hand and an arm connected by a condylar joint. Fingers and a thumb have also been added to aid the visual impression of the hand. Next, the model has been placed in the MCD environment within NX. To constrain the hand movement, a ball joint with additional angular constraints has been implemented which closely resembles the human wrist.
Furthermore, to be able to control the motions of the hand, the algorithm has to communicate its output from the Python environment to the MCD environment. The output of the interpreter can be any number between zero and four, depending on the direction the algorithm predicts the user desires to move towards. The mechatronic concept designer toolbox has a built-in signal mapping tool to communicate with an OPC server. OPC stands for OLE for Process Control, a communication standard for industrial automation systems. This posed a new challenge: setting up a server on which variables can be stored effectively, without having to rent one.
To solve this issue, an online search revealed a company called Matrikon, which offers free server hosting software for simulation purposes. This made it possible to host a server on a local network to sustain communication between the algorithm and the model. It uses labels to store variables at a maintained update frequency of 1000 Hz.
The next step was to write the output from Python to the OPC server. To this end, a package called OpenOPC was used to issue write commands to the server. This package requires two other packages, Pyro (Python Remote Objects) and Pywin32, and it also requires a different Python version (2.7 instead of the 3.5 used to create the interpreter). After satisfying these conditions, the OpenOPC commands successfully transmitted commands from the algorithm to the functional model. A demonstration of this whole mechanism can be seen in the next chapter.
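The write step described above can be sketched roughly as follows, assuming OpenOPC and a locally hosted Matrikon simulation server. The server ProgID, tag name, and direction names below are illustrative assumptions, not the project's actual identifiers:

```python
# Sketch of pushing the interpreter's class label (0-4) to an OPC tag via
# OpenOPC. Server ProgID, tag name, and direction names are placeholders.
try:
    import OpenOPC  # third-party, Windows-only; needed for the actual write
except ImportError:
    OpenOPC = None

# Hypothetical mapping of interpreter output to movement directions
DIRECTIONS = {0: 'rest', 1: 'up', 2: 'down', 3: 'left', 4: 'right'}

def validate_label(label):
    """Check the interpreter output before writing it to the server."""
    if label not in DIRECTIONS:
        raise ValueError('interpreter output must be 0..4, got %r' % (label,))
    return label

def write_label(opc, label, tag='WristModel.Direction'):
    """Write one class label; OpenOPC's write takes a (tag, value) tuple."""
    opc.write((tag, validate_label(label)))

if __name__ == '__main__' and OpenOPC is not None:
    opc = OpenOPC.client()
    opc.connect('Matrikon.OPC.Simulation.1')  # locally hosted server
    write_label(opc, 3)
    opc.close()
```

MCD's signal-mapping tool then reads the same tag on the server side, so the model reacts to whatever value the interpreter last wrote.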
=== Demonstration ===
To demonstrate the communication between the algorithm interpreting EEGs and the functional model executing its output commands, a video was made showcasing this behaviour in real time. The EEG data to interpret was the BCI Competition III dataset described earlier. The model is shown inside the MCD environment of NX10, and the command prompt on the left shows the interpreter running, reading the dataset in real time. The top-left box shows the labels created on the OPC server to store the algorithm output.
The frequency at which the algorithm outputs new signals can be lowered to give a clearer view of the functionality of the components, as shown below:
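One simple way to lower the output rate is a small throttling wrapper between the interpreter and whatever writes to the server. This is an illustrative sketch, not the project's code; the `write` callable stands in for the actual OPC write:

```python
import time

class ThrottledWriter(object):
    """Forward interpreter labels to the model at a reduced rate.

    Writes at most once per `period` seconds and skips repeated labels,
    so the model's individual motions are easier to follow. `write` is
    any callable (e.g. a wrapper around opc.write); `clock` is injectable
    for testing.
    """

    def __init__(self, write, period=0.5, clock=time.time):
        self.write = write
        self.period = period
        self.clock = clock
        self.last_time = None
        self.last_label = None

    def push(self, label):
        """Offer one label; returns True if it was forwarded."""
        now = self.clock()
        if self.last_time is not None and now - self.last_time < self.period:
            return False  # too soon after the previous write: drop sample
        if label == self.last_label:
            return False  # unchanged label: nothing to update
        self.write(label)
        self.last_time = now
        self.last_label = label
        return True
```

Dropping intermediate samples is acceptable here because each label is a full target direction rather than an increment, so the model always catches up on the next forwarded value.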
== Coaching questions ==
== Bibliography ==
- K. Ziegler-Graham, E. J. MacKenzie, P. L. Ephraim, T. G. Travison, R. Brookmeyer. Estimating the prevalence of limb loss in the United States: 2005 to 2050. Archives of Physical Medicine and Rehabilitation, 89:422-429, March 2008.
- C. L. Taylor, R. J. Schwarz. The anatomy and mechanics of the human hand. Artificial Limbs, 1955.
- D. C. Boone, S. P. Azen. Normal Range of Motion of Joints in Male Subjects. The Journal of Bone and Joint Surgery, 1979.
- S. Schwerin. The Anatomy of Movement. Brain Connection. Retrieved 31 March 2018, from https://brainconnection.brainhq.com/2013/03/05/the-anatomy-of-movement/.
- C. Piazza, M. G. Catalano, S. B. Godfrey, M. Rossi, G. Grioli, M. Bianchi, K. Zhao, A. Bicchi. The SoftHand Pro-H. IEEE Robotics & Automation Magazine, 2017, p.87-101.
- M. Owen, C. Au, A. Fowke. Development of a Dexterous Prosthetic Hand. Journal of Computing and Information Science in Engineering, 2018.
- M. Penčić, M. Rackov, M. Čavić, I. Kiss, V. G. Cioată. Social humanoid robot SARA: development of the wrist mechanism. IOP Conf. Series: Materials Science and Engineering 294, 2017.
- J. Edwards. Signal Processing Powers Next-Generation Prosthetics. IEEE Signal Processing Magazine, 2018, p.13-16.
- G. D. Eisenberg, K. G. H. M. Fyvie, A. Mohamed. Real-Time Segmentation and Feature Extraction of Electromyography: Towards Control of a Prosthetic Hand. IFAC PapersOnLine 50-2, 2017, p.151-156.
- M. P. Vlaar, G. Birpoutsoukis, J. Lataire, M. Schoukens, A. C. Schouten, J. Schoukens, F. C. T. van der Helm. Modeling the Nonlinear Cortical Response in EEG Evoked by Wrist Joint Manipulation. IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 26, 2018, p.205-215.
- T. Kawase, N. Yoshimura, H. Kambara, Y. Koike. Controlling an electromyography-based power-assist device for the wrist using electroencephalography cortical currents. Advanced Robotics, 2016, p.88-96.
- X. Li, O. W. Samuel, X. Zhang, H. Wang, P. Fang, G. Li. A motion-classification strategy based on sEMG-EEG signal combination for upper-limb amputees. Journal of NeuroEngineering and Rehabilitation, 2017.
- A. Delorme, S. Makeig. EEGLAB: an open source toolbox for analysis of single-trial EEG dynamics including independent component analysis. Journal of Neuroscience Methods, 2004, p.9-21.
- H. Yang, Y. Chen, Y. Sun, L. Hao. A novel pneumatic soft sensor for measuring contact force and curvature of a soft gripper. Sensors and Actuators A: Physical, 2017, p.318-327.
- Y. Zhu, J. Li, H. Cai, Y. Wu, H. Ding, N. Pan, X. Wang. Highly sensitive and skin-like pressure sensor based on asymmetric double-layered structures of reduced graphite oxide. Sensors and Actuators B: Chemical, 2018, p.1262-1267.
- A. Zolfagharian, A. Z. Kouzani, S. Y. Khoo, A. A. A. Moghadam, I. Gibson, A. Kaynak. Evolution of 3D printed soft actuators. Sensors and Actuators A: Physical, 2016, p.258-272.
- K. Anam, A. A. Al-Jumaily. Active Exoskeleton Control Systems: State of the Art. Procedia Engineering, 2012, p.988-994.
- C. Cipriani, J. L. Segil, F. Clemente, R. F. ff. Weir, B. Edin. Humans can integrate feedback of discrete events in their sensorimotor control of a robotic hand. Exp Brain Res, 2014, p.3421-3429.
- L. Liu, Y. Zhang, G. Liu, W. Xu. Variable motion mapping to enhance stiffness discrimination and identification in robot hand teleoperation. Robotics and Computer-Integrated Manufacturing 51, 2018, p.202-208.
- A. Chan, C. Early, S. Subedi, Y. Li, H. Lin. Systematic analysis of machine learning algorithms on EEG data for brain state intelligence. 2015 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), 2015.
- R. E. Isaacs et al. Work toward real-time control of a cortical neural prosthesis. Retrieved 18 February 2018, from https://www.ncbi.nlm.nih.gov/pubmed/10896185.
- A. B. Schwartz, A. P. Georgopoulos. Relations between the amplitude of 2-dimensional arm movements and single cell discharge in primate motor cortex. Soc. Neurosci. Abstr. 13: 244, 1987.
- A. B. Schwartz, R. E. Kettner, A. P. Georgopoulos. Primate motor cortex and free arm movements to visual targets in three-dimensional space. I. Relations between single cell discharge and direction of movement. J. Neurosci. 8: 2913-2927, 1988.
- EEG / ERP data available for free public download. Retrieved 15 March 2018, from https://sccn.ucsd.edu/~arno/fam2data/publicly_available_EEG_data.html.
- G. Schalk, D. Krusienski, J. R. Wolpaw, B. Blankertz, G. Dornhege, K. Müller, A. Schlögl, B. Graimann, G. Pfurtscheller, S. Chiappa, J. R. Millán, M. Schröder, T. Hinterberger, T. Navin Lal, G. Widman, N. Birbaumer. BCI Competition III. Retrieved 19 March 2018, from http://www.bbci.de/competition/iii/.
- Dynamixel tutorials based on ROS. Retrieved 20 March 2018, from http://wiki.ros.org/dynamixel_controllers/Tutorials.