PRE2024 3 Group4
Max van Aken
Bram van der Pas
Jarno Peters
Simon B. Wessel
Javier Basterreche Blasco
Matei Manaila
Start of project
Problem statement and objective
The problem this group wants to tackle is that many swimmers have flaws in their swimming technique, while both the quality and the number of trainers are declining in many amateur clubs. To address this, we want to create a swimsuit with sensors that track the position and orientation of the swimmer's limbs. The suit should then be able to give feedback based on the data the sensors acquire.
The users
People who swim for sport as amateurs (professionals generally already have access to good coaches) and who wish to improve their technique, which covers nearly all of them.
Requirements
The suit should not be too heavy and should not inhibit motion too much, since swimmers must be able to swim as normal while the suit is measuring. It should also be a one-size-fits-all solution, so that swimming clubs need to purchase fewer suits. Finally, the suit should be as affordable as possible, since the end users are amateur clubs, which typically have limited budgets.
Approach
Preferably we would like to do this using sensors on the suit, as this means all of the technology is on the suit itself and no external infrastructure is required in the swimming pool. The sensors would be placed on top of or near joints of the body, such as the shoulder, elbow and wrist for the arms, and the hips, knees and ankles for the legs. Distances between joints would be determined using ultrasonic sensors, and orientation would be determined by placing a gyroscope at each joint location. With this approach, a reference sensor is required at the base of each arm or leg, so that the relative position data from the joints can be converted into absolute positions, which are more useful.
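To make that conversion step concrete, below is a minimal sketch of how joint positions could be chained together from segment lengths and orientations. It assumes each joint's gyroscope/IMU readings can already be reduced to a rotation matrix relative to the previous segment and that the ultrasonic measurements give the segment lengths; all names and numbers are illustrative, not a committed design.

```python
import numpy as np

def rotation_z(angle_rad):
    """Rotation about the z-axis (a planar simplification for illustration)."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def joint_positions(segment_lengths, segment_rotations, base=np.zeros(3)):
    """Chain segment lengths and orientations into absolute joint positions.

    segment_lengths:   distances between consecutive joints (e.g. from the
                       ultrasonic sensors): shoulder->elbow, elbow->wrist, ...
    segment_rotations: 3x3 rotation matrices (e.g. derived from the IMU at each
                       joint), each relative to the previous segment.
    base:              position of the reference sensor at the base of the limb.
    """
    positions = [np.asarray(base, dtype=float)]
    orientation = np.eye(3)                      # running orientation of the chain
    x_axis = np.array([1.0, 0.0, 0.0])           # segments point along local x
    for length, rotation in zip(segment_lengths, segment_rotations):
        orientation = orientation @ rotation
        positions.append(positions[-1] + length * (orientation @ x_axis))
    return np.array(positions)

# Example: shoulder at the origin, upper arm 0.30 m, forearm 0.25 m,
# elbow bent by 45 degrees in the plane.
arm = joint_positions(
    segment_lengths=[0.30, 0.25],
    segment_rotations=[rotation_z(0.0), rotation_z(np.radians(45))],
)
print(arm)   # rows: shoulder, elbow and wrist positions in meters
```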
If the sensor idea turns out to be impossible to implement during this course, the alternative would be to use the principles of a motion capture suit, where bright white marker balls are placed on a black suit and their positions are determined using two cameras. In this case one camera would view from the side, while the other would view from above. From this data the same feedback can be constructed as with the sensor principle, but it would require two cameras on rails to be installed in the swimming pool, and these cameras would need to follow the suit around. This would make the system more expensive, as the rails would need to be either 25 or 50 meters long, depending on the swimming pool. It would also be less practical for amateur clubs, as the pools they use would need to agree to having such rails installed.
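As a rough illustration of how such a two-camera setup could recover a 3D marker position, here is a minimal sketch that assumes approximately orthographic, calibrated side and top views sharing a known meters-per-pixel scale; the numbers and names are illustrative assumptions only.

```python
import numpy as np

def reconstruct_marker(side_px, top_px, meters_per_pixel=0.01):
    """Combine a side view (which sees x and z) and a top view (which sees
    x and y) into one 3D marker position. side_px and top_px are
    (column, row) pixel coordinates of the same marker in the two images."""
    x_side, z = np.asarray(side_px, dtype=float) * meters_per_pixel
    x_top, y = np.asarray(top_px, dtype=float) * meters_per_pixel
    x = 0.5 * (x_side + x_top)      # x is seen by both cameras; average it
    return np.array([x, y, z])

# The same marker seen at pixel (420, 118) in the side image
# and at pixel (418, 240) in the top image:
print(reconstruct_marker(side_px=(420, 118), top_px=(418, 240)))
```

In a real pool the cameras would have perspective distortion and would move along the rails, so a proper calibration and triangulation step would be needed; this only shows the basic idea of combining the two views.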
Milestones and deliverables
Due to the time frame and the scope of the course, a full-body suit is likely not feasible. To have something to show at the end of the course, a prototype will be built for one arm. There are also multiple swimming strokes; for this project the focus will be on the front crawl.
The milestones for the construction of the arm suit would be as follows:
- Build the sleeve (for now without any sensors). Keep the type of sensor to be used in mind when designing the sleeve.
- Build a functional prototype, either by attaching sensors, or by making a construction with external cameras. The prototype should be able to send position data for each joint to a computer.
- Convert the raw position data to usable coordinates, likely with angles and distances between joints.
- Construct a program that can differentiate between correct and incorrect technique. Some technique errors may be distinguished with manually chosen rules, while others might require a simple application of machine learning. One method would be to gather data for both correct and incorrect arm motion, extract simple features from the data, such as the minimal and maximal angle of the elbow joint, and train a simple decision tree (see the sketch after this list).
- (bonus) If there is some time left, it may be possible to also write a program for a different stroke, such as the backstroke or the butterfly.
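As a rough impression of what the last two milestones could look like, here is a minimal sketch that reduces toy joint positions to the elbow-angle features mentioned above and trains a small decision tree on them; the feature choice, the numbers, and the use of scikit-learn are illustrative assumptions, not a committed design.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def elbow_angle(shoulder, elbow, wrist):
    """Angle at the elbow (degrees) from three joint positions in one frame."""
    upper = np.asarray(shoulder) - np.asarray(elbow)
    fore = np.asarray(wrist) - np.asarray(elbow)
    cos_angle = np.dot(upper, fore) / (np.linalg.norm(upper) * np.linalg.norm(fore))
    return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))

def stroke_features(frames):
    """Reduce one recorded stroke (a list of (shoulder, elbow, wrist) frames)
    to two simple features: the minimal and maximal elbow angle."""
    angles = [elbow_angle(s, e, w) for s, e, w in frames]
    return [min(angles), max(angles)]

# Toy training set: one feature vector per recorded stroke, with labels
# provided by a coach (1 = correct technique, 0 = flawed technique).
X = [[35, 160], [40, 155], [80, 170], [90, 175]]
y = [1, 1, 0, 0]
clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# Classify a new toy stroke of two frames (joint positions in meters):
stroke = [
    ((0.0, 0.0, 0.0), (0.30, 0.0, 0.0), (0.40, 0.20, 0.0)),
    ((0.0, 0.0, 0.0), (0.30, 0.0, 0.0), (0.55, 0.05, 0.0)),
]
print(clf.predict([stroke_features(stroke)]))   # -> [0], flagged as flawed
```

With real data the features would of course be richer (timing, hand path, symmetry), but the pipeline of position data -> features -> simple classifier stays the same.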
Literature study
Wearable motion capture suit with full-body tactile sensors[1]
This article discusses a suit with not only motion sensors but also tactile sensors, which detect whether a part of the suit is touching something. The motion sensors consist of an accelerometer, several gyroscopes, and multiple magnetometers. The data from these sensors is processed on a local CPU and subsequently sent to a central computer, to decrease processing time and ensure real-time calculations. The goal of the suit is to give researchers in the fields of sports and rehabilitation more insight into human motion and behavior, as no motion capture suit with both motion sensors and tactile sensors had been implemented before.
Motion tracking: no silver bullet, but a respectable arsenal[2]
This article goes over the different principles of motion tracking and the methods that exist: mechanical, inertial, acoustic, magnetic, optical, and radio and microwave sensing.
Mechanical sensing: Provides accurate data for a single target, but generally has a small range of motion. These sensors generally work by detecting mechanical stress, which is not a desirable approach for this project.
Inertial sensing: By using gyroscopes and accelerometers, the orientation and acceleration can be determined inside the sensor itself. By compensating for gravity and integrating the acceleration twice, the position can be determined. One downside is that these sensors are quite sensitive to drift and noise: a small error integrated over time yields large errors in the final position (see the dead-reckoning sketch below this overview). For our project this method would be very useful, since determining the relative positions of sensors with respect to each other is difficult when their orientation is unknown.
Acoustic sensing: These sensors transmit a short ultrasonic pulse and time how long it takes to come back. This method has multiple challenges, such as the fact that it can only measure relative changes in distance, not absolute distance. It is also very sensitive to noise, as the sound wave can reflect off multiple surfaces; those reflections arrive back at the sensor at different times, causing all sorts of problems. To mitigate this, the sensor can be programmed to only consider the first reflection and ignore the rest, as the first reflection is generally the one that is to be measured (see the time-of-flight sketch below this overview).
Magnetic sensing: These sensors rely on magnetometers for static fields and on a change in induced current for changing magnetic fields. One creative way to use this is to have a double coil produce a magnetic field at a known location and estimate the sensor's position and orientation based on the field it measures.
Optical sensing: These sensors consist of two components: a light source and a light sensor. The article discusses these sensors further, but since water and air have different refractive indices and the sensors will move in and out of the water unpredictably, this approach is not usable for this project.
Radio and microwave sensing: Based on what the article has to say, this is generally used for long-range position determination, such as GPS. This is likely not useful for this project.
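To make the drift problem mentioned under inertial sensing concrete, below is a minimal dead-reckoning sketch showing how a small constant accelerometer bias grows quadratically once it is integrated twice; the bias value and sampling rate are made-up illustrative numbers.

```python
import numpy as np

# Naive dead reckoning: double-integrate acceleration to obtain position.
dt = 0.01                       # 100 Hz sampling (illustrative)
t = np.arange(0.0, 10.0, dt)    # 10 seconds of data
true_accel = np.zeros_like(t)   # the sensor is actually standing still
bias = 0.05                     # constant accelerometer bias in m/s^2 (made up)

measured_accel = true_accel + bias
velocity = np.cumsum(measured_accel) * dt    # first integration
position = np.cumsum(velocity) * dt          # second integration

print(f"Position error after 10 s: {position[-1]:.2f} m")
# ~2.5 m of drift: the error grows as 0.5 * bias * t^2, which is why pure
# double integration needs to be corrected, e.g. using the reference sensors.
```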
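In the same spirit, here is a small sketch of the pulse-echo timing and first-reflection gating described under acoustic sensing; the echo times are made up, and the speed-of-sound values are standard figures for air and water.

```python
def echo_distance(time_of_flight_s, speed_of_sound=1480.0):
    """Distance to the reflecting surface from a round-trip echo time.
    The speed of sound is roughly 343 m/s in air and 1480 m/s in water,
    so the medium the sensor happens to be in matters a lot."""
    return speed_of_sound * time_of_flight_s / 2.0

def first_echo(echo_times_s):
    """Ignore later reflections by keeping only the earliest echo,
    as suggested in the article."""
    return min(echo_times_s)

# A pulse whose first echo returns after 0.4 ms, measured under water:
print(echo_distance(first_echo([0.0004, 0.0011, 0.0019])))   # ~0.30 m
```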
The Use of Motion Capture Technology in 3D Animation[3]
This article reviews the literature on motion capture in 3D animation and aims to identify the strengths and limitations of different methods and technologies. It starts by describing different motion capture systems, and later draws conclusions about accessibility, ease of use, and the future of motion capture in general. Although this last part is less relevant for us, the descriptions of the different systems are.
Active & passive optical motion capture: The basic idea is that an object or a person wears a suit with either active or passive optical markers. Passive markers only reflect external light, and their positions are measured using external cameras, generally multiple cameras from several directions; the material is usually selected such that it reflects infrared light. Active markers, on the other hand, emit their own light, again generally in the infrared part of the spectrum, and their positions are likewise measured using cameras.
Inertial motion capture: This system uses inertial sensors (described in [2]) to determine the positions of key joints and body parts. It does not depend on lighting or cameras, which increases the freedom of motion. A widely used inertia-based system is the Xsens MVN system.
Markerless motion capture: In this case no markers or sensors are used; the motion is simply recorded with one or multiple cameras. Software then interprets the footage and turns it into something usable for animators. For us this approach is not very useful.
Surface electromyography: This method is generally used to detect fine motions in the face, using sensors that detect the electrical currents produced by contracting muscles. Again, this is not very useful for us.
Musculoskeletal model-based inverse dynamic analysis under ambulatory conditions using inertial motion capture[4]
This article discusses the use of inertial motion sensors from Xsens, which is currently part of Movella; the specific model used is the Xsens MVN Link. The researchers constructed a suit using these sensors and let test subjects perform different movements. The root mean square difference between the estimated and the reference joint angles was found to lie between roughly 3 and 8 degrees, depending on the body part measured. If we can reach values like these for our prototype, that would be sufficient. Since this article is from 2019, the current state-of-the-art technology might be even better.
Sensor network oriented human motion capture via wearable intelligent system[5]
This article uses 15 wireless inertial motion sensors placed on the limbs and other important locations to capture the motion of a person. The researchers focused on a lightweight design with small sensors and a low impact on behavior. The specific sensors used are MPU9250 units, which only cost about 12 euros. The researchers transform the coordinates obtained from the sensors and report an error of about 1.5% in the determined displacement.
- Y. Fujimori, Y. Ohmura, T. Harada and Y. Kuniyoshi, "Wearable motion capture suit with full-body tactile sensors," 2009 IEEE International Conference on Robotics and Automation, Kobe, Japan, 2009, pp. 3186-3193, doi: 10.1109/ROBOT.2009.5152758. https://ieeexplore.ieee.org/abstract/document/5152758
- G. Welch and E. Foxlin, "Motion tracking: no silver bullet, but a respectable arsenal," IEEE Computer Graphics and Applications, vol. 22, no. 6, pp. 24-38, Nov.-Dec. 2002, doi: 10.1109/MCG.2002.1046626. https://ieeexplore.ieee.org/abstract/document/1046626
- M. C. Wibowo, S. Nugroho and A. Wibowo, "The use of motion capture technology in 3D animation," International Journal of Computing and Digital Systems, vol. 15, no. 1, pp. 975-987, 2024. https://pdfs.semanticscholar.org/9514/28e966feece961d7100448d0caf17a8b93ec.pdf
- A. Karatsidis, M. Jung, H. M. Schepers, G. Bellusci, M. de Zee, P. H. Veltink and M. S. Andersen, "Musculoskeletal model-based inverse dynamic analysis under ambulatory conditions using inertial motion capture," Medical Engineering & Physics, vol. 65, pp. 68-77, 2019, ISSN 1350-4533. https://doi.org/10.1016/j.medengphy.2018.12.021
- S. Qiu, H. Zhao, N. Jiang et al., "Sensor network oriented human motion capture via wearable intelligent system," International Journal of Intelligent Systems, vol. 37, pp. 1646-1673, 2022. https://doi.org/10.1002/int.22689