PRE2018 1 Group2
Project Robots Everywhere (Q1) - Group 2
Group 2 consists of:
- Hans Chia (0979848)
- Jared Mateo Eduardo (0962419)
- Roelof Mestriner (0945956)
- Mitchell Schijen (0906009)
Progress
Weekly Presentations
At the start of each weekly meeting we will prepare a short presentation about our progress. After the weekly meetings the newest presentation will be added to this section of the wiki.
- week 1 (10-09)
- week 2 (17-09)
- week 3 (24-09)
- week 4 (01-10) To be uploaded during the next major wiki update
Progression on milestones
Below we list the completed milestones, together with comments about their completion and the date on which they were completed.
2018-09-06
The team decided on a topic. During the kick-off meeting on Monday 2018-09-03 we brainstormed about several topics. During the first week we performed literature studies to assess the originality and feasibility of these topics. We continued brainstorming and decided on a different topic, which you can read about in the remainder of this wiki page. The initial topics are listed below:
- Extending the Smart City concept by adding functionality to satellite navigation. When a driver enters a city, they are asked whether they want to reserve and drive to a free parking spot near their destination. If the driver agrees, their satnav will query the city network, which will then book a free parking spot near the driver's destination. The satnav will automatically change its destination to the chosen parking spot. We were under the impression that such a system would make driving in an unknown city less stressful, and increase the efficiency of city traffic. During an initial literature study we found that there were numerous implementations of this topic. Although each of these implementations differed from our own vision in some way, we eventually decided to look for a different topic.
- Creating a new Guitar Robot that builds upon the work done by PRE2017_4 Group 2. This topic appeared interesting to us as it included building an actual robot. We also had ideas for making a platform where disabled musicians can play each other’s songs, regardless of the specific modification that was done to their instrument. We were concerned about this topic as it had been done before, and because only one person in our group currently plays a musical instrument.
Week 2
During the first coaching session several new points of interest for the literature study were found. During week 2 we worked on expanding the literature study with more information about how many people have to deal with RSI, and how RSI can be prevented. The following sections were added to the literature study:
- How big of a problem is RSI?
- RSI issues: upper extremity problems
- RSI prevention: how to properly set up a computer monitor
- RSI prevention: how to use breaks to combat RSI
These additions can be viewed on the 0LAUK0 2018Q1 Group 2 - SotA Literature Study page. The results of these additions to the literature study also provide key information for the creation of the design plans and the creation of a prototype, which will be the focus of week 3 of this project.
Week 3
During this week the group started working on milestone 4: creating design plans. Using the conclusions of the literature studies done in the past two weeks we decided to focus on the automated monitor arm for the first design plan and prototype. The team also tried an existing software library for computer vision that can aid in the creation of a prototype of the automated monitor arm.
Week 4
In order to leave room in our schedule for testing a prototype, this week's work focused on designing the prototype.
Mitchell worked on the facial tracking software. At the start of the week the team obtained two identical webcams, which would be used for testing stereoscopy. However, this setup soon presented two problems. In Processing, a video stream is specified by the name of the desired camera, the desired resolution and the desired framerate; if Processing can find a video mode that meets all of these parameters, a video capture can be initiated. It is not possible, however, to specify a camera by the USB port it uses to connect to the computer, so both webcams registered in the programming environment as the same device. Combined, the two cameras exposed over 70 video modes, the first half of this list belonging to one camera and the second half to the other. No matter which video modes were chosen, only one video capture would start. Mitchell eventually found a work-around: changing the display name of one of the webcams in the computer's registry gave both cameras a distinct name, allowing both of them to be selected. However, this work-around had to be reapplied after almost every reboot of the computer, and it only worked for the specific USB port the camera was plugged into at the time.
The second problem is that the OpenCV port for Processing has no documented method for calibrating the two cameras. Using a mix of the computer's integrated webcam and one external webcam, or both external webcams, yielded only a limited improvement in depth perception; in both cases the result was a highly distorted depth image.
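For reference, the quantity that this calibration ultimately enables is depth estimated from disparity. A minimal sketch of that relation, written in Python rather than Processing, with purely illustrative focal-length and baseline values (not measurements of our actual webcams):

```python
def depth_from_disparity(disparity_px, focal_length_px, baseline_m):
    """Estimate the depth (in metres) of a point seen by two rectified
    cameras, from the pixel disparity between the two views.

    The formula assumes calibrated, rectified cameras; without the
    calibration step discussed above, real disparities are distorted.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a visible point")
    return focal_length_px * baseline_m / disparity_px

# Illustrative numbers: 1000 px focal length, 6 cm baseline, 60 px disparity
print(depth_from_disparity(60, 1000, 0.06))  # roughly 1.0 (metres)
```

Note that disparity shrinks as distance grows, which is why the depth estimate becomes increasingly noisy for users sitting far from the monitor.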
Another issue that needs work is performance: the program hovers around five frames per second of output when face detection is performed, so Mitchell is making the face detection multi-threaded. A first multi-threaded implementation immediately increased performance to 30 frames per second, but debugging is still needed, as the program locks up after a second or so. For now testing will continue with a combination of the computer's built-in webcam and one of the external webcams. During the next week one of the external cameras will be swapped for one of a different make or model, so that both can be detected without the problems described above. Mitchell is also looking into the calibration of two cameras for stereoscopic vision, as calibration should be able to correct for the differences between different cameras.
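The worker-thread pattern behind this speed-up can be sketched as follows. This is a minimal Python illustration of the idea only; the actual prototype uses Processing, and `detect_faces` here is a hypothetical stand-in for the real (slow) OpenCV detector:

```python
import queue
import threading

def detect_faces(frame):
    # Hypothetical stand-in for the real OpenCV face detector.
    return [f"face-in-{frame}"]

def worker(frames, results, stop):
    # The worker repeatedly takes a frame and runs detection on it,
    # so the main render loop never blocks on the slow detector.
    while not stop.is_set():
        try:
            frame = frames.get(timeout=0.1)
        except queue.Empty:
            continue
        results.put(detect_faces(frame))

frames = queue.Queue(maxsize=1)   # maxsize=1: stale frames are dropped
results = queue.Queue()
stop = threading.Event()
t = threading.Thread(target=worker, args=(frames, results, stop))
t.start()

frames.put("frame-0")            # main loop hands a frame to the worker...
faces = results.get(timeout=5)   # ...and picks up detections when ready
stop.set()
t.join()
print(faces)  # → ['face-in-frame-0']
```

Keeping the frame queue at size one means the detector always works on the most recent camera image instead of falling further and further behind.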
Hans worked on controlling the stepper motors that would be used in the prototype.
Roelof worked on the design of the monitor arm. A render of this design is available in our Google Drive project folder, and is visible here.
Jared worked on devising a test plan in order to assess whether the final prototype will function properly (see section prototype – testing). This first test plan details the unit, integration and user tests needed to evaluate if the prototype will function completely in line with the design plans.
Topic
Topic in a nutshell
Our project centers on designing an RSI robot: a smart desk that automatically adjusts itself to the posture of its user to improve comfort, increase productivity and prevent medical conditions that are part of Repetitive Strain Injury (RSI). The working name of this concept is the Smart Flexplace System (SFS).
Problem statement and objectives
Flexplaces are widely used by large companies nowadays. Often only a handful of directors have their own office, while the rest of the employees can work anywhere they want, which improves their productivity and work attitude. The new main building at the TU/e, ATLAS, will also feature flexplaces only. The downside of these flexplaces is that they are not custom-adjusted to the person working at them, which can lead to bad working conditions. Most users do not know exactly what the best posture is or when to take a small break from work, which can lead to Repetitive Strain Injury (RSI).
To solve this problem, 0LAUK0 Group 2 would like to introduce the Smart Flexplace System – TU/e ATLAS 2019 project. In this project the working-condition problems of the flexplaces in ATLAS will be addressed by studying multiple fields of interest and creating a Smart Flexplace System (SFS) that combines software with adjustable office hardware. This system will adjust itself automatically depending on the user's posture and profile.
User Description
The RSI-preventive AI will be attached to tables, chairs and computers. Its users will therefore be people who work with computers regularly (daily): people working in the ICT sector, students, project managers, etc. In this study the focus will be specifically on flexible working spaces, so everyone making use of these flexible working spaces can be considered a user.
User Requirements
User-based Requirements The aim of this AI is to prevent RSI, so in general the system (table, chair, etc.) must be able to adjust itself in order to prevent RSI. This will be done through a user interface: the user logs into the AI system with his or her account. This account needs to contain information about the height, size and disabilities of the user, so that the system can set itself to the position that best prevents RSI. Secondly, every now and then the AI needs to readjust itself (either by warning the user to sit differently, or by moving into another position itself). The user should also be able to adjust the chair, table or computer themselves, so the interface needs to offer an option to steer the system manually.
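As an illustration of the kind of data such an account might hold, here is a minimal Python sketch. The field names are assumptions for illustration only, not a finalized schema, and the elbow-height rule of thumb (standing elbow height is roughly 63% of body height) is a common anthropometric approximation, not a value from our design plans:

```python
from dataclasses import dataclass, field

@dataclass
class UserProfile:
    """Hypothetical sketch of a user account for the SFS."""
    name: str
    height_cm: float
    disabilities: list = field(default_factory=list)

    def standing_desk_height_cm(self):
        # Rule of thumb: desk surface at standing elbow height,
        # which is approximately 0.63 x body height.
        return round(0.63 * self.height_cm, 1)

profile = UserProfile("demo-user", 180.0)
print(profile.standing_desk_height_cm())  # → 113.4
```

A real profile would of course be stored server-side so it follows the user from flexplace to flexplace.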
Technical-based Requirements Considering technical requirements, all re-adjustable instruments need a small motor in order to readjust themselves in the first place. These motors need to be able to receive commands from both the user interface and the AI system, in order to rotate by the right number of degrees.
Preparation
Approach
As you may have read in the topic section of this wiki, our concept entails the research and design of a smart desk that reduces/prevents RSI. We will use a literature study to gain insights into topics relevant to our goal. We will use both the results from the literature study and information from our contact person, an Arbo-coordinator at the TU/e, to better define the user requirements. From this we will develop design plans that encompass the main components of our concept. We aim to validate those plans with our contact person, and we want to develop one or more of these design plans into a working prototype. After this prototype has been validated we will present our process, as well as the results of our research, design and prototyping, in the final presentation of this course.
Milestones
List of milestones
We have defined several milestones that will guide the progression of our project.
- Choose a research topic.
- Research the State-of-the-Art regarding our topic by performing a literature study.
- Use our contact person at the TU/e to gather additional information regarding our case.
- Create design plans that describe the different aspects of our envisioned product.
- Validate our design plans with our contact person.
- Build a prototype that focuses on one or more of our design plans.
- Validate the prototype with our contact person.
- Produce a final presentation in which we will discuss our process, design plans and prototype(s).
Clarification of the milestones
The State-of-the-Art literature study may give us insights that would require us to modify our ultimate goal within this project.
One of our team members has managed to get in touch with an Arbo-coordinator at the TU/e. We use the new Atlas building as a case to focus on an application of our concept. We would also like to ask this contact person to help validate our design plans and any prototype that we are able to build.
The design plans will encompass topics such as the user interface, user profiles, the design of electronically adjustable desks, and the design of face-tracking monitors.
Deliverables
The following deliverables will be created by the group:
- Design plans that encompass the relevant topics of our concept.
- One or more prototypes that implement our design plans.
- A final presentation in which we will discuss the design plans and the prototype(s).
Planning
Our group's planning is available for inspection here.
State-of-the-Art literature study
The State-of-the-Art literature study has its own page, which can be found at 0LAUK0 2018Q1 Group 2 - SotA Literature Study.
Design
Introduction
In the literature study we found that more office workers suffered from issues pertaining to their neck and back than to their arms or hands. We made it clear in our deliverables section that we intend to build at least one prototype that brings one or more of our design plans to life. Given this finding, the team decided that the prototype would focus on minimizing neck issues by tackling the RSI risk posed by a maladjusted computer monitor.
The prototype will be an automatically adjusting computer monitor stand. It will feature two cameras that will make use of stereoscopic video to measure the distance between the user and the monitor. It will also keep track of head movements that the user might make. These measurements are used to move the monitor when necessary. Moving the monitor is necessary when the user changes their posture in such a way that their current posture is at odds with RSI prevention guidelines (for example, sitting too close to the monitor). Actuators in the base of the monitor will allow the monitor to move to an optimal RSI preventing stance.
We found that to reduce/prevent RSI the user needs to stay in motion by changing posture. During a visit to Tijn Borghuis at the IPO building we were shown one of the height-adjustable desks that will be used in the Atlas building. Since these height-adjustable desks can work both as a sitting desk and as a standing desk, it is safe to assume that the user's posture will vary throughout the working day when using such a desk. This strengthens our case for integrating an automatically adjusting monitor (along with the fact that there are many manually adjustable monitor arms already on the market).
The team started out by doing research into actuators suitable for moving the monitor (strong enough to carry the weight of a monitor, silent enough not to annoy/cause hearing loss for the user). The team also looked into current face tracking technologies. While investigating current software libraries we came across the Open Source Computer Vision Library, OpenCV, which features over 2500 optimized algorithms for computer vision (OpenCV team, n.d.).
For now a port of OpenCV (Borenstein, 2013) to the Processing programming language (Processing Foundation, n.d.) seems highly interesting, as this software is readily available and in personal testing we found that its demos worked straight away. However, testing stereoscopy will have to wait until we have the required camera equipment.
Regarding the user experience, the team has to work out what the threshold will be for moving the monitor. We do not want the monitor to move with every movement of the user, as some of these movements have nothing to do with looking at the monitor (for instance, if the user looks down to read something from their paper notes, the monitor should not move in this situation). We also have to look into minimizing privacy concerns. Furthermore, we could measure the variation in user posture over time using logs of the face tracking software.
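One way to implement such a threshold is a dead zone combined with a hold time, so that brief glances away from the monitor (such as reading paper notes) do not trigger movement. A minimal Python sketch, with purely illustrative threshold values rather than figures from our design plans:

```python
class MoveDecider:
    """Decide when the monitor should move: the user's distance must
    stay outside a dead zone around the target for a number of
    consecutive frames before a move is requested.

    All default values below are illustrative assumptions.
    """
    def __init__(self, target_cm=65.0, dead_zone_cm=10.0, hold_frames=30):
        self.target = target_cm
        self.dead_zone = dead_zone_cm
        self.hold = hold_frames
        self.count = 0

    def update(self, distance_cm):
        # Count consecutive frames with the user outside the
        # comfortable range...
        if abs(distance_cm - self.target) > self.dead_zone:
            self.count += 1
        else:
            self.count = 0
        # ...and only request a move once the change has persisted.
        return self.count >= self.hold

decider = MoveDecider(hold_frames=3)
print([decider.update(d) for d in [50, 50, 80, 40, 40, 40]])
# → [False, False, True, True, True, True]
```

With a camera running at 30 frames per second, `hold_frames=30` would correspond to roughly one second of sustained posture change before the monitor reacts; logging the same measurements over time would also give us the posture-variation data mentioned above.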
References
- OpenCV team. (n.d.). About. Retrieved from [1]
- The Processing Foundation. (n.d.). Processing. Retrieved from [2]
- Borenstein, G. (2013). OpenCV for Processing [software library]. Retrieved from [3]
Literature study
Our exact findings are available on the 0LAUK0 2018Q1 Group 2 - Design Plans Research page.
Prototype
Introduction
In tandem with creating the design plans, the team is also working on a functional prototype of an automated RSI-preventing computer monitor stand. The goal is to create a prototype that comes as close as possible to our ideal design plan while taking budget and time limitations into account, given that we want to test the functionality of the system. The first test plan the team devised focuses on whether the developed prototype meets the developed design plans. During week 5 the team will investigate how user satisfaction could be measured, and how a test plan for such research could be set up.
Testing
Main page: 0LAUK0 2018Q1 Group 2 - Prototype Functionality Testing