Retake Embedded Motion Control 2018 Nr1
Latest revision as of 16:33, 17 August 2018
Retake Albu, T.F. (486100, 19992109)
Requirements:
R1. The robot will start by searching for the POI.
R2. Once POI is found, the robot will follow POI.
R3. The robot will filter out another person, even if it comes close to POI, but not between the POI and the robot.
R4. If the contact with the POI is lost, the robot will search for POI with the same procedure as at start.
Assumptions:
A1. POI will move with a speed less than 0.5m/s.
A2. Two legs of POI will always be visible to the robot.
Architectural Design Decisions:
ADD1. We organize the software with two goals: detect POI, and follow POI. The robot will start in detection mode, and as soon as the POI has been found, it goes to follow mode. If the contact with POI is lost, the robot goes back to detection mode. Within the follow POI goal, we distinguish two sub-goals: navigating to a certain "watching" position behind the POI, and aligning the robot to face the POI. The robot will go to the watching position which gives it the best view of the POI movements. If POI is still, the robot will just stay there and "watch".
ADD2. We consider the necessary tasks to be: detecting the objects around the robot within a certain distance range; calculating the "watching" position and the line of the POI legs; navigating to a certain position; aligning the robot, by rotation, to a certain direction.
ADD3. The World Model consists of the position of the objects around the robot in a certain distance range, in the coordinate system of the robot.
ADD4. We consider the necessary skills to be: measuring the distance between the robot and the closest object at a given angle between -pi and pi (excluding -pi and including pi); translating the robot at a certain speed, by specifying the projection of the velocity on the coordinate axes of the robot coordinate system; rotating the robot at a certain angular speed.
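The angle convention in ADD4 (excluding -pi, including pi) can be captured in a small helper. This is an illustrative Python sketch, not part of the original software:

```python
import math

def normalize_angle(a: float) -> float:
    """Wrap an angle into the scan convention (-pi, pi]:
    -pi is excluded and mapped to pi, pi is included."""
    a = math.fmod(a, 2.0 * math.pi)
    if a <= -math.pi:
        a += 2.0 * math.pi
    elif a > math.pi:
        a -= 2.0 * math.pi
    return a
```

Any angle handed to the distance-measuring skill can first be passed through this helper so the skill never receives an out-of-range value.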
Detailed Design Decisions:
D1. In order to find the POI, the robot will scan 180 degrees two times and parse the information -- it will be able to combine information about an object that appears in both measurements. The robot shall rotate until it detects the 2 legs of the POI in the range 0.2m-1m; after that it will move to the "watching" position -- a central position at 0.4m behind the POI, facing it.
D2. The watching position is calculated as follows:
- The robot detects the 2 legs of the POI and calculates their centers of mass. The centers of mass define the line segment of the POI legs.
- The watching position is at 0.4m orthogonally from the middle of this line segment.
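The geometry of D2 can be sketched in a few lines. Note that the orthogonal offset has two solutions, one on each side of the leg segment; picking the candidate closer to the robot (at the origin of its own frame) is an assumption made here for illustration, not stated in the original design:

```python
import math

def watching_position(leg1, leg2, offset=0.4):
    """Compute the watching position: `offset` m orthogonally from the
    midpoint of the segment joining the two leg centers of mass.
    leg1, leg2: (x, y) centers of mass in the robot frame (robot at origin)."""
    mx, my = (leg1[0] + leg2[0]) / 2.0, (leg1[1] + leg2[1]) / 2.0
    dx, dy = leg2[0] - leg1[0], leg2[1] - leg1[1]
    norm = math.hypot(dx, dy)
    # Unit normal to the leg segment.
    nx, ny = -dy / norm, dx / norm
    # Two candidates, one on each side of the segment.
    cands = [(mx + offset * nx, my + offset * ny),
             (mx - offset * nx, my - offset * ny)]
    # Assumption: the candidate nearer the robot is the "behind" position.
    return min(cands, key=lambda p: math.hypot(p[0], p[1]))
```

For example, with legs at (1, -0.1) and (1, 0.1), the midpoint is (1, 0) and the watching position comes out at (0.6, 0), i.e. 0.4m toward the robot.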
D3. If the robot detects a different number of objects than 2 (0, 1, 3 etc.) then it considers the contact with POI broken, and it starts searching for it again.
D4. In order to filter out a person walking by, the robot will consider a cone of 60 degrees, centered on the middle of the line segment of the POI legs. Everything outside the cone will be discarded.
Important Remark. Design decision D4 is wrong. During testing it turned out that the actual system cannot align perfectly, and using the cone leads to mistakes in finding the POI -- it will cut off one leg. Therefore, this decision is to be replaced by D4' presented below. Unfortunately, there was no more time to implement it.
D4'. In order to filter out a person walking by, the robot will memorize the position of the POI legs. If more objects are detected at the next scan, the robot will only consider the two located closest to the legs from the previous scan, provided they are close enough and have a similar shape -- otherwise, the connection with the POI is considered broken.
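Since D4' was never implemented, the following is only a possible sketch of the matching step, in Python. The thresholds (0.3m maximum shift, 50% relative width change) are illustrative values, not from the original design:

```python
import math

def match_legs(prev_legs, objects, max_shift=0.3, size_tol=0.5):
    """Greedy sketch of D4': match each remembered leg to the nearest
    newly detected object; accept only if the object moved less than
    `max_shift` m and its width changed by less than `size_tol` (relative).
    prev_legs, objects: lists of (x, y, width) tuples. Returns the two
    matched objects, or None when the POI contact is considered broken."""
    if len(prev_legs) != 2:
        return None
    matched, used = [], set()
    for lx, ly, lw in prev_legs:
        best, best_d = None, float("inf")
        for i, (ox, oy, ow) in enumerate(objects):
            if i in used:
                continue
            d = math.hypot(ox - lx, oy - ly)
            if d < best_d:
                best, best_d = i, d
        if best is None or best_d > max_shift:
            return None  # nearest candidate too far: contact broken
        ox, oy, ow = objects[best]
        if abs(ow - lw) > size_tol * lw:
            return None  # shape too different: contact broken
        used.add(best)
        matched.append(objects[best])
    return matched[0], matched[1]
```

A proper implementation would also consider the two legs jointly (e.g. checking that the matched pair keeps a plausible inter-leg distance), which the greedy per-leg matching above does not do.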
D5. The robot has to move and rotate to certain setpoints. In practice, it will never be able to do this with 0 error. Therefore, a maximum allowed error is defined on each coordinate, and the robot will stop when it is within the allowed range.
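The stopping criterion of D5 amounts to a per-coordinate tolerance check. A minimal sketch, with illustrative tolerance values (2cm in x and y, 0.05rad in heading); in a real implementation the heading error would additionally be wrapped into (-pi, pi] before comparison:

```python
def at_setpoint(pose, target, tol=(0.02, 0.02, 0.05)):
    """D5 stopping criterion: the robot stops once every coordinate
    (x, y, heading) is within its maximum allowed error.
    pose, target: (x, y, heading) tuples; tol: per-coordinate tolerances."""
    return all(abs(p - t) <= e for p, t, e in zip(pose, target, tol))
```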
D6. The translational encoders do not seem to work properly; most likely the robot slips on the surface. In this project only the scanner information and the rotational encoder, which seems to be alright, will be used. (The decision to use the rotational encoder also turned out to be wrong; see the test results below.)
Test Results and Possible Improvements. The robot detects the POI and follows it. However, several problems have been detected, therefore a list of problem reports (PR) and change requests (CR) is presented below.
CR/PR 1. For some strange reason, after alignment the robot does not perfectly face the middle of the line segment (of the POI legs), and this causes problems in detecting the legs afterwards -- if the POI makes a reasonably fast movement in the "wrong" direction, the robot will only see one leg and break the connection.
I believe the problem lies in the rotational encoder: it introduces too large an error, and the robot settles on the wrong angle. The rotational encoder is better than the translational ones (which I do not use); however, I should have used only the scanner information and calculated all distances and angles from there.
CR/PR 2. The detection procedure is a little sloppy: after scanning 180 degrees two times, the robot rotates back to the initial position. This simply takes time and can be avoided by a small calculation.
CR/PR 3. The filtering of a second person has to be changed entirely, see D4' above.
CR/PR 4. Further filtering of too-small or too-large objects (smaller than half a POI leg or larger than double it) can be performed by simply skipping the too small or too large contiguous segments in the scan. It is assumed here that the POI is not a pirate with a thin wooden leg :)
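This size filter is a one-liner once the scan has been grouped into contiguous segments. A sketch, assuming each segment is represented as a (start_angle, end_angle, width) tuple with the width already computed:

```python
def filter_by_size(segments, leg_width, lo=0.5, hi=2.0):
    """CR/PR 4 sketch: discard contiguous scan segments whose width is
    below half or above double the expected POI leg width.
    segments: list of (start_angle, end_angle, width) tuples."""
    return [s for s in segments if lo * leg_width <= s[2] <= hi * leg_width]
```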
CR/PR 5. While moving, the robot should perform several scans and combine the information in a smart way (e.g., by averaging). This would tolerate occasional scan errors: very rarely, the scanner misses a point in the middle of a leg; this immediately results in two detected objects, and with the current software the connection with the POI is then considered broken.
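A simple form of such fusion is a per-angle average over a handful of consecutive scans, skipping invalid readings so that a single dropout in the middle of a leg no longer splits it into two objects. A sketch, assuming the scans are equal-length lists of ranges with None marking an invalid reading:

```python
def average_scans(scans):
    """CR/PR 5 sketch: fuse several consecutive range scans by taking
    the per-angle average of the valid readings; an angle with no valid
    reading in any scan stays None."""
    fused = []
    for readings in zip(*scans):
        valid = [r for r in readings if r is not None]
        fused.append(sum(valid) / len(valid) if valid else None)
    return fused
```

More robust alternatives (median, or outlier rejection before averaging) fit the same interface; averaging is just the simplest option mentioned in the text.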
CR/PR 6. In order to improve the robustness, the robot should increase the allowed POI range to about 2m, and use the filtering presented above.
CR/PR 7. If POI disappears but a wall is detected, a wall-follower can be considered, which should bring the robot close to POI.