AutoRef Honors 2019 - Vision
= Ball tracking =
To track the ball from the drone camera we used existing code (Rosebrock, 2015) and adapted it to our needs. The code needs the range of colors that the ball shows on camera, given as a lower and an upper threshold in HSV color space.
The first thing the code does is grab a frame from the video, scale it to a standard size, and apply a blur filter to create a less detailed image. Then all pixels that do not have the color of the ball are made black, and further filters are applied to remove small regions in the image with the same color as the ball. Functions from the cv2 package of OpenCV are then used to encircle the largest remaining region with the color of the ball, and the center and radius of this circle are computed.
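A minimal sketch of this pipeline with OpenCV is shown below; the HSV bounds, frame width and filter sizes are placeholder values rather than the thresholds that were actually tuned, and the function name <code>find_ball</code> is only used for illustration.

<syntaxhighlight lang="python">
import cv2
import numpy as np

# Placeholder HSV bounds for the ball color; the real values have to be tuned.
BALL_LOWER = np.array([5, 120, 120])
BALL_UPPER = np.array([20, 255, 255])

def find_ball(frame):
    """Return ((x, y), radius) of the largest ball-colored blob, or None."""
    # Scale to a standard width and blur to suppress small details.
    width = 600
    height = int(frame.shape[0] * width / frame.shape[1])
    frame = cv2.resize(frame, (width, height))
    blurred = cv2.GaussianBlur(frame, (11, 11), 0)

    # Make every pixel outside the ball's HSV color range black.
    hsv = cv2.cvtColor(blurred, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, BALL_LOWER, BALL_UPPER)

    # Erode and dilate to remove small specks with the ball's color.
    mask = cv2.erode(mask, None, iterations=2)
    mask = cv2.dilate(mask, None, iterations=2)

    # Encircle the largest remaining region (OpenCV 4 return signature).
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    (x, y), radius = cv2.minEnclosingCircle(largest)
    return (int(x), int(y)), radius
</syntaxhighlight>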
A problem we faced was that the ball was sometimes detected in an incorrect position due to color flickering. Since the ball should appear in roughly the same region in two sequential frames, we added a feature that only looks for the ball in the region of the image where it was detected in the previous frame.
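One way to implement this, sketched below under assumed names, is to black out the color mask from the previous sketch outside a window around the previous detection before the contour search is run; the window half-size <code>margin</code> is an assumed value.

<syntaxhighlight lang="python">
import cv2
import numpy as np

def restrict_to_previous_region(mask, prev_center, margin=100):
    """Black out the color mask outside a window around the previous detection."""
    if prev_center is None:
        return mask  # no previous detection, so search the whole image
    x, y = prev_center
    window = np.zeros_like(mask)
    window[max(0, y - margin):y + margin, max(0, x - margin):x + margin] = 255
    return cv2.bitwise_and(mask, window)
</syntaxhighlight>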
= Line detection =
In contrast to the ball detection, the line detection was made to work in the simulation and still needs to be adapted to work on the actual drone. The main function used for detecting lines, or line segments to be more exact, is the HoughLinesP function from the aforementioned cv2 package, which detects straight line segments in images based on contrast in color. To keep only the line segments we are interested in, we applied a number of filters and selections, each of which is described below using an example image.
[[File:Initial.PNG|300px|]]
The first filter is a color-based filter that discards any lines that are not on the field.
[[File:Color.PNG|300px|]]
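A possible form of this filter is sketched below: a green color mask in HSV that is closed morphologically so that the white lines inside the field remain part of the field region. The HSV bounds, kernel size and the exact approach are assumptions, not the tuned values from our code.

<syntaxhighlight lang="python">
import cv2
import numpy as np

# Illustrative HSV range for the green field; the real thresholds need tuning.
FIELD_LOWER = np.array([35, 40, 40])
FIELD_UPPER = np.array([85, 255, 255])

def field_mask(image):
    """Binary mask that is white on the field and black everywhere else."""
    hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, FIELD_LOWER, FIELD_UPPER)
    # Close the gaps left by the white lines so the field becomes one region
    # to which the line detection can be restricted.
    kernel = np.ones((25, 25), np.uint8)
    return cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
</syntaxhighlight>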
Now we can use the HoughLinesP function to give us the line segments to work with.
[[File:Result.PNG|300px|]]
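The sketch below shows how such a call could look, assuming the contrast is extracted with cv2.Canny and restricted to the field mask of the previous step; all parameter values are illustrative.

<syntaxhighlight lang="python">
import cv2
import numpy as np

def detect_segments(image, region_mask):
    """Detect straight line segments inside the field region."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    edges = cv2.bitwise_and(edges, region_mask)  # ignore everything off the field
    # HoughLinesP returns an array of segments [[x1, y1, x2, y2]] or None.
    segments = cv2.HoughLinesP(edges, 1, np.pi / 180, 50,
                               minLineLength=40, maxLineGap=10)
    return [] if segments is None else [s[0] for s in segments]
</syntaxhighlight>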
We want exactly two lines to be detected for each white line, one along each edge, which implies that every detected line should have a corresponding parallel line. Using this assumption we can filter out all lines that do not have such a parallel counterpart.
[[File:Parallel.PNG|300px|]]
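A simple version of this selection, sketched below, keeps a segment only if some other segment has nearly the same orientation; the 5-degree tolerance is an assumed value.

<syntaxhighlight lang="python">
import math

def angle_of(segment):
    """Orientation of a segment in degrees, folded into [0, 180)."""
    x1, y1, x2, y2 = segment
    return math.degrees(math.atan2(y2 - y1, x2 - x1)) % 180.0

def keep_parallel(segments, tol_deg=5.0):
    """Keep only segments that have at least one nearly parallel partner."""
    kept = []
    for i, a in enumerate(segments):
        for j, b in enumerate(segments):
            if i == j:
                continue
            diff = abs(angle_of(a) - angle_of(b))
            diff = min(diff, 180.0 - diff)  # orientations wrap around at 180 degrees
            if diff < tol_deg:
                kept.append(a)
                break
    return kept
</syntaxhighlight>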
The next thing we notice is that there are often multiple detected lines close to each other where we only want a single line. We therefore compute the distance between line segments and merge those segments that lie close to each other.
[[File:Merged.PNG|300px|]]
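The sketch below merges segments greedily, using the distance between midpoints as a simple stand-in for the distance between segments; both thresholds are assumed values and <code>angle_of</code> comes from the previous sketch.

<syntaxhighlight lang="python">
import math

def merge_close(segments, max_dist=20.0, tol_deg=5.0):
    """Greedily merge roughly parallel segments whose midpoints lie close together."""
    merged = []
    used = [False] * len(segments)
    for i, a in enumerate(segments):
        if used[i]:
            continue
        group, used[i] = [a], True
        for j in range(i + 1, len(segments)):
            b = segments[j]
            if used[j]:
                continue
            diff = abs(angle_of(a) - angle_of(b))
            diff = min(diff, 180.0 - diff)
            mid_a = ((a[0] + a[2]) / 2, (a[1] + a[3]) / 2)
            mid_b = ((b[0] + b[2]) / 2, (b[1] + b[3]) / 2)
            if diff < tol_deg and math.dist(mid_a, mid_b) < max_dist:
                group.append(b)
                used[j] = True
        # Replace the group by the segment between its two most distant endpoints.
        points = [p for s in group for p in ((s[0], s[1]), (s[2], s[3]))]
        p1, p2 = max(((p, q) for p in points for q in points),
                     key=lambda pq: math.dist(pq[0], pq[1]))
        merged.append([p1[0], p1[1], p2[0], p2[1]])
    return merged
</syntaxhighlight>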