PRE2019 4 Group3
[[File:logo.png|500px|center]]


= Group Members =


{| class="wikitable" style="border-style: solid; border-width: 1px;" cellpadding="3"
{| class="wikitable" style="border-style: solid; border-width: 1px;" cellpadding="3"
|}


= Problem Statement =


Over 5 trillion pieces of plastic are currently floating in the oceans <ref name=TOC>Oceans. (2020, March 18). Retrieved April 23, 2020, from https://theoceancleanup.com/oceans/</ref>. In part, this so-called plastic soup consists of large plastics, like bags, straws, and cups. But it also contains a vast concentration of microplastics: pieces of plastic smaller than 5 mm in size <ref name=microdef>Wikipedia contributors. (2020, April 13). Microplastics. Retrieved April 23, 2020, from https://en.wikipedia.org/wiki/Microplastics</ref>.
There are five garbage patches across the globe <ref name=TOC></ref>. In the garbage patch in the Mediterranean Sea, the most prevalent microplastics were found to be polyethylene and polypropylene <ref name=microplastics>Suaria, G., Avio, C. G., Mineo, A., Lattin, G. L., Magaldi, M. G., Belmonte, G., … Aliani, S. (2016). The Mediterranean Plastic Soup: synthetic polymers in Mediterranean surface waters. Scientific Reports, 6(1). https://doi.org/10.1038/srep37551</ref>.


A study in the North Sea showed that 5.4% of the fish had ingested plastic <ref name=ingestion>Foekema, E. M., De Gruijter, C., Mergia, M. T., van Franeker, J. A., Murk, A. J., & Koelmans, A. A. (2013). Plastic in North Sea Fish. Environmental Science & Technology, 47(15), 8818–8824. https://doi.org/10.1021/es400931b</ref>.
The plastic consumed by the fish accumulates: new plastic goes into the fish, but does not come out. The buildup of plastic particles results in stress in their livers <ref name=plasticeffects>Rochman, C. M., Hoh, E., Kurobe, T., & Teh, S. J. (2013). Ingested plastic transfers hazardous chemicals to fish and induces hepatic stress. Scientific Reports, 3(1). https://doi.org/10.1038/srep03263</ref>. Besides that, fish can become entangled in the larger plastics. Thus, the plastic soup is becoming a threat to sea life.


[[File:garbage.jpg|400px|center|thumb|The locations of the five garbage patches around the globe <ref name=TOC></ref>.]]


A lot of this plastic comes from rivers. A study published in 2017 found that about 80% of plastic trash flows into the sea from 10 rivers that run through heavily populated regions. The other 20% of plastic waste enters the ocean directly <ref>Stevens, A. (2019, December 3). Tiny plastic, big problem. Retrieved May 10, 2020, from https://www.sciencenewsforstudents.org/article/tiny-plastic-big-problem</ref>, for example, trash blown from a beach or discarded from ships.


In 2019, over 200 volunteers walked along parts of the Maas and Waal <ref name=plasticsoepMaasWaal>Peels, J. (2019). Plasticsoep in de Maas en de Waal veel erger dan gedacht, vrijwilligers vinden 77.000 stukken afval. Retrieved May 6, 2020, from https://www.omroepbrabant.nl/nieuws/2967097/plasticsoep-in-de-maas-en-de-waal-veel-erger-dan-gedacht-vrijwilligers-vinden-77000-stukken-afval</ref>, and they found 77,000 pieces of litter, of which 84% was plastic. This number was higher than expected. The best way to help clean up the oceans is to first stop the influx. In order to do so, it is important to know how much waste flows from certain rivers into the ocean. At this moment there is no good monitoring of waste flow in rivers; usually everything is counted by hand.


In this project, a contribution will be made to gathering information on the litter flowing through the river Maas, specifically the part in Limburg. The project is carried out together with the company Noria, which has built a machine that removes waste from the water; more information on their project and their interests is provided in the 'Users' section. The device designed in this project will be placed on the Noria as an information-gathering device. It will use image recognition to identify the waste. A design will be made and the image recognition will be tested. Lastly, a method will be worked out for the device to store and communicate the gathered information.


=== Objectives ===

* Do research into the state of the art of current recognition software, river cleanup devices and neural networks.


* Create a software tool that recognizes and counts different types of waste.


* Test this software tool and form a conclusion on the effectiveness of the tool.


* Create a design for the image recognition device.


* Think of a way to save and communicate the information gathered.


=== Users ===
In this part, the different users and stakeholders will be discussed.


===== Schone Rivieren (Schone Maas) =====


Schone Rivieren is a foundation established by IVN Natuureducatie, Plastic Soup Foundation and Stichting De Noordzee <ref name='sr'>Schone Rivieren. (2020, May 19). Schone Rivieren. Retrieved June 17, 2020, from https://www.schonerivieren.org/</ref>. The foundation aims to have all Dutch rivers plastic-free by 2030. It relies on volunteers to collectively clean up the rivers and gather information. The foundation would benefit a lot from the information gathered by the Waste Identifier, because it provides useful data that can be used to optimize the river cleanup.


A few of the partners will be listed below. These give an indication of the organizations this foundation is involved with.


* ''Rijkswaterstaat (executive agency of the Ministry of Infrastructure and Water Management)'' - Rijkswaterstaat is interested in information about the amount of waste in rivers and its cleanup.

* ''Nationale Postcode Loterij (national lottery)'' - The lottery donated 1,950,000 euros to the foundation. This indicates that the problem is seen as significant. The donation helps the foundation to grow and allows it to use more resources.


* ''Tauw'' - Tauw is a consultancy and engineering agency that offers consultancy, measurement and monitoring services in the environmental field. It also works on the sustainable development of the living environment for industry and governments.


Lastly, the foundation also works together with the provinces Noord-Brabant, Gelderland, Limburg, and Utrecht.


===== Rijkswaterstaat =====


Rijkswaterstaat is the executive agency of the Ministry of Infrastructure and Water Management, as mentioned before <ref name='rws'>Rijkswaterstaat. (2020, June 12). Rijkswaterstaat. Retrieved June 17, 2020, from https://www.rijkswaterstaat.nl/</ref>. This means that it is the part of the government that is responsible for the rivers of the Netherlands. It is also the biggest source of data regarding rivers and all water-related topics in the Netherlands. Independent researchers can request data from its database, which makes Rijkswaterstaat a good user, since this project could add important data to that database. Rijkswaterstaat also funds projects, which can prove helpful if the concept that is worked out in this project is ever realized.


===== RanMarine Technology (WasteShark) =====


RanMarine Technology is a company that specializes in the design and development of industrial autonomous surface vessels (ASVs) for ports, harbors and other marine and water environments. The company is known for the WasteShark. This device floats on the water surface of rivers, ports and marinas to collect plastics, bio-waste and other debris <ref name='ranmarine'>WasteShark ASV | RanMarine Technology. (2020, February 27). Retrieved May 2, 2020, from https://www.ranmarine.io/</ref>. It currently operates at coasts, in rivers and in harbors around the world, including the Netherlands. The idea is to collect the plastic waste before a tide takes it out into the deep ocean, where the waste is much harder to collect.


[[File:wasteshark.jpg|400px|center|thumb|The WasteShark in action <ref name="ranmarine"></ref>.]]


WasteSharks can collect 200 liters of trash at a time before having to return to an on-land unloading station, where they also charge. The WasteShark has no carbon emissions, operating on solar power and batteries. The batteries last 8-16 hours. Both an autonomous model and a remote-controlled model are available <ref name="ranmarine"></ref>. The autonomous model is even able to collaborate with other WasteSharks in the same area, so they can make decisions based on shared knowledge <ref name="cordis"></ref>. For example, when one WasteShark senses that it is filling up very quickly, others can join it, since there is probably a lot of plastic waste in that area.
The autonomous WasteShark detects floating plastic in its path using laser imaging detection and ranging (LIDAR) technology. This means the WasteShark sends out a signal and measures the time it takes until a reflection is detected <ref name="lidar">Wikipedia contributors. (2020, May 2). Lidar. Retrieved May 2, 2020, from https://en.wikipedia.org/wiki/Lidar</ref>. From this, the software can figure out the distance of the object that caused the reflection. The WasteShark can then decide to approach the object, or to stop or back up a little in case the object is coming closer <ref name="functions"></ref>; this is probably for self-protection. The design of the WasteShark makes it so that plastic waste can go in easily, but can hardly get out. The only moving parts of the design are two thrusters which propel the WasteShark forward or backward <ref name="cordis">CORDIS. (2019, March 11). Marine Litter Prevention with Autonomous Water Drones. Retrieved May 2, 2020, from https://cordis.europa.eu/article/id/254172-aquadrones-remove-deliver-and-safely-empty-marine-litter</ref>. This means that the design is very robust, which is important in the environment it is designed to work in.
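As a simple illustration of this time-of-flight principle: the distance follows from half the round-trip time of the light pulse. The sketch below is illustrative only and is not code from RanMarine.

<syntaxhighlight lang="python">
# Time-of-flight: a LIDAR pulse travels to the object and back, so the
# distance is half the round trip travelled at the speed of light.
SPEED_OF_LIGHT = 299_792_458  # meters per second

def distance_from_echo(round_trip_time_s: float) -> float:
    """Distance to the reflecting object, given the pulse's round-trip time."""
    return SPEED_OF_LIGHT * round_trip_time_s / 2

# An echo detected after roughly 33 nanoseconds corresponds to an object
# about 5 meters away.
print(distance_from_echo(33.4e-9))
</syntaxhighlight>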


The fully autonomous version of the WasteShark can also simultaneously collect water quality data, scan the seabed to chart its shape, and filter chemicals from the water <ref name="functions"></ref>. These extra measurement devices and gadgets are offered as add-ons. To perform autonomously, this design also has a mission-planning ability. In the future, the device should even be able to construct a predictive model of where trash collects in the water <ref name="cordis"></ref>. The information provided by the Waste Identifier could be used by RanMarine Technology in the future to guide the WasteShark to areas with a large amount of litter.
 
===== Albatross =====
 
A second device that focuses on collecting datasets of microplastics in rivers and oceans is the Albatross from the company Pirika Inc. <ref name='Albatross'>Albatross, floating microplastic database, from https://en.opendata.plastic.research.pirika.org/</ref>. They do this by collecting water samples, which are analysed with microscopes afterwards. The microplastics are collected using a plankton net with mesh diameters of 0.1 or 0.3 mm. However, the device does not operate or navigate on its own; it is a static measurement. Such a plankton net could be added to the WasteShark to focus on microplastics instead of macroplastics.


===== Noria =====
Noria focuses on the development of innovative methods and techniques to tackle the plastic waste problem in the water. They focus on tackling this problem from the moment the plastic ends up in the water until it reaches the sea <ref name=noria>Noria - Schonere wateren door het probleem bij de bron aan te pakken. (2020, January 27). Retrieved May 21, 2020, from https://nlinbusiness.com/steden/munchen/interview/noria-schonere-wateren-door-het-probleem-bij-de-bron-aan-te-pakken-ZG9jdW1lbnQ6LUx6YXdoalp2cGpvcEVXbVZYaFI=</ref>. In the figure below, the system of Noria can be seen. It works with a large rotating mill, of which the blades consist of sieves. As the blades rotate, driven by an electric motor, macroplastics and other debris are removed from the top layer of the water. Eventually the waste ends up in the middle of the machine, where it falls into a storage bin. Via Rijkswaterstaat, contact has been made with the founder and owner of Noria, Rinze de Vries, who is interested in working together on this project. Therefore, it was decided to apply an image recognition system on the Noria system to detect the amount and type of waste that is collected.


[[File:noria.jpg|500px|center|thumb|System of Noria in action <ref name=noria></ref>.]]


A pilot has been executed with the Noria, aimed at testing a plastic-catch system in the lock of Borgharen. The following conclusions can be drawn from this pilot:


* More than 95% of the waste released into the lock was taken out of the water with the Noria system. This applies to plastic waste as well as organic waste with a size of 10 to 700 mm.


* At this moment, it is quite a challenge to drain the waste from the system.
=== Requirements ===

For the Waste Identifier, a number of requirements has been set; these are listed below. In order to make the requirements concrete and relevant, potential users were contacted. One of them, Rijkswaterstaat, responded to the request and allowed an interview with one of their employees, Ir. Brinkhof, a project manager specialized in the region of the Maas with insight into all projects and maintenance there. Another interview was conducted with Ir. Rinze de Vries, the owner of Noria. Both interviews can be found at the end of this page, in the section 'Conducted interviews'. Based on these interviews, the following requirements have been set.
==== Requirements for the Software ====

* The program should be able to identify and classify different types of waste;

* The program should be able to count the amount of each waste type that flows into the Noria;

* The program should be able to identify and count waste in the water correctly at least 90% of the time;

* Data should be converted into information;

* The same piece of waste should not be counted multiple times; the same 90% correctness threshold applies here. One possible approach is sketched below.
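How double counting could be avoided is not prescribed by these requirements. A minimal sketch of one possible approach is given below: detections (bounding boxes) in the current video frame are only counted when they do not overlap a detection from the previous frame. The function names and the 0.5 threshold are illustrative assumptions, not part of the requirements.

<syntaxhighlight lang="python">
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda box: (box[2] - box[0]) * (box[3] - box[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def count_new_detections(prev_boxes, curr_boxes, threshold=0.5):
    """Count boxes in the current frame that match no box from the previous frame."""
    return sum(
        all(iou(curr, prev) < threshold for prev in prev_boxes)
        for curr in curr_boxes
    )
</syntaxhighlight>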


==== Requirements for the Design ====


* The design should be weatherproof;


* The design should operate whenever the Noria is operating;


* The design should be robust, so it should not be damaged easily;
=== Approach ===


For the planning, a Gantt chart is created with the most important goals and subgoals that need to be tackled. The group is split into people who create the design and applications of the Waste Identifier, and people who work on the creation of the neural network. The overall view of the planning is that in the first two weeks, a lot of research has to be done on, among other things, the problem statement, the users and the current technology. In the second week, more information about different types of neural networks and the working of different layers should be investigated to gain more knowledge. This could also mean installing multiple packages or programs, which need time to be tested. During this second week, a dataset should be found or created that can be used to train the model. If it cannot be found online and thus has to be created, this will take much more than one week; however, it is hoped to be finished after the third week. After week 5, an idea of the design should be elaborated with the use of drawings or digital visualizations. All candidate neural networks should also be elaborated and tested, so that in week 8 conclusions can be drawn about the best-working neural network. This means that in week 8, the wiki page can be finished with a conclusion and discussion about the neural network that should be used and about the working of the device. Finally, week 9 is used to prepare for the presentation.


The activities are subdivided between the neural network and image recognition on the one hand, and the design of the device on the other. Kevin and Lotte will work on the design of the device, and Menno, Marijn and Dennis will work on the neural networks.


[[File:gannt.png|800px|center|thumb|Gantt chart with the project planning.]]


=== Milestones ===

{| class="wikitable" style="border-style: solid; border-width: 1px;" cellpadding="3"
! Week
! Milestone
|-
| 1 (April 20th till April 26th)
| Gather information and knowledge about the chosen topic.
|-
| 2 (April 27th till May 3rd)
| Further research on different types of neural networks and having a working example of a neural network.
|-
| 3 (May 4th till May 10th)
|-
| 4 (May 11th till May 17th)
| First findings of correctness of different neural networks and tests of different types of neural networks.
|-
| 5 (May 18th till May 24th)
| Conclusion of the best working neural network and making the final visualisation of the design.
|-
| 6 (May 25th till May 31st)
| First set-up of the wiki page with the found conclusions of the neural networks and the design, with correct visualisation of the findings.
|-
| 7 (June 1st till June 7th)
| Creation of the final wiki page.
|-
| 8 (June 8th till June 14th)
| Presentation and visualisation of the final presentation.
|}


= State-of-the-Art =


=== Quantifying Waste ===


Plastic debris in rivers has been quantified before in three ways <ref name="counting">Emmerik, T., & Schwarz, A. (2019). Plastic debris in rivers. WIREs Water, 7(1). https://doi.org/10.1002/wat2.1398</ref>: first, by quantifying the sources of plastic waste; second, by quantifying plastic transport through modelling; and lastly, by quantifying plastic transport through observations. The last one is most in line with what will be done in this project. No uniform method for counting plastic debris in rivers exists, so several plastic monitoring studies each devised their own way to do so. The methods can be divided into 5 different subcategories <ref name="counting"></ref>:
2. Active sampling: Collecting samples from riverbanks, beaches, or from a net hanging from a bridge or a boat. This method does not only quantify the plastic transport, it also qualifies it - since it is possible to inspect what kinds of plastics are in the samples, how degraded they are, how large, etc. This method works mainly in the top layer of the river. The area of the riverbed can be inspected by taking sediment samples, for example using a fish fyke <ref>Morritt, D., Stefanoudis, P. V., Pearce, D., Crimmen, O. A., & Clark, P. F. (2014). Plastic in the Thames: A river runs through it. Marine Pollution Bulletin, 78(1–2), 196–200. https://doi.org/10.1016/j.marpolbul.2013.10.035</ref>.


3. Passive sampling: Collecting samples from debris accumulations around existing infrastructure. In the few cases where infrastructure to collect plastic debris is already in place, it is just as easy to use them to quantify and qualify the plastic that gets caught. This method does not require any extra investment. It is, like active sampling, more focused on the top layer of the plastic debris, since the infrastructure is, too.


4. Visual observations: Watching plastic float by from on top of a bridge and counting it. This method is very easy to execute, but it is less certain than other methods, due to observer bias, and due to small plastics in a river possibly not being visible from a bridge. This method is adequate for showing seasonal changes in plastic quantities.

5. Citizen science: Using the public as a means to quantify plastic debris. Several apps have been made to allow lots of people to participate in ongoing research for classifying plastic waste. This method gives insight into the transport of plastic on a global scale.


===== Automatic Visual Observations =====
Cameras can be used to improve visual observations. One study did such a visual observation on a beach, using drones that flew about 10 meters above it. Based on input from cameras on the UAVs, plastic debris could be identified, located and classified (by a machine learning algorithm) <ref>Martin, C., Parkes, S., Zhang, Q., Zhang, X., McCabe, M. F., & Duarte, C. M. (2018). Use of unmanned aerial vehicles for efficient beach litter monitoring. Marine Pollution Bulletin, 131, 662–673. https://doi.org/10.1016/j.marpolbul.2018.04.045</ref>. Similar systems have also been used to identify macroplastics on rivers.


Another study made a deep learning algorithm (a CNN - to be exact, a "Visual Geometry Group-16 (VGG16) model, pre-trained on the large-scale ImageNet dataset" <ref name="classify">Kylili, K., Kyriakides, I., Artusi, A., & Hadjistassou, C. (2019). Identifying floating plastic marine debris using a deep learning approach. Environmental Science and Pollution Research, 26(17), 17091–17099. https://doi.org/10.1007/s11356-019-05148-4</ref>) that was able to classify different types of plastic from images. These images were taken from above the water, so this study also focused on the top layer of plastic debris.
[[File:classification.png|600px|center|thumb|The plastic debris in these images was automatically classified by a deep learning algorithm.]]


The algorithm had a training set accuracy of 99%. But that does not say much about the performance of the algorithm, because it only says how well it categorizes the training images, which it has seen many times before. To find out the true performance of an algorithm, it has to look at images it has never seen before (so, images that are not in the training set). The algorithm recognized plastic debris in 141 out of 165 brand-new images that were fed into the system <ref name="classify"></ref>. That leads to a validation accuracy of 86%. It was concluded that this shows the algorithm is quite good at what it should do.

Their improvement points are that the accuracy could be even higher and that more kinds of plastic could be distinguished, while not letting the computational time become too long.
 
=== Image Recognition ===
Over the past decade or so, great steps have been made in developing deep learning methods for image recognition and classification <ref name="ImRecNow">Seif, G. (2018, January 21). Deep Learning for Image Recognition: why it’s challenging, where we’ve been, and what’s next. Retrieved April 22, 2020, from https://towardsdatascience.com/deep-learning-for-image-classification-why-its-challenging-where-we-ve-been-and-what-s-next-93b56948fcef</ref>. In recent years, convolutional neural networks (CNNs) have shown significant improvements on image classification <ref name="DLImage">Lee, G., & Fujita, H. (2020). Deep Learning in Medical Image Analysis. New York, United States: Springer Publishing.</ref>. It has been demonstrated that representation depth is beneficial for classification accuracy <ref name="deepCnn">Simonyan, K., & Zisserman, A. (2015, January 1). Very deep convolutional networks for large-scale image recognition. Retrieved April 22, 2020, from https://arxiv.org/pdf/1409.1556.pdf</ref>. Another method is the use of VGG networks, which are known for their state-of-the-art performance in image feature extraction. Their setup consists of repeated patterns of 1, 2 or 3 convolution layers and a max-pooling layer, finishing with one or more dense layers. The convolutional layer transforms the input data to detect patterns, edges and other characteristics in order to be able to correctly classify the data. The main parameters with which a convolutional layer can be varied are its activation function and kernel size <ref name="deepCnn"></ref>.
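A minimal sketch of how such a pre-trained VGG network can be reused for a new classification task in Keras is shown below; the input size, the size of the dense layer and the binary output are illustrative assumptions, not settings taken from the cited studies.

<syntaxhighlight lang="python">
# Use VGG16, pre-trained on ImageNet, as a fixed feature extractor and
# train only a small classifier head on top of it.
from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models

base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # keep the pre-trained convolutional features fixed

model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # binary output: waste / no waste
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
</syntaxhighlight>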


There are still limitations to current image recognition technologies. First of all, most methods are supervised, which means they need big amounts of labelled training data that need to be put together by someone <ref name="ImRecNow"></ref>. This can be solved by using unsupervised instead of supervised deep learning. For unsupervised learning, instead of large databases, only some labels are needed to make sense of the world. Currently, there are no unsupervised methods that outperform supervised ones. This is because supervised learning can better encode the characteristics of a set of data. The hope is that in the future unsupervised learning will provide more general features so any task can be performed <ref name="Unsupervised">Culurciello, E. (2018, December 24). Navigating the Unsupervised Learning Landscape - Intuition Machine. Retrieved April 22, 2020, from https://medium.com/intuitionmachine/navigating-the-unsupervised-learning-landscape-951bd5842df9</ref>. Another problem is that sometimes small distortions can cause a wrong classification of an image <ref name="ImRecNow"></ref> <ref name="distSens">Bosse, S., Becker, S., Müller, K.-R., Samek, W., & Wiegand, T. (2019). Estimation of distortion sensitivity for visual quality prediction using a convolutional neural network. Digital Signal Processing, 91, 54–65. https://doi.org/10.1016/j.dsp.2018.12.005</ref>. This can already be caused by shadows on an object, which cause color and shape differences <ref name="RBrooks">Brooks, R. (2018, July 15). [FoR&AI] Steps Toward Super Intelligence III, Hard Things Today – Rodney Brooks. Retrieved April 22, 2020, from http://rodneybrooks.com/forai-steps-toward-super-intelligence-iii-hard-things-today/</ref>. A different pitfall is that the output feature maps are sensitive to the specific location of the features in the input. One approach to address this sensitivity is to use a max-pooling layer. Max-pooling layers reduce the number of pixels in the output of the previously applied convolutional layer(s); the pool size determines the number of pixels from the input that is turned into one pixel of the output. Using this has the effect of making the resulting downsampled feature maps more robust to changes in the position of the feature in the image <ref name="deepCnn"></ref>.
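As a small illustration of this downsampling (a sketch with made-up numbers, not code from the cited studies), a 2x2 max-pooling operation keeps only the strongest response in every window, so a feature that shifts by a pixel often still produces the same output:

<syntaxhighlight lang="python">
import numpy as np

def max_pool_2x2(feature_map):
    """2x2 max pooling: halve each spatial dimension, keep the maximum per window."""
    h, w = feature_map.shape
    trimmed = feature_map[:h - h % 2, :w - w % 2]  # drop odd edge rows/columns
    return trimmed.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

feature_map = np.array([[1, 3, 2, 0],
                        [4, 2, 1, 1],
                        [0, 1, 5, 2],
                        [2, 0, 1, 3]])
print(max_pool_2x2(feature_map))
# [[4 2]
#  [2 5]]
</syntaxhighlight>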


=== Neural Networks ===


Neural networks are a set of algorithms that are designed to recognize patterns. They interpret sensory data through machine perception, labeling or clustering raw input. The patterns they recognize are numerical, contained in vectors. Real-world data, such as images, sound, text or time series, needs to be translated into such numerical data before it can be processed <ref name=neuralbeginner>Nicholson, C. (n.d.). A Beginner’s Guide to Neural Networks and Deep Learning. Retrieved April 22, 2020, from https://pathmind.com/wiki/neural-network</ref>.
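As a small illustration of this translation step, the sketch below turns an image into a normalized numerical array; the file name and image size are placeholders.

<syntaxhighlight lang="python">
import numpy as np
from PIL import Image

image = Image.open("waste_sample.jpg").resize((128, 128))  # placeholder file
pixels = np.asarray(image) / 255.0  # array of shape (128, 128, 3), values in [0, 1]
vector = pixels.reshape(-1)         # flattened into one numerical vector
</syntaxhighlight>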


There are different types of neural networks <ref name=typesneural>Cheung, K. C. (2020, April 17). 10 Use Cases of Neural Networks in Business. Retrieved April 22, 2020, from https://algorithmxlab.com/blog/10-use-cases-neural-networks/#What_are_Artificial_Neural_Networks_Used_for</ref>:


==== Convolutional Neural Networks ====
In this project, the neural network should retrieve data from images. Therefore, a convolutional neural network could be used. Convolutional neural networks are generally composed of the following layers <ref name=convolution>Amidi, A., & Amidi, S. (n.d.). CS 230 - Convolutional Neural Networks Cheatsheet. Retrieved April 22, 2020, from https://stanford.edu/%7Eshervine/teaching/cs-230/cheatsheet-convolutional-neural-networks</ref>:


[[File:CNN.png|800px|center|thumb|Layers in a convolutional neural network.]]


The convolutional layer transforms the input data to detect patterns, edges and other characteristics in order to be able to correctly classify the data. The main parameters with which a convolutional layer can be varied are the activation function and the kernel size. Max-pooling layers reduce the number of pixels in the output of the previously applied convolutional layer(s); the pool size determines the number of pixels from the input that is turned into one pixel of the output. Max pooling is applied to reduce overfitting. It also addresses the problem that output feature maps are sensitive to the location of the features in the input: it has the effect of making the resulting downsampled feature maps more robust to changes in the position of the feature in the image. Fully connected layers connect all input values via separate connections to an output channel. Since this project deals with a binary problem, the final fully connected layer will consist of 1 output.

Stochastic gradient descent (SGD) is the most common and basic optimizer used for training a CNN <ref name=CNNrad>Yamashita, Rikiya & Nishio, Mizuho & Do, Richard & Togashi, Kaori. (2018). Convolutional neural networks: an overview and application in radiology. Insights into Imaging. 9. 10.1007/s13244-018-0639-9</ref>. It optimizes the model parameters based on the gradient information of the loss function. However, many other optimizers have been developed that could give better results. Momentum keeps the history of the previous update steps and combines this information with the next gradient step to reduce the effect of outliers <ref name=gdl>Qian, N. (1999, January 12). On the momentum term in gradient descent learning algorithms. - PubMed - NCBI. Retrieved April 22, 2020, from https://www.ncbi.nlm.nih.gov/pubmed/12662723</ref>. RMSprop also tries to keep the updates stable, but in a different way than momentum; it also takes away the need to adjust the learning rate <ref name=generalization>Hinton, G., Srivastava, N., Swersky, K., Tieleman, T., & Mohamed, A. (2016, December 15). Neural Networks for Machine Learning: Overview of ways to improve generalization [Slides]. Retrieved from http://www.cs.toronto.edu/~hinton/coursera/lecture9/lec9.pdf</ref>. Adam takes the ideas behind both momentum and RMSprop and combines them into one optimizer <ref name=stoch_optim>Kingma, D. P., & Ba, J. (2015). Adam: A Method for Stochastic Optimization. Presented at the 3rd International Conference for Learning Representations, San Diego.</ref>. Nesterov momentum is a smarter version of the momentum optimizer that looks ahead and adjusts the momentum accordingly <ref name=convergence>Nesterov, Y. (1983). A method for unconstrained convex minimization problem with the rate of convergence o(1/k^2).</ref>. Nadam is an optimizer that combines RMSprop and Nesterov momentum <ref name=Nesterovmomentum>Dozat, T. (2016). Incorporating Nesterov Momentum into Adam. Retrieved from https://openreview.net/pdf?id=OM0jvwB8jIp57ZJjtNEZ</ref>.
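A minimal sketch of this layer pattern, and of how the optimizers discussed above can be swapped in, is given below using Keras; all layer sizes and learning rates are illustrative assumptions rather than the settings that will be used in the project.

<syntaxhighlight lang="python">
from tensorflow.keras import layers, models, optimizers

model = models.Sequential([
    layers.Conv2D(16, (3, 3), activation="relu", input_shape=(128, 128, 3)),
    layers.MaxPooling2D(pool_size=(2, 2)),  # downsample: robust to small shifts
    layers.Conv2D(32, (3, 3), activation="relu"),
    layers.MaxPooling2D(pool_size=(2, 2)),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),    # fully connected layer
    layers.Dense(1, activation="sigmoid"),  # one output for the binary problem
])

# Any of the optimizers discussed above can be plugged in here:
optimizer = optimizers.SGD(learning_rate=0.01, momentum=0.9, nesterov=True)
# optimizer = optimizers.RMSprop(learning_rate=0.001)
# optimizer = optimizers.Adam(learning_rate=0.001)
# optimizer = optimizers.Nadam(learning_rate=0.001)

model.compile(optimizer=optimizer, loss="binary_crossentropy",
              metrics=["accuracy"])
</syntaxhighlight>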


=== Image Recognition ===
Over the past decade or so, great steps have been made in developing deep learning methods for image recognition and classification <ref name="ImRecNow">Seif, G. (2018, January 21). Deep Learning for Image Recognition: why it’s challenging, where we’ve been, and what’s next. Retrieved April 22, 2020, from https://towardsdatascience.com/deep-learning-for-image-classification-why-its-challenging-where-we-ve-been-and-what-s-next-93b56948fcef</ref>. In recent years, convolutional neural networks (CNNs) have shown significant improvements on image classification <ref name="DLImage">Lee, G., & Fujita, H. (2020). Deep Learning in Medical Image Analysis. New York, United States: Springer Publishing.</ref>. It has been demonstrated that representation depth is beneficial for classification accuracy <ref name ="deepCnn">Simonyan, K., & Zisserman, A. (2015, January 1). Very deep convolutional networks for large-scale image recognition. Retrieved April 22, 2020, from https://arxiv.org/pdf/1409.1556.pdf</ref>. Another method is the use of VGG networks, which are known for their state-of-the-art performance in image feature extraction. Their setup consists of repeated patterns of 1, 2 or 3 convolution layers and a max pooling layer, finishing with one or more dense layers. The convolutional layer transforms the input data to detect patterns, edges and other characteristics in order to correctly classify the data. The main parameters with which a convolutional layer can be tuned are its activation function and kernel size <ref name ="deepCnn">Simonyan, K., & Zisserman, A. (2015, January 1). Very deep convolutional networks for large-scale image recognition. Retrieved April 22, 2020, from https://arxiv.org/pdf/1409.1556.pdf</ref>.
 
There are still limitations to current image recognition technologies. First of all, most methods are supervised, which means they need large amounts of labelled training data that have to be put together by hand <ref name="ImRecNow">Seif, G. (2018, January 21). Deep Learning for Image Recognition: why it’s challenging, where we’ve been, and what’s next. Retrieved April 22, 2020, from https://towardsdatascience.com/deep-learning-for-image-classification-why-its-challenging-where-we-ve-been-and-what-s-next-93b56948fcef</ref>. This could be solved by using unsupervised instead of supervised deep learning. Unsupervised learning does not require large labelled databases; only a few labels are needed to make sense of the data. Currently, there are no unsupervised methods that outperform supervised ones, because supervised learning can better encode the characteristics of a dataset. The hope is that in the future unsupervised learning will provide more general features, so that any task can be performed <ref name = "Unsupervised">Culurciello, E. (2018, December 24). Navigating the Unsupervised Learning Landscape - Intuition Machine. Retrieved April 22, 2020, from https://medium.com/intuitionmachine/navigating-the-unsupervised-learning-landscape-951bd5842df9</ref>. Another problem is that small distortions can sometimes cause a wrong classification of an image <ref name="ImRecNow">Seif, G. (2018, January 21). Deep Learning for Image Recognition: why it’s challenging, where we’ve been, and what’s next. Retrieved April 22, 2020, from https://towardsdatascience.com/deep-learning-for-image-classification-why-its-challenging-where-we-ve-been-and-what-s-next-93b56948fcef</ref> <ref name ="distSens">Bosse, S., Becker, S., Müller, K.-R., Samek, W., & Wiegand, T. (2019). Estimation of distortion sensitivity for visual quality prediction using a convolutional neural network. Digital Signal Processing, 91, 54–65. https://doi.org/10.1016/j.dsp.2018.12.005</ref>. This can already be caused by shadows on an object, which change its apparent color and shape <ref name="RBrooks">Brooks, R. (2018, July 15). [FoR&AI] Steps Toward Super Intelligence III, Hard Things Today – Rodney Brooks. Retrieved April 22, 2020, from http://rodneybrooks.com/forai-steps-toward-super-intelligence-iii-hard-things-today/</ref>. A different pitfall is that the output feature maps are sensitive to the specific location of the features in the input. One approach to address this sensitivity is to use a max pooling layer. Max pooling layers reduce the number of pixels in the output of the previously applied convolutional layer(s); the pool size determines the number of input pixels that are turned into one output pixel. Using this has the effect of making the resulting downsampled feature maps more robust to changes in the position of a feature in the image <ref name ="deepCnn">Simonyan, K., & Zisserman, A. (2015, January 1). Very deep convolutional networks for large-scale image recognition. Retrieved April 22, 2020, from https://arxiv.org/pdf/1409.1556.pdf</ref>.
 
Specific research has been carried out into image recognition and classification of fish in the water. For example, one study used state-of-the-art object detection to detect, localize and classify fish species in visual data obtained by underwater cameras. The initial goal was to recognize herring and mackerel, and the work was specifically developed for poorly conditioned waters. Experiments on a dataset obtained at sea showed a successful detection rate of 66.7[%] and a successful classification rate of 89.7[%] <ref name = "fishdetec"> Christensen, J. H., Mogensen, L. V., Galeazzi, R., & Andersen, J. C. (2018). Detection, Localization and Classification of Fish and Fish Species in Poor Conditions using Convolutional Neural Networks. 2018 IEEE/OES Autonomous Underwater Vehicle Workshop (AUV). https://doi.org/10.1109/auv.2018.8729798 </ref>. There are also studies into image recognition and classification of microplastics. By using computer vision to analyze the acquired images, and machine learning techniques to develop classifiers for four types of microplastics, an accuracy of 96.6[%] was achieved <ref name = "plasticdetec"> Castrillon-Santana, M., Lorenzo-Navarro, J., Gomez, M., Herrera, A., & Marín-Reyes, P. A. (2018, January 1). Automatic Counting and Classification of Microplastic Particles. Retrieved April 23, 2020, from https://www.scitepress.org/Papers/2018/67250/67250.pdf </ref>.
 
For these recognition tasks, image databases need to be found for fish and plastic. First of all, ImageNet can be used, which is a database with many pictures of different subjects. Secondly, two databases of different fish species have been found:
http://groups.inf.ed.ac.uk/f4k/GROUNDTRUTH/RECOG/
https://wiki.qut.edu.au/display/cyphy/Fish+Dataset
 
==== YOLO ====


YOLO is a deep learning algorithm that came out in May 2016. It is popular because it is very fast compared with other deep learning algorithms <ref name = yolo> Canu, S. (2019, June 27). YOLO object detection using Opencv with Python. Retrieved May 26, 2020, from https://pysource.com/2019/06/27/yolo-object-detection-using-opencv-with-python/ </ref>. YOLO takes a completely different approach than prior detection systems, which apply a model to an image at multiple locations and scales and consider high-scoring regions of the image to be detections. YOLO instead applies a single deep convolutional neural network to the full image. This network divides the image into a grid of cells, and each cell directly predicts a bounding box and an object classification <ref name = yolo3>Brownlee, J. (2019, October 7). How to Perform Object Detection With YOLOv3 in Keras. Retrieved May 29, 2020, from https://machinelearningmastery.com/how-to-perform-object-detection-with-yolov3-in-keras/</ref>. These bounding boxes are weighted by the predicted probabilities <ref name = yolo2>Redmon, J. (2019, November 15). pjreddie/darknet. Retrieved May 29, 2020, from https://github.com/pjreddie/darknet/wiki/YOLO:-Real-Time-Object-Detection</ref>.
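
As an illustration of this single-network approach, the sketch below runs a trained YOLOv3 model on one image with OpenCV's DNN module, in the spirit of the PySource tutorial cited above. The file names, the 416x416 input size and the thresholds are assumptions.

<syntaxhighlight lang="python">
# Hedged sketch of single-image YOLOv3 inference with OpenCV's DNN module; the
# file names, input size and the 0.5/0.4 thresholds are assumptions.
import cv2
import numpy as np

net = cv2.dnn.readNet("yolov3_custom.weights", "yolov3_custom.cfg")
img = cv2.imread("frame.jpg")
h, w = img.shape[:2]

# The whole image is fed to the network at once as a normalized square blob.
blob = cv2.dnn.blobFromImage(img, 1 / 255.0, (416, 416), swapRB=True, crop=False)
net.setInput(blob)
outputs = net.forward(net.getUnconnectedOutLayersNames())

boxes, confidences, class_ids = [], [], []
for output in outputs:            # one output per detection scale
    for detection in output:      # each row: box, objectness, class scores
        scores = detection[5:]
        class_id = int(np.argmax(scores))
        confidence = float(scores[class_id])
        if confidence > 0.5:
            cx, cy, bw, bh = detection[0:4] * np.array([w, h, w, h])
            boxes.append([int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)])
            confidences.append(confidence)
            class_ids.append(class_id)

# Non-maximum suppression removes overlapping duplicate boxes.
keep = cv2.dnn.NMSBoxes(boxes, confidences, 0.5, 0.4)
</syntaxhighlight>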




The newest version of YOLO is YOLO v3. It uses a variant of Darknet for training and testing. Darknet originally has 53 layers trained on ImageNet; for the task of detection, 53 more layers are stacked onto it. In total, this means that a 106-layer fully convolutional underlying architecture is used for YOLO v3. The figure below shows what the architecture of YOLO v3 looks like <ref name = yolo4> Kathuria, A. (2018, April 23). What’s new in YOLO v3? Retrieved May 29, 2020, from https://towardsdatascience.com/yolo-v3-object-detection-53fb7d3bfe6b</ref>.


[[File:YOLO_network.png|800px|Image: 800 pixels|center|thumb|YOLO network structure <ref name = yolo4> Kathuria, A. (2018, April 23). What’s new in YOLO v3? Retrieved May 29, 2020, from https://towardsdatascience.com/yolo-v3-object-detection-53fb7d3bfe6b</ref>.]]


==== LabelImg ====
The network needs to be trained on images of the objects that it should identify. These images, on which the network will be trained, need to be labeled to assign them to a certain class. This can be done with LabelImg, a graphical image annotation tool, which can be seen below. The objects need to be identified manually by drawing a rectangular box around them and assigning them a label.


[[File:Labelimg.png|650px|Image: 800 pixels|center|thumb|LabelImg.]]
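
LabelImg can export the annotations in the YOLO text format: one .txt file per image, with one line per drawn box. As a small illustration (the file name is an assumption), such a file can be read as follows:

<syntaxhighlight lang="python">
# Hedged example of reading a LabelImg YOLO-format label file; the file name is
# an assumption. Each line holds: class_id x_center y_center width height,
# with all coordinates normalized to the range [0, 1].
with open("img_001.txt") as f:
    for line in f:
        class_id, x_c, y_c, bw, bh = line.split()
        print(f"class {class_id}: centre ({float(x_c):.2f}, {float(y_c):.2f}), "
              f"size {float(bw):.2f} x {float(bh):.2f}")
</syntaxhighlight>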


In the end, the network should be able to detect the objects that it was trained on. This can be done with different input formats: photos, videos or a webcam. In the figure below, an example of the working of the network can be seen. First, the network divides the image into regions and predicts the bounding boxes and probabilities for each region. Then, these bounding boxes are weighted by the predicted probabilities.


[[File:YOLO_example2.png|650px|Image: 800 pixels|center|thumb|Example object detection <ref name = example2>Bhattarai, S. (2019, December 25). What is YOLO v2 (aka YOLO 9000)? Retrieved June 1, 2020, from https://saugatbhattarai.com.np/what-is-yolo-v2-aka-yolo-9000/</ref>.]]


= Further Exploration =
=== Location ===
Rivers are seen as a major source of debris in the oceans <ref name='plastic'>Lebreton. (2018, January 1). OSPAR Background document on pre-production Plastic Pellets. Retrieved May 3, 2020, from https://www.ospar.org/documents?d=39764</ref>. The tide has a big influence on the direction of the floating waste: during low tide the waste flows towards the sea, and during high tide it can flow across the river towards the river banks <ref name='plasticresearch'> Schone Rivieren. (2019). Wat spoelt er aan op rivieroevers? Resultaten van twee jaar afvalmonitoring aan de oevers van de Maas en de Waal. Retrieved from https://www.schonerivieren.org/wp-content/uploads/2020/05/Schone_Rivieren_rapportage_2019.pdf</ref>.
A big consequence of plastic waste in rivers, seas, oceans and on river banks is that many animals mistake plastic for food, often resulting in death. There are also economic consequences: more waste in the water makes water purification more difficult and more expensive, especially because of microplastics. In addition, cleaning up the waste in river areas costs millions a year <ref name = 'cleaningwaste'> Staatsbosbeheer. (2019, September 12). Dossier afval in de natuur. Retrieved May 3, 2020, from https://www.staatsbosbeheer.nl/over-staatsbosbeheer/dossiers/afval-in-de-natuur</ref>.


A large-scale investigation has taken place into the wash-up of waste on the banks of rivers. On the river banks of the Maas, an average of 630 pieces of waste per 100 meters of river bank was counted, of which 81[%] was plastic. Some measurement locations showed a count of more than 1200 pieces of waste per 100 meters of river bank, and can be marked as hotspots. A big concentration of these hotspots can be found on the river banks of the Maas in the south of Limburg, where a lot of waste originating from France and Belgium flows into the Dutch part of the Maas. Evidence for this is the large amount of plastic packaging with French texts. Also, in these hotspots the proportion of plastic is even higher: 89[%] instead of 81[%] <ref name='plasticresearch'> Schone Rivieren. (2019). Wat spoelt er aan op rivieroevers? Resultaten van twee jaar afvalmonitoring aan de oevers van de Maas en de Waal. Retrieved from https://www.schonerivieren.org/wp-content/uploads/2020/05/Schone_Rivieren_rapportage_2019.pdf</ref>.


The Waste Identifier should help to tackle the problem of the plastic soup at its roots: the rivers. Because of the high plastic concentration in the Maas in the south of Limburg, the image recognition module will be designed specifically for this part of the Maas. The Noria is often placed in locks to make sure it does not interfere with other water traffic; therefore, the focus will also be on those specific parts of the river Maas in the south of Limburg.


=== Waste ===
Extensive research into the amount of waste on the river banks of the Maas has been carried out <ref name='plasticresearch'> Schone Rivieren. (2019). Wat spoelt er aan op rivieroevers? Resultaten van twee jaar afvalmonitoring aan de oevers van de Maas en de Waal. Retrieved from https://www.schonerivieren.org/wp-content/uploads/2020/05/Schone_Rivieren_rapportage_2019.pdf</ref>. As explained before, waste in rivers can float into the oceans or end up on river banks. The amount of waste counted on the river banks of the Maas is therefore only part of the total amount of litter in the river, since another part flows into the ocean. Exact numbers on how much flows into the oceans are not available. However, it is certain that in the south of Limburg an average of more than 1200 pieces of waste per 100 meters of river bank of the Maas was counted, of which 89[%] was plastic.


A top 15 was made of the types of waste encountered most often. The most commonly found type of plastic is indefinable pieces of soft or hard plastic and plastic film smaller than 50 [cm], including styrofoam. These indefinable pieces also include nurdles: small plastic granules that are used as a raw material for plastic products. Again, the south of Limburg has the highest concentration of this type of waste, because there are relatively many industrial areas there. Another big part of the counted plastics is disposable plastic, often used as food and drink packaging. In total, 25[%] of all encountered plastic is disposable plastic from food and drink packaging.


Only litter that has washed up on the river banks has been counted. The Waste Identifier can help with monitoring the waste flow in the water of the rivers itself, to get a more complete view of hotspots and frequently encountered waste types.


=== Image Database ===
The CNN or YOLO network can be pre-trained on the large-scale ImageNet dataset. Due to this pre-training, the model has already learned certain image features from a large dataset. Secondly, the neural network should be trained on a database specific to this subject. This database should be randomly divided into three groups. The biggest group is the training data, which the neural network uses to find patterns and to predict the outcome of the second dataset, the validation data. Once this validation data has been analyzed, a new epoch is started. Once a final model has been created, a test dataset can be used to analyze its performance.


It is difficult to find a database that corresponds perfectly to our subject. First of all, a big dataset of plastic waste in the ocean is available <ref name ='plasticseadata'>Buffon X. (2019, May 20) Robotic Detection of Marine Litter Using Deep Visual Detection Models. Retrieved May 9, 2020, from https://ieeexplore.ieee.org/abstract/document/8793975</ref>. Furthermore, a big dataset of plastic shapes can be used; although these images are not of waste in the water, they can still be useful <ref name = 'plasticdata'> Thung G. (2017, Apr 10) Dataset of images of trash Torch-based CNN for garbage image classification. Retrieved May 9, 2020, from https://github.com/garythung/trashnet</ref>. Using image preprocessing, it could still be possible to find corresponding shapes of plastic in the pictures that the camera takes in the water. Lastly, a dataset can be created by ourselves.


= Neural Network Design =
Because of the higher frame rate that can be achieved with YOLO in comparison to a plain CNN, YOLO has been chosen as the object detection method. A dataset, which will be explained further below, is labelled and used for training and validation. The training is done using Google Colab, so that an external GPU computer, made available by Google, can be used to improve the training speed. Here, weights from Darknet, which is the framework for YOLO, are downloaded and iteratively adjusted to fit the database and reach the lowest validation loss. However, a connection to Google Colab can only be kept alive for 12 hours. Because of this, the training has been done in multiple runs, each new run restarting from the final weights of the previous one. By doing this, the validation loss has been reduced to 0.045. These weights have finally been tested on new images and videos, to verify the low loss. Also, counting software has been created. For this, it is assumed that there is a current in the water and that new waste objects only appear from the top of the frame. If an object has been detected, and no object was higher than this object in the previous frame, it is a new object and it will be counted.
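
As a hedged illustration, a training cell in a Colab notebook in the style of the PySource tutorial could look as follows; the .data and .cfg file names are assumptions and may differ from the actual notebook.

<syntaxhighlight lang="python">
# Hedged sketch of a Colab training cell in the style of the PySource tutorial;
# the .data and .cfg file names are assumptions and may differ from the actual
# notebook. The "!" prefix runs a shell command inside the Colab notebook, and
# darknet53.conv.74 contains the pre-trained Darknet weights used as a start.
!./darknet detector train data/obj.data cfg/yolov3_custom.cfg darknet53.conv.74 -dont_show
</syntaxhighlight>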
 


Although part of the code for testing and training has been obtained from the PySource blog <ref name = 'Pysource'> Sergio Canu (2020, April 1) Train YOLO to detect a custom object (online with free GPU)</ref>, this code needed to be adapted to our problem statement and number of classifications. Besides, code for counting and for changing the label numbers to our specific problem has been created. The files can be downloaded from the following GitHub link: https://github.com/mennocromwijkk/Robots_Everywhere_3. Here, the file "bottle_and_can_Train_YoloV3 .ipynb" can be used to train the model in Google Colab; the zip file "obj.zip" needs to be placed in a folder named "yolov3" in Google Drive. The file "yolo_object_detection.py" can be used for the detection of images and "real_time_yoloV2.py" for the detection of videos, where the counting software has been added.
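
The counting rule described above can be sketched as follows; the function and variable names are assumptions, and the actual script in the repository may differ.

<syntaxhighlight lang="python">
# Hedged sketch of the counting rule described above; names are assumptions.
# With the current pushing waste down through the frame, a detection is treated
# as new only if no object was above it (smaller y) in the previous frame.
def update_count(count, prev_top_ys, boxes):
    """boxes: (x, y, w, h) detections in the current frame;
    prev_top_ys: y-coordinates of box tops from the previous frame."""
    current_top_ys = []
    for (x, y, w, h) in boxes:
        current_top_ys.append(y)
        if all(prev_y >= y for prev_y in prev_top_ys):
            count += 1  # nothing was above it before: it just entered the frame
    return count, current_top_ys
</syntaxhighlight>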


==== Data Augmentation ====
 
The dataset does not contain as many images as desired. If there is not enough data, neural networks tend to over-fit to the little data there is, which is undesirable. That is why a way has to be found to increase the size of the dataset. One way to do this is data augmentation <ref name='dak1'>Nanonets. (n.d.). Data augmentation: How to use deep learning when you have limited data (Part 2). Retrieved from https://nanonets.com/blog/data-augmentation-how-to-use-deep-learning-when-you-have-limited-data-part-2/</ref><ref name='dak2'>Goyal, S. (2019, December 17). MachineX: Image Data Augmentation Using Keras. Retrieved June 19, 2020, from https://towardsdatascience.com/machinex-image-data-augmentation-using-keras-b459ef87cd22</ref>. If this is used, then not only the original images are fed into the neural network, but also slightly altered copies. Alterations include:


* Translation
* Rotation
* Flipping
* Gaussian noise, etc.


[[File:data_aug.png|600px|Image: 800 pixels|center|thumb|Different uses of data augmentation. Every image is a completely new one to the neural network.]]
 
Every altered image counts as completely new data for the neural network, which is why it is able to train using this duplicated data without over-fitting to it.


Neural networks benefit from having more data to train on, simply because the classifications become stronger with more data. But on top of that, neural networks that are trained on translated, resized or rotated images are much better at classifying objects that are slightly altered in any way (this is called invariance). In the case of waste in water, training a neural network to be invariant makes a lot of sense: there is no saying whether a piece of waste will be upside-down, slightly damaged, not fully visible, etc. A data augmentation code has been written and can be implemented once the dataset is final. It can be found via the following GitHub link: https://github.com/mennocromwijkk/Robots_Everywhere_3, at "data_aug.py".
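
As a minimal sketch of such augmentation in Keras, along the lines of the cited sources (the parameter values are assumptions; the project's data_aug.py may differ):

<syntaxhighlight lang="python">
# Minimal sketch of Keras-based augmentation along the lines of the cited
# sources; the parameter values are assumptions and the project's data_aug.py
# may differ. flow_from_directory expects one subfolder per class.
from tensorflow.keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(
    rotation_range=10,       # slight random rotation, in degrees
    width_shift_range=0.1,   # slight horizontal translation
    height_shift_range=0.1,  # slight vertical translation
    zoom_range=0.1,          # slight resizing
    horizontal_flip=True,    # random flipping
)

# Every altered image counts as new data for the network.
batches = datagen.flow_from_directory("dataset", target_size=(500, 500),
                                      batch_size=32)
</syntaxhighlight>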


==== Dataset ====
With the new research goal, a fitting dataset is needed. There were doubts about whether the previously found dataset (the Deep-sea Debris Database) would work for the application that we now have in mind. Either another dataset could be searched for, or a new dataset could be made. The latter requires more time, but can give much better results if it is done right.

The idea is that a test setup will be created and placed in a black reservoir. If the final proof-of-concept test setup is likewise, it makes sense that the dataset should have similar conditions. This is why the dataset will consist of self-taken pictures using a similar setup (position, camera angle, lighting) as the test setup, and of an online dataset which contains images of waste outside the water. This way, a dataset that is large enough to train the neural network is obtained. The final dataset can be found via the following GitHub link: https://github.com/mennocromwijkk/Robots_Everywhere_3.

The self-taken pictures will be made from a very slight angle (so not directly above the plastic), in reasonable shade, with as few reflections as possible, to avoid confusing the neural network. The number of useful images can later be increased using data augmentation. Different types of river waste will be gathered and submerged in the water of the reservoir. There will be images of:
*Plastic bottles
*Drinking cans
The ground truth of the images will be categorized by hand using the labelling program LabelImg, with which the position of each object can also be indicated. More than one type of waste can be in one image. There will also be some noise in the water, to make it a bit harder for the neural network to recognize the waste. This noise will come in the form of leaves, similar to the noise that will be faced on Noria's actual installation.


This means that our final product will mostly be a proof-of-concept. If the idea is actually realized on the Noria, it is advised to recreate the dataset in its river environment, so that the neural network does not get confused by any sudden changes. Given the effectiveness in the black reservoir setup, the possible effectiveness of the neural network in Noria's environment can be discussed at the end of the project.


===== Photos =====
 
Eventually, 78 photos containing bottles and cans have been taken by ourselves to train a neural network, together with the online dataset of plastic bottles and cans on a white background <ref name = 'plasticdata'> Thung G. (2017, Apr 10) Dataset of images of trash Torch-based CNN for garbage image classification. Retrieved May 9, 2020, from https://github.com/garythung/trashnet</ref>. This is the TrashNet database, which is open to everybody and was downloaded from GitHub. The photos are compressed to 500 x 500 pixels using a Photoshop script. See the examples below.


[[File:dataset2.png|600px|thumb|center|4 of the photos that were taken, and 4 augmented photos.]]

The following augmentations were applied to these photos:
*Rotation
*Flipping

Each of these augmentations was done at random. The random range was kept very slight, so that few problems occurred with trash being stretched out unrealistically. The resulting dataset is only 18 MB and can be found via the following GitHub link: https://github.com/mennocromwijkk/Robots_Everywhere_3. The augmented images were categorized by hand as 'plastic' and 'can', using LabelImg. They were then zipped into a folder and submitted to Google Colab to start training.


==== Test Plan ====
''Goal:''

Test how many pieces of waste in the water are correctly identified and counted.

''Hypothesis:''

At least 90[%] of the waste will be identified and counted correctly, out of at least 50 images and a video of waste in water.

''Materials:''
* Camera
* Different types of waste
* Image recognition software
* Reservoir with water

''Method:''
* Throw different types of waste in the water
* Take at least 50 different images of this from above with the camera (there can be more pieces of waste within one image)
* Make a video of the floating waste
* Add the images to a folder
* Run the image recognition software
* Analyze how many pieces of waste are correctly identified and counted

''Note:'' due to limited resources, it was not possible to make one long video of floating waste, so separate videos were made and placed one after the other. To get more reliable results in the future, more images and videos can be used.

==== Testing Results (photos) ====
New test photos were taken of individual waste items in the reservoir, and of a very crowded reservoir filled with lots of trash. This test dataset can also be found on GitHub (https://github.com/mennocromwijkk/Robots_Everywhere_3). The idea was to see whether the current neural network could also handle crowded photos of trash, given that recognizing individual items might be fairly easy for it.

Most of the test photos gave correct results. The test photos of the individual waste items all worked perfectly. However, in some of the crowded photos, one or maybe two items were missed. Sometimes this was caused by the object being a little out of frame. In other cases, the object was hidden behind the surrounding objects, making it too occluded to be recognized.


[[File:dataset3.png|600px|thumb|center|4 examples of image recognition on the crowded test photos.]]
Waste being out of frame should not be an issue in the final application of the image recognition, since the camera will film over the entire width where the trash can be. Waste being too close together might cause a problem on the Noria though, as clogging of waste is a realistic problem. This problem could be (mostly) solved by training on very crowded images, which will force the neural network to look for smaller parts of waste hidden behind other parts.
==== Testing Results (videos) ====


The most important part of this project is to visualize the amount and sort of waste that is being removed from the water. This is done using the object detection software: a script has been written that classifies the waste and also counts the number of waste objects that are removed from the water. The same neural network that is used for the photos can be used for this purpose; it interprets each frame of the video individually.


To test whether the counting works, new videos were recorded where the camera moves over the trash, making it appear as if the trash is moving from top to bottom. The idea is to use these videos to test whether the counting of waste works or not. In the video below, a demonstration of the counting software can be seen. At the moment, the software is kept relatively simple, because this is enough to show that the concept works and there was not enough time to make a more complicated, better-working script.


[[File:Counting.gif|1500px|thumb|center|Demonstration of counting software.]]
In the second video, it can be seen that some objects are counted double. This occurs when the object is not detected in a certain frame. To solve this, tracking software can be used, so that each object can be followed over time. The object could then be counted when it passes a certain line (see https://www.youtube.com/watch?v=WcKx9u6XmDI) or when it is inside a certain box (see https://www.youtube.com/watch?v=3Tw7q0YdcHA).


[[File:Double_counting.gif|1500px|thumb|center|Two objects are counted double.]]
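
A minimal sketch of the line-crossing idea, with naive nearest-centroid tracking, could look as follows; the line position, matching distance and all names are assumptions, not an implementation from the project.

<syntaxhighlight lang="python">
# Hedged sketch of line-crossing counting with naive nearest-centroid tracking;
# the line position, matching distance and all names are assumptions.
COUNT_LINE_Y = 400    # y-position of the counting line, in pixels
MAX_MATCH_DIST = 50   # max centroid movement between two frames

def track_and_count(prev_objects, centroids, count):
    """prev_objects: [{'pos': (x, y), 'counted': bool}, ...] from last frame;
    centroids: (x, y) detection centres in the current frame."""
    new_objects = []
    for (x, y) in centroids:
        # Match the detection to the nearest object from the previous frame.
        match, best = None, MAX_MATCH_DIST
        for obj in prev_objects:
            d = ((obj['pos'][0] - x) ** 2 + (obj['pos'][1] - y) ** 2) ** 0.5
            if d < best:
                best, match = d, obj
        counted = match['counted'] if match else False
        if not counted and y > COUNT_LINE_Y:
            count += 1      # count each tracked object once, at the line
            counted = True
        new_objects.append({'pos': (x, y), 'counted': counted})
    return new_objects, count
</syntaxhighlight>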


= Design =
Besides the image recognition program, the module itself will need to meet the requirements mentioned at the beginning of this page. There, it is mentioned that the module should operate at all moments when the Noria is operating. This means that battery life should be long, or some kind of power generation must be present at the Noria itself. Also, the design should be weatherproof and robust. The module will need certain functionalities to be able to meet these requirements. The focus will be on specific parts of the module that are essential to its operation. This includes:
*Image recognition hardware
*Data transfer
*Power source
*General assembly


===Image Recognition Hardware===
The camera that is used should be weatherproof, and the device should not run out of energy. Besides, it must be possible to retrieve the images from the camera to be able to use them for image recognition. Finally, the image quality should of course be high enough to let the image recognition work well. A commonly used camera is the GoPro. The GoPro Hero6, Hero7 and Hero8 can be powered externally, also with a weatherproof connection <ref name ='externalpower'>Coleman, D. (2020, April 8). Can You Run a GoPro HERO8, HERO7, or HERO6 with External Power but Without an Internal Battery? Retrieved May 22, 2020, from https://havecamerawilltravel.com/gopro/external-power-internal-battery/</ref> <ref name ='weatherproofpower'>Air Photography. (2018, April 29). Weatherproof External Power for GoPro Hero 5/6/7 | X~PWR-H5. Retrieved May 22, 2020, from https://www.youtube.com/watch?v=S6Y7a3ZtoeE</ref>. The internal battery can be left in place as a safety net in case external power cannot be provided. Without an internal battery, the camera turns off when the external power flow stops and does not turn back on automatically when the power source is restored; with an internal battery, it switches seamlessly when necessary. The disadvantage is of course that the internal battery can also run out of power. A GoPro does not offer very long battery life when shooting for a long time; however, there are ways to improve this, which will be elaborated on in the power source section. For now, the focus is on the resolution that the GoPro cameras have to offer. The newest GoPro, the GoPro Hero8 Black, takes photos in 12MP and makes video footage (including timelapses) in 4K at up to 60fps. Additionally, it has improved video stabilization, called HyperSmooth 2.0, which can come in handy when there are more waves, e.g. in rougher weather <ref name='gopro8'>GoPro. (n.d.). HERO8 Black Tech Specs. Retrieved May 22, 2020, from https://gopro.com/en/nl/shop/hero8-black/tech-specs?pid=CHDHX-801-master</ref>. However, many external extensions (like additional power sources from external companies) are not yet compatible with the newest GoPros. The GoPro Hero7 Black has about the same specs when it comes to image and video quality. It also has video stabilization, but an older version, namely HyperSmooth <ref name='gopro7'>GoPro. (n.d.-a). HERO7 Black Action Camera | GoPro. Retrieved May 22, 2020, from https://gopro.com/en/nl/shop/cameras/hero7-black/CHDHX-701-master.html</ref>. More extensions are possible for the GoPro Hero7 Black, so it is better to use that version.
[[File:Cyclapse.jpg|200px|Image: 200 pixels|right|thumb|Cyclapse with solar panel extension<ref name='cyclapsesolar'> Harbortronics. (n.d.-a). Cyclapse Pro - Standard | Cyclapse. Retrieved May 22, 2020, from https://cyclapse.com/products/cyclapse-pro-standard/</ref>.]]


GoPros are a compact and relatively cheap option compared to DSLR (Digital Single Lens Reflex) cameras. However, as mentioned before, battery life can be an issue. Therefore, another option could be the Cyclapse Pro, which can also come with extensions such as solar panels. It has a built-in Nikon or Canon camera, which can provide a higher quality <ref name='cyclapsepro'>Harbortronics. (n.d.-b). Cyclapse Pro - Starter | Cyclapse. Retrieved May 22, 2020, from https://cyclapse.com/products/cyclapse-pro-starter/</ref>. The standard implemented camera is the Canon T7, which provides 24.1[MP] pictures and full HD video at 30[fps] <ref name ='canon'>Canon USA. (n.d.). Canon U.S.A., Inc. | EOS Rebel T7 EF-S 18-55mm IS II Kit. Retrieved May 22, 2020, from https://www.usa.canon.com/internet/portal/us/home/products/details/cameras/eos-dslr-and-mirrorless-cameras/dslr/eos-rebel-t7-ef-s-18-55mm-is-ii-kit</ref>. The camera itself is $700 USD (about twice the price of a GoPro), and the costs increase quickly when additional components are bought. The complete Cyclapse Pro includes a DigiSnap Pro controller with Bluetooth to enable time-lapsing, a Cyclapse weatherproof housing and a lithium ion battery <ref name='cyclapsepro'/>. Because of this, the complete Cyclapse Pro module costs over $3000 USD. Also, the module is not as compact as a GoPro, since DSLR cameras themselves are already much larger than GoPros. Before a choice can be made between the two options, data transfer options and additional power sources must be considered.


CamDo also offers the BlinkX time-lapse camera controller, with which a daily or weekly schedule can be customized to program up to 10 separate schedules for either time lapse or motion detection in photo, day, night, burst or video modes <ref name='blinkx'>CamDo. (n.d.-a). GoPro Motion Detector X-Band Sensor with Cable for Blink and BlinkX. Retrieved May 22, 2020, from https://cam-do.com/collections/blink-related-products/products/blinkx-time-lapse-camera-controller-for-gopro-hero5-6-7-8-cameras?_pos=1&_sid=8bc03109b&_ss=r</ref>. BlinkX powers the GoPro camera down between intervals, increasing the battery life significantly and giving the ability to undertake long term time lapse sequences. The controller can be powered from the GoPro battery and does not require a separate power source. The SolarX solar kit (see the Power Source section below) costs $995 USD (with a 9[W] solar panel), the BlinkX controller is $355 USD and the GoPro Hero7 Black is around $230 USD. So in total this will be cheaper than the Cyclapse Pro, which does not yet include a rechargeable power source. An important remark is that CamDo is an external company that makes add-ons for GoPro cameras. It is somewhat of a workaround to let GoPros do what they were not truly designed for, and it can therefore be less reliable than using the Harbortronics Cyclapse Pro. However, the GoPro is a relatively cheap and much lighter option. Also, these reliability problems mainly apply to older GoPro versions like the Hero4 Black <ref name='reliable'>Coleman, D. (2019, September 4). How to Shoot a Long Time Lapse with a GoPro HERO4 Silver or Black. Retrieved May 22, 2020, from https://havecamerawilltravel.com/gopro/long-timelapse-gopro/</ref>. This has probably been improved for newer versions, though some more research into this is needed before a final choice is made.
===Data Transfer===
A GoPro creates its own WiFi signal, to which a phone can be connected using the GoPro app; data could then be sent from there to a computer. Another option could be Auto Upload, which is part of GoPro Plus: for a monthly or yearly fee, the GoPro automatically uploads its footage to the cloud <ref name='autoupload'>GoPro. (2020, May 22). Auto Uploading Your Footage to the Cloud With GoPro Plus. Retrieved May 23, 2020, from https://community.gopro.com/t5/en/Auto-Uploading-Your-Footage-to-the-Cloud-With-GoPro-Plus/ta-p/388304#</ref> <ref name='autoupload2'>GoPro. (2020a, May 14). How to Add Media to GoPro PLUS. Retrieved May 23, 2020, from https://community.gopro.com/t5/en/How-to-Add-Media-to-GoPro-PLUS/ta-p/401627</ref>. However, this works together with the GoPro app, which requires a mobile device, while the image recognition itself will run on a computer. Also, when auto uploading to the cloud, the images/videos are not deleted from the storage within the GoPro; deleting them will be necessary for operation of the device, since otherwise the GoPro storage will fill up quickly. Besides, it is not completely clear whether Auto Upload requires the GoPro and the mobile device to be connected to the same WiFi network. Finally, to auto upload, the GoPro must be connected to a power source and needs to be charged to at least 70[%], and it may not be possible to always keep the battery above this level.


A solution could be to equip the GoPro with the FlashAir™ W-04 wireless SD card, which can store up to 64[GB] of data. The SD card can be accessed with a phone or laptop, after which the pictures have to be saved manually. Then the pictures can be used for the image recognition. Alternatively, a normal SD card could be used, but this requires that the SD card is manually swapped at certain times.


The Cyclapse Pro also offers WiFi options to transfer data <ref name='cyclapsefaq'>Harbortronics. (n.d.-d). Support / DigiSnap Pro / Frequently Asked Questions | Cyclapse. Retrieved May 23, 2020, from https://cyclapse.com/support/digisnap-pro/frequently-asked-questions-faq/</ref>. The DigiSnap Pro within the Cyclapse Pro can transfer images from the camera to an FTP (File Transfer Protocol) server on the local network or the internet. The DigiSnap Pro most commonly uses FTP image transfers via USB cellular modems and local USB download. The DigiSnap Pro also provides an Android app, in which every image taken by the camera can be configured to automatically transfer to a specified FTP folder location on the internet using a USB cellular modem.
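To sketch how such FTP transfers could feed the image recognition pipeline, the snippet below polls an FTP folder and downloads new images to the computer that runs the recognition software. This is only an illustration: the host, credentials and folder names are placeholder values, not part of the actual DigiSnap Pro configuration.

<syntaxhighlight lang="python">
from ftplib import FTP
from pathlib import Path

# Placeholder connection details; real values would come from the
# DigiSnap Pro / cellular modem setup.
FTP_HOST = "ftp.example.org"
FTP_USER = "noria"
FTP_PASS = "secret"
REMOTE_DIR = "/noria/images"
LOCAL_DIR = Path("incoming_images")

def fetch_new_images():
    """Download remote images that are not yet present locally."""
    LOCAL_DIR.mkdir(exist_ok=True)
    with FTP(FTP_HOST) as ftp:
        ftp.login(FTP_USER, FTP_PASS)
        ftp.cwd(REMOTE_DIR)
        for name in ftp.nlst():
            target = LOCAL_DIR / name
            if name.lower().endswith((".jpg", ".png")) and not target.exists():
                with open(target, "wb") as f:
                    ftp.retrbinary(f"RETR {name}", f.write)

if __name__ == "__main__":
    fetch_new_images()
</syntaxhighlight>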


===Power Source===
[[File:SolarX.png|200px|Image: 200 pixels|right|thumb|GoPro with SolarX extension<ref name ='CamdoSolar'>CamDo. (n.d.-b). SolarX Solar Upgrade Kit. Retrieved May 22, 2020, from https://cam-do.com/products/solarx-gopro-solar-system</ref>.]]
[[File:Cyclapse.jpg|200px|Image: 200 pixels|right|thumb|Cyclapse with solar panel extension<ref name='cyclapsesolar'> Harbortronics. (n.d.-a). Cyclapse Pro - Standard | Cyclapse. Retrieved May 22, 2020, from https://cyclapse.com/products/cyclapse-pro-standard/</ref>.]]


Data transfer will also increase the required power: transferring data more frequently uses more battery power. The Cyclapse Pro offers a setting to upload after every 30 pictures, which is a good balance between battery life and frequent uploads.
CamDo offers an add-on for the GoPro Hero3 to Hero7, called SolarX, which is a weatherproof solar panel module <ref name ='CamdoSolar'>CamDo. (n.d.-b). SolarX Solar Upgrade Kit. Retrieved May 22, 2020, from https://cam-do.com/products/solarx-gopro-solar-system</ref>. This enables long term operation of GoPro cameras for time lapse photography. It includes a 9[W] solar panel to recharge the included V50 battery; the panel can be upgraded to 18[W] for use in cloudy or rainy areas. The solar panel charges the included lithium polymer battery, which outputs 5[V] to power the camera and can also power other accessories within the weatherproof enclosure. The solar panel can be attached directly to the casing or placed separately for optimal usage. The complete module adds significant size to the GoPro, but within the casing there is extra space for additional accessories. Whether the camera can run indefinitely on only the solar panel depends on the weather and camera settings. CamDo made a calculator to determine battery life and the best setup <ref name ='calculator'>https://cam-do.com/pages/photography-time-lapse-calculator?_ga=2.4808368.207575993.1590147015-1651516203.1590147015</ref>. This calculator will be used in the next section to determine whether the solar panel will provide enough power to the camera if it has to make videos 24/7.


The Cyclapse Pro also offers a solar panel extension <ref name='cyclapsesolar'> Harbortronics. (n.d.-a). Cyclapse Pro - Standard | Cyclapse. Retrieved May 22, 2020, from https://cyclapse.com/products/cyclapse-pro-standard/</ref>. Without a solar panel, a full battery can take around 3000 images <ref name='cyclapsefaq'/>. The 20[W] solar panel can keep the battery charged, and a second battery pack can be included to increase the duration the system operates without charging (e.g. under cloudy skies) <ref name='cyclapsebat'>Harbortronics. (n.d.-a). Cyclapse Pro - Glacier | Cyclapse. Retrieved May 22, 2020, from https://cyclapse.com/products/cyclapse-pro-glacier/</ref>. It uses a controller, the DigiSnap Pro, to reduce battery usage and provide programming options <ref name='cyclapsepro'/>. Total costs (depending on the specific add-ons) are around $4000 USD, which is significantly more than for the GoPro.


Since the Cyclapse Pro module is much more expensive while similar performance is expected, it is chosen to continue with the GoPro Hero7 Black. Next, some power calculations will be done with the previously mentioned CamDo calculator to determine whether the solar panel will suffice as a power source <ref name ='calculator'/>.


===Data Storage and Energy Consumption===
To be able to draw a conclusion on the power source, the data storage and energy consumption of the camera should be known. To calculate these, the time lapse and solar power calculator from CamDo <ref name ='calculator'/> is used. This gives a rough estimate of the memory and energy needed; to get better results, the equipment would have to be tested in a real life environment. The estimates of the calculator will therefore be taken with a margin, to be sure the setup will work in real life.


The image recognition needs a video as input, with the images taken one second apart. The exact fps does not matter, since the image recognition observes every single frame, but the GoPro automatically renders the time lapse videos at 30[fps] <ref name=TimeLapseSettings>Time lapse settings. Retrieved June 05, 2020, from https://www.youtube.com/watch?v=-9htjymU5d8 </ref>. When the camera shoots 24 hours a day with a time interval of 1 second between every photo, the result is a 48 minute video of 86400 frames.
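As a quick check of these numbers, the following few lines recompute the daily frame count and the resulting video length; they only restate the arithmetic from the paragraph above.

<syntaxhighlight lang="python">
# One frame per second, 24 hours a day.
frames_per_day = 24 * 60 * 60            # 86400 frames

# The GoPro renders the time lapse at 30 fps.
fps = 30
video_minutes = frames_per_day / fps / 60
print(frames_per_day, video_minutes)     # 86400 48.0
</syntaxhighlight>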


First, the data storage. The data will be stored on an SD card; a normal SD card was chosen. A WiFi SD card has been considered, but there is too much data to transfer, so it is easier to use two normal SD cards and swap them manually. To choose an SD card, it must be known how many minutes of video can be saved on a card of a given size. The following table is obtained with numbers from the CamDo calculator <ref name ='calculator'/>.
{| border=1 style="border-collapse: collapse;" cellpadding = 2
! style="background: #BFBFBF;" colspan="1"| '''SD card size [GB]'''
! style="background: #BFBFBF;" colspan="1"| '''Number of minutes that can be saved'''
|}


The eventual choice of SD card also depends on the energy source, because the energy source might have to be swapped after a certain period of time as well. It is most efficient if both the SD card and the power source are switched at the same time.


The approximate energy needed is 40.80[Wh] per day. The internal battery delivers 4.7[Wh], meaning the GoPro needs external energy. The two options considered are a solar panel that charges an external battery, or only an external battery that has to be swapped manually.


In the CamDo calculator, the solar irradiance can be filled in; combined with the solar panel of choice, this yields the delivered energy. The solar irradiance can be found on the Solar Electricity Handbook website <ref name=Irradiance>Solar Electricity Handbook. Retrieved June 05, 2020, from http://www.solarelectricityhandbook.com/solar-irradiance.html </ref>. Maastricht was taken as the location for the solar irradiance, since that city lies close to the Maas. The following values are obtained for the ideal situation, meaning the solar panel always faces the sun.
[[File:SolarIrradianceMaastricht.JPG|400px|Image: 800 pixels|center|thumb|Solar irradiance in Maastricht in ideal situation.]]  


The 9[W] solar panel delivers 36[Wh] of energy on an average day in June, the month with the highest solar irradiation; even then, a single panel falls short of the required 40.80[Wh], so multiple panels would be needed. In December, the solar irradiance ideally averages 1.19[kWh/m^2/day], and two solar panels then deliver a combined energy of only about 17[Wh] per day. This is not enough to constantly power the GoPro when it has to take a picture every second. Also, since the SD card needs to be swapped manually anyway, it is a better option to use only external batteries, which is much cheaper, as the table below shows.
{| border=1 style="border-collapse: collapse;" cellpadding = 2
! style="background: #BFBFBF;" colspan="1"| '''Item'''
! style="background: #BFBFBF;" colspan="1"| '''Price'''
|-
| 2x 9[W] solar panel + 44[Wh] external battery<ref name ='CamdoSolar'>CamDo. (n.d.-b). SolarX Solar Upgrade Kit. Retrieved May 22, 2020, from https://cam-do.com/products/solarx-gopro-solar-system</ref>  
| €1761,08
|}
The Anker Astro E7 external battery is chosen because it is compatible with the GoPro Hero7 Black and has a very high capacity. This option is much cheaper, taking into account that a person already has to manually switch the SD card.
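Both the solar shortfall and the two-day swap interval can be checked with a short back-of-the-envelope calculation. Note that the 0.8 solar performance factor and the Astro E7 capacity of 25600[mAh] at a nominal 3.7[V] are assumptions made for this illustration, not figures verified for this exact setup.

<syntaxhighlight lang="python">
# Daily energy demand estimated with the CamDo calculator.
demand_wh_per_day = 40.8

# Solar option: two 9[W] panels in Maastricht in December
# (1.19 kWh/m^2/day of irradiance, assumed ~0.8 system performance).
solar_wh_per_day = 2 * 9 * 1.19 * 0.8
print(round(solar_wh_per_day))                   # ~17 Wh/day: well short of 40.8

# Battery option: Anker Astro E7, assumed 25600 mAh at a nominal 3.7 V.
battery_wh = 25.6 * 3.7                          # ~94.7 Wh
print(round(battery_wh / demand_wh_per_day, 1))  # ~2.3 -> swap roughly every 2 days
</syntaxhighlight>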


Now a choice of SD card can be made, since it is known that the battery has to be switched once every two days. Two days of shooting is 96 minutes of video. The CamDo calculator assumes that there can be a 30[%] difference in the capacity used on the SD card; therefore, a time of 96 × 1.3 = 124.8 minutes is taken. This means that the card size has to be 64[GB]. The SanDisk Extreme microSDXC has been chosen, because this card is recommended by GoPro <ref name=SDCards>SD cards compatible with GoPro. Retrieved June 05, 2020, from https://community.gopro.com/t5/en/SD-Cards-that-Work-with-GoPro-Cameras/ta-p/394308#HERO7 </ref>. The price is €12 per card <ref name=SandiskMicroSDXC> Sandisk Micro SDXC. Retrieved June 05, 2020, from https://www.amazon.nl/dp/B07HB8SLMV/ref=asc_df_B07HB8SLMV1591629000000/?tag=kieskeurig-21&creative=380333&creativeASIN=B07HB8SLMV&linkCode=asn </ref>.
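The sizing step can be written out explicitly. The storage rate of roughly 0.4[GB] per minute of rendered 4K footage is an assumed, illustrative figure chosen to show why 124.8 minutes pushes the choice to a 64[GB] card; the CamDo calculator should be used for exact numbers.

<syntaxhighlight lang="python">
video_minutes_per_swap = 2 * 48          # two days of 48-minute daily videos
margin = 1.3                             # 30% capacity margin from the calculator
required_minutes = video_minutes_per_swap * margin
print(required_minutes)                  # 124.8

# Assumed storage rate for rendered 4K footage (illustrative only).
gb_per_minute = 0.4
print(required_minutes * gb_per_minute)  # ~50 GB -> next standard card size is 64 GB
</syntaxhighlight>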
====Total price====
{| border=1 style="border-collapse: collapse;" cellpadding = 2
! style="background: #BFBFBF;" colspan="1"| '''Total'''
! style="background: #BFBFBF;" colspan="1"|  '''€322,92'''
|}
This price seems very reasonable, but it is somewhat deceptive, because the cost of the travel and time of the person switching the SD card and battery has not been taken into account yet. On busier days the Noria has to be emptied almost every day, and that person can then also swap the battery and SD card, which only takes a few minutes extra. However, it can also occur that the Noria does not have to be emptied for days; then someone has to go to the Noria only to change the battery and SD card, which increases the costs. For now this is still the best solution, since the SolarX will not provide enough energy and there is no solution yet for automatic, wireless data transfer. In the future, other solutions to these problems could be investigated, such as hydropower as an energy source.
 
=== Night mode ===
In order to make sure that images of the waste can be made 24/7 for the image recognition, it will be necessary to also make pictures when it is dark. The GoPro Hero7 Black does offer a night photo mode, which can make low-light photography easier <ref name = 'nightmode'>GoPro. (2018, October 12). Master the Modes: Improved Night Photo Mode. Retrieved June 3, 2020, from https://gopro.com/en/us/news/mastering-hero7-night-photo-mode</ref>. However, this mode uses very long shutter times in order to capture enough light. Long shutter times are not an option in this project, since an image of the waste needs to be taken every few seconds and the waste moves with the current of the water; long shutter times on a moving scene result in blurry pictures. The night mode offered by GoPro is mainly made for astrophotography, so using it in this project is not a good idea. Therefore, it is best to place some lights near the GoPro, which make sure objects in the dark remain recognizable.


=== Lighting ===
As concluded above, external lighting will need to be added to the design so that usable pictures can also be made when it is dark. This external lighting has to meet some requirements. The lighting must work from dusk till dawn. It has to have its own power source, preferably solar energy. The lighting must have at least an IP64 rating; the IP rating 'classifies and rates the degree of protection provided by mechanical casings' <ref name ='IPrating'>IP rating at Wikipedia. Retrieved June 12, 2020, from https://en.wikipedia.org/wiki/IP_Code#Second_digit:_Liquid_ingress_protection</ref>, and IP64 means that the casing is dust-tight and protected against splashing water. The external lighting must be able to be mounted on the frame of the Noria. Lastly, the lighting needs to light up the space in front of the camera well enough for the image recognition to work. In this project it is not possible to test whether the first and last requirements are met. The first requirement is elucidated later. To test the last requirement, a setup with the lighting and the camera in the dark would have to be made, a lot of photos would need to be taken in order to train the image recognition, and finally it would need to be tested whether the image recognition recognizes at least 90[%] of the plastic at night with the lighting. This is not possible within the current time frame. Because a test is not possible, there is no 100[%] guarantee that the chosen lighting is suitable. However, an elaborate explanation of why this particular lighting was chosen is given below.
[[File:HikerenIP65SolarLight.jpg|175px|Image: 50 pixels|right|thumb|Hikeren IP65 Waterproof Solar Lights. <ref name ='HikerenIP65'>Hikeren IP65 Waterproof Solar Lights at Amazon. Retrieved June 12, 2020, from https://www.amazon.com/Hikeren-Waterproof-Spotlight-Install-Security/dp/B01DNMRUIQ</ref>.]]
 


The chosen light is the Hikeren IP65 Waterproof Solar Light <ref name ='HikerenIP65'>Hikeren IP65 Waterproof Solar Lights at Amazon. Retrieved June 12, 2020, from https://www.amazon.com/Hikeren-Waterproof-Spotlight-Install-Security/dp/B01DNMRUIQ</ref>. The first requirement is that the lighting has to work from dusk till dawn. For this requirement, the times of sunset and sunrise on the shortest day in the Netherlands are used. The longest time between sunset and sunrise is from December 20 to 21 <ref name ='DuskTillDawn'>Zonsopkomst en Zonsondergang. Retrieved June 12, 2020, from http://www.zonsondergangtijden.nl/zonsondergang-2020.html</ref>: the sun goes down at 16:29 and rises at 8:46, meaning the lighting has to be able to work for 16 hours and 17 minutes straight. The Hikeren light can work up to 18 hours straight when fully charged, and to fully charge it, its solar panel needs 6.5 hours of illumination <ref name ='HikerenIP65'/>. Because there are 24 hours in a day, and the longest dusk-till-dawn period takes 16 hours and 17 minutes, this still leaves 7 hours and 43 minutes for charging. This should be enough, but it has to be tested in real life. If it turns out that the lighting does not last the full night, a second light should be added; the problem to solve then is making sure that the lights do not work at the same time, but that one light turns on just before the other turns off. This will not be investigated in this project, but should be kept in mind for a potential extension. The Hikeren light automatically switches on and off when it becomes dark or light.
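The margin claimed above can be verified with a few lines of date arithmetic, using the cited sunset and sunrise times for December 20-21.

<syntaxhighlight lang="python">
from datetime import datetime, timedelta

sunset = datetime(2020, 12, 20, 16, 29)   # sun goes down at 16:29
sunrise = datetime(2020, 12, 21, 8, 46)   # sun rises at 8:46 the next day

night = sunrise - sunset
print(night)                               # 16:17:00 -> light must run 16 h 17 min

charging_window = timedelta(hours=24) - night
print(charging_window)                     # 7:43:00 -> exceeds the 6.5 h needed to charge
</syntaxhighlight>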


The Hikeren light has, as stated in the name, an IP rating of 65, meaning it is dust-proof and protected against water jets <ref name ='IPrating'/>. The rating should be at least IP64, so the Hikeren light meets this requirement.


Finally, the lighting has to be able to be mounted on the frame. This can be done and is discussed in further detail in the assembly section.


=== Field Of View (FOV) ===
GoPro offers different FOVs (Fields Of View), which determine the area covered within a shot through the angle in which the camera shoots. A linear and a wide FOV are provided. The wide FOV has a larger angle, which causes a fisheye effect. The difference between the two FOVs is shown in the figure below: the fisheye effect caused by the wide FOV can be seen clearly. Because of that effect, a larger area is covered within the photo, while the linear FOV causes the picture to be cut off in comparison with the wide FOV.


[[File:FOV.PNG|650px|Image: 650 pixels|center|thumb|The two FOVs offered by GoPro <ref name ='FOVs'>Coleman, D. (2020a, March 20). GoPro Linear FOV: Pros, Cons, and Examples. Retrieved June 3, 2020, from https://havecamerawilltravel.com/gopro/gopro-fov-linear/</ref>.]]


The wide FOV curves horizons and straight lines. Also, subjects in the middle of the frame will look artificially big compared to the surroundings <ref name ='FOVs'/>. Besides, when using the wide FOV the chance is bigger that the sun sneaks into the shot, which can cause some details to be lost <ref name ='FOV'>Michaels, P. (2018, September 20). GoPro Hero7: The Smoothest-Looking Action Cam Yet. Retrieved May 31, 2020, from https://www.tomsguide.com/us/go-pro-hero-7,review-5755.html</ref>. A linear FOV corrects for the fisheye distortion by straightening horizontals and verticals and narrowing the perspective. A linear FOV is often used for aerial footage from drones or when a more 'normal' perspective is desired, so the overall look is less distorted. However, it also has a few disadvantages: the area covered within an image or frame is smaller due to the smaller angle, and parts near the edges can get a more stretched look. This is because, when the linear FOV is chosen, the GoPro applies a software correction for the lens distortion before saving the image to the memory card <ref name ='FOVs'/>.


If the wide FOV is used, the image recognition will have to be trained on such pictures in order to work accurately enough. This is not a huge issue, but taking into account that the perspective is quite distorted when using the wide FOV and that the sizes of objects can be distorted as well, it is best to use the linear FOV. In the future, the image recognition could perhaps be expanded to also measure the sizes of objects, for which a normal perspective is important. Also, with the wide FOV more of the water, and maybe even the horizon, will be within the image, and the sun can cause distortions within the images (directly or via the water), which is not desirable. With the linear FOV this area is kept as small as possible, while the waste that floats directly into the Noria stays within view.


===Assembly===
The camera needs to be mounted to the Noria <ref name ='noriamachine'>Noria i.o.v. Rijkswaterstaat. (2020, April 1). Pilot vangsysteem voor plastic afval bij stuw Borgharen. Retrieved June 3, 2020, from https://zwerfafval.rijkswaterstaat.nl/@235156/pilot-vangsysteem-plastic-afval-stuw-borgharen/</ref>. To not interfere with rotating parts, and to allow easy mounting, it is probably best to attach the camera to the shaft indicated with the green arrow in the figure below. The dimensions of the steel shaft are approximated in order to design the mounting of the camera. To determine the height of the camera, it is important to know its FOV. It was decided that the linear FOV of the GoPro will be used, which is 102 degrees <ref name ='FOV'/>. It is approximated that the camera should be able to take images across a width of around 1.2[m]. This means that the camera should be placed at a height of at least 50[cm] above the water surface for the linear FOV. It is approximated that the steel shaft itself will already be around 30[cm] above the water, so the final design will need an arm of around 20[cm] to reach the desired height with respect to the water surface.


[[File:noriabalk.PNG|500px|Image: 500 pixels|center|thumb|The Noria with indicated the shaft to which the camera can be mounted <ref name ='noriamachine'>Noria i.o.v. Rijkswaterstaat. (2020, April 1). Pilot vangsysteem voor plastic afval bij stuw Borgharen. Retrieved June 3, 2020, from https://zwerfafval.rijkswaterstaat.nl/@235156/pilot-vangsysteem-plastic-afval-stuw-borgharen/</ref>.]]
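The 50[cm] mounting height mentioned above follows from simple trigonometry: a camera at height h with horizontal FOV θ covers a strip of width w = 2·h·tan(θ/2) at the water surface, so the required height for a 1.2[m] strip can be computed as below.

<syntaxhighlight lang="python">
import math

fov_deg = 102           # linear horizontal FOV of the GoPro Hero7 Black
target_width_m = 1.2    # required image width at the water surface

# w = 2 * h * tan(fov/2)  ->  h = w / (2 * tan(fov/2))
height_m = target_width_m / (2 * math.tan(math.radians(fov_deg / 2)))
print(round(height_m, 2))  # ~0.49 m, i.e. mount the camera about 50 cm above the water
</syntaxhighlight>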


A final design for the total assembly has been made. In view 1 the total design can be seen. A custom casing has been made which contains the camera <ref name='cadcamera'>Velimir. (2020, May 8). Free CAD Designs, Files & 3D Models | The GrabCAD Community Library. Retrieved May 31, 2020, from https://grabcad.com/library/gopro-hero-4</ref> and the external battery (see view 2). The camera is held in place by some small vertical plates and a Velcro fastener (see views 3 and 4), so the camera will always be placed in the right position. The battery is also held in place by a Velcro fastener (see view 5). The Velcro fasteners can be looped through a small elevation within the casing (see view 4). In front of the camera lens a thin plastic plate is added, which keeps the casing weatherproof (see view 2) while still allowing video footage to be made. One half of the casing has a slot into which the other half can be slid. A rubber seal could be added to the slot to give the casing better watertightness (see views 3 and 5). This would give the casing an IP rating of IP65, which means the casing is dust-tight (IP6x) and protected against water coming from any direction at 12.5[L/min] (IPx5) <ref name='IP'>Wikipedia contributors. (2020a, March 5). IP-code. Retrieved June 10, 2020, from https://nl.wikipedia.org/wiki/IP-code</ref>. This makes the casing not completely watertight, but it does make it weatherproof for normal conditions. When the two halves are slid together, they are held in place by 4 screws at the corners of the casing (see views 2, 3 and 5).


Also, the arm that is connected to the shaft of the Noria is connected to the casing with a metal plate and 4 screws (see view 1). Those 4 screws are at the same positions as the ones of the casing, but they must be screwed in from the other side. The holes of these screws do not reach the holes of the screws that hold the casing together (see view 8).


Besides, a light is added to the arm (see views 1 and 7). This makes sure that useful footage can be taken 24/7. The light is powered by a solar panel (see view 1). The angle of the solar panel can be adjusted with a joint, so it can be placed optimally for a certain location. If necessary, the solar panel can also be moved: the cable is 5[m] long, so if more power can be generated at a different spot on the Noria, the placement of the panel can be adjusted.


[[File:Assembly1new.png|620px|Image: 620 pixels|left|thumb|View 1: Total assembly.]]
[[File:Assembly2.png|620px|Image: 620 pixels|right|thumb|View 2: Casing with camera and battery.]]


[[File:Assembly4.png|620px|Image: 620 pixels|left|thumb|View 3: Camera inside casing with Velcro fasteners.]]
[[File:Strap.png|620px|Image: 620 pixels|right|thumb|View 4: Velcro fasteners for camera inside casing.]]


[[File:Assembly5.png|620px|Image: 620 pixels|left|thumb|View 5: Battery inside casing with Velcro fasteners.]]
[[File:Assembly3.png|620px|Image: 620 pixels|right|thumb|View 6: Camera and battery.]]


<div><ul>  
<li style="display: inline-block;">[[File:Lightfoto.png|620px|Image: 620 pixels|left|thumb|View 7: Light.]] </li>
<li style="display: inline-block;">[[File:Holes.png|620px|Image: 620 pixels|right|thumb|View 8: Screws for attachment arm and closing case.]] </li>
</ul></div>


= From Data to Information =
Stakeholders like Rijkswaterstaat do not want raw data, but information, so the data will need to be converted into useful information to satisfy them. To do so, the DIKAR model can be used, which stands for Data, Information, Knowledge, Action and Result <ref name ='DIKAR'>Carpenter, D. (2016, November 17). DIKAR: Aligning Technology And Organisational Strategies. Retrieved May 30, 2020, from http://blog.myceo.com.au/dikar-aligning-technology-and-organisational-strategies</ref>. Data represents the raw numbers: stored, but not managed in a way that makes them easy to process. Information arises when data is processed into a form that makes it easier to understand or to find relationships. The last three stages, knowledge, action and result, are carried out by the stakeholder: when information is understood it becomes knowledge, with this knowledge actions can be taken, and those actions eventually give results.


====Types of Information====
 
Data should be transformed into something that is more readable, cohesive and accessible. The most interesting types of information about waste in rivers are:
*Time-dependency
*Time-dependency
*Waste types
*Waste types
==== Jupyter Notebooks ====


It was decided that the final information delivered will be a pie chart of the waste type distribution per location. A Jupyter Notebook was written that interprets data from a .csv file and produces pie charts from it. An example of this can be found below. If needed, these pie charts can be placed onto a map by hand, based on the location they belong to. A time indication could also be added above the map; the map would then display the amount of each waste type, at different locations, within a certain time span.


[[File:pie2.jpg|400px|Image: 400 pixels|center|thumb|An example of a produced pie chart.]]
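As an illustration, a minimal sketch of such a notebook cell is given below. This is not the actual notebook; the .csv layout (columns location, waste_type, count) is an assumption made for the example.

<pre>
# Minimal sketch of the pie chart notebook (illustrative, not the actual code).
# Assumed .csv layout: one row per detection batch, with columns
# location, waste_type, count.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("counts.csv")

for location, group in df.groupby("location"):
    totals = group.groupby("waste_type")["count"].sum()
    plt.figure()
    totals.plot.pie(autopct="%1.0f%%")   # share of each waste type
    plt.ylabel("")
    plt.title(f"Waste type distribution at {location}")
    plt.savefig(f"pie_{location}.png")   # one chart per location, placed on a map by hand
</pre>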


=Evaluation=

===Design===
No ideal solution has been found for data transfer and the energy source. For now, the SD card and battery need to be changed manually, which adds costs that are not included in the project, since they cannot be determined. The SD card and battery have to be changed once every two days; however, it is not always necessary for a person to drive there only to switch these components, because the container of the Noria frequently has to be emptied as well, although an exact number has not been determined for this yet. Therefore, the costs of travelling and switching are not taken into account.


In the future, another solution could be found for this, for example by looking at other forms of energy generation such as hydropower: since the Noria is placed in the Maas, where there is a current, this could be used to generate power. The data transfer could also be made autonomous. This was not possible in the timeframe of this project, but could be possible in a continuation of it.


To conclude, this was the best possible design within the time frame of the project, but multiple improvements are possible with more time.
 


===Cost Problem===
The original idea was to place the Waste Identifier directly on the Noria. However, conversations with Rinze de Vries have shown that it is better to use the Waste Identifier in a different way. If the Waste Identifier were placed on each individual Noria, this would entail a large cost, making it less attractive for the user to purchase. Rinze's proposal was to apply the Waste Identifier at a place where the waste is collected. There, the waste collected by many different waste collection actions can be counted at a central location, requiring only one Waste Identifier. This reduces costs considerably and makes it easier to create a dataset, because the object detection software only needs to be trained on a fixed background. For example, if the waste lies on a black conveyor belt that is filmed from above while moving, the software only needs to be trained on images with a black background. There is, however, one disadvantage of using the Waste Identifier at a central collection point: it is also important to know at which location the waste was collected, and this becomes harder to distinguish. This can be solved by identifying each Noria's waste separately at the collection point, so that the original location is not lost in the process.


===Experiment===
Experiments have shown that the Waste Identifier can recognize a large part of the bottles and cans. Unfortunately, extensive tests to draw firm conclusions about the accuracy of the Waste Identifier could not be conducted: there were not enough cans and bottles to make a sufficiently long video. A solution could have been to collect more plastic bottles and cans; however, this idea came up too late to implement in the remaining time.
As can also be seen in the video, the Waste Identifier keeps track of how many plastic bottles and cans it has recognized, so that this can be used to identify the waste problem in the water. As previously stated, the data will be converted into information via a pie chart in which the amount of waste per category is displayed for a specific location.


===Object Detection Software===
At the moment, the object detection software was made to show what the idea could ultimately look like, so it is not yet working optimally. It has been decided to run the recognition on every 10th frame, and an object is only counted when it is in the top part of the image. This could be improved by applying tracking: the object that comes into view is followed, and when it passes a certain line or enters a certain box, it is counted. The network should also be trained on several more types of waste, so that a larger part of the waste problem can be identified. There are also some problems when using different backgrounds; this should be solvable by including more images with different backgrounds in the training dataset.
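As an illustration of the proposed tracking approach, a minimal line-crossing counter is sketched below. This is hypothetical code, not the current counting script; matching detections to tracks by nearest centroid, and all threshold values, are assumptions.

<pre>
# Sketch of a tracking-based counter: each detection is matched to the nearest
# existing track, and a track is counted once when it crosses a counting line.
COUNT_LINE_Y = 300   # image row used as the counting line (assumed value)
MAX_DIST = 50        # maximum centroid distance to match a track (assumed value)

tracks = {}          # track id -> {"y": last y position, "counted": bool}
next_id = 0
count = 0

def update(detections):
    """detections: list of y-coordinates of detected box centers in one frame."""
    global next_id, count
    for y in detections:
        # match to the nearest existing track, otherwise start a new track
        best = min(tracks, key=lambda t: abs(tracks[t]["y"] - y), default=None)
        if best is None or abs(tracks[best]["y"] - y) > MAX_DIST:
            best = next_id
            tracks[best] = {"y": y, "counted": False}
            next_id += 1
        # count once when the track moves downward past the counting line
        if not tracks[best]["counted"] and tracks[best]["y"] < COUNT_LINE_Y <= y:
            count += 1
            tracks[best]["counted"] = True
        tracks[best]["y"] = y
</pre>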


===Conclusion===
In general, it can be said that the Waste Identifier could certainly become an important tool for reducing the plastic soup. However, improvements are still needed in the design and operation of the Waste Identifier, such as better classification and counting of the waste, and enlarging the dataset with different types of waste.


===Process of the Project===
The process started off too ambitiously. The goal was to design an underwater robot that could recognize and identify different types of plastics; with this goal, the capabilities of the team were overestimated. After an interview with Hans Brinkhof (Rijkswaterstaat) it became clear that the initial goal was too complicated for the limited time of the project. Based on a suggestion of Hans, the focus was shifted towards an extension of the Noria, which gave the project more structure and realizable goals. After an interview with Rinze de Vries (owner of Noria), new requirements and goals were set, which brought even more structure. At that moment, four valuable weeks had passed; from this point forward, however, the project progressed more rapidly. For the design part, every aspect became much more concrete. The design was simplified considerably, which made it possible to realize a sufficient design within the time frame. The image recognition part had to change datasets, but obtaining the new dataset also became less complicated, since it could more easily be made by hand. Because of these simplifications, a design and a working image recognition based on the self-made dataset were eventually realized; this would not have been possible with the original idea. After discussing the full process of the project with Rinze de Vries for feedback, a few conclusions were drawn.


In future projects, a good demarcation of the project should be made from the start: set goals and subgoals, each with a corresponding question. With these questions, find out which activities are related to them and plan these activities out in time.


From this project it can be learned that structure is the most important part of a project. The capability to bring structure into a project has to be developed throughout one's career. This means learning not to set the bar too high, which mainly leads to a lot of unfinished tasks. It is better to set the bar a little lower, so that all the simpler tasks can be done thoroughly and a good conclusion can be drawn. This project is a perfect example of that.


= Final Video =


Here is a link to the final video: https://youtu.be/6C9-dH9V9Kg


=Conducted Interviews=
[[File:Minutes_interview_Hans_Brinkhof_19_May.pdf]]


[[File:Minutes_interview_Rinze_de_Vries_27_May.pdf]]


[[File:Minutes_Feedback_Moment_with_Rinze_de_Vries_12_June.pdf]]
 


= Logbook =
|-
|Lotte Rassaerts
|12.75
|Meeting (3h), researching data transfer GoPro (3h), Data to information (1.5h), Mounting (5h), updating requirements after interview Rinze (0.25h)
|}
|-
|Lotte Rassaerts
|17
|Meeting (3h), FOV + Night mode (4h), making new 3D drawings (10h)
|}
|-
|Menno Cromwijk
|8
|Meeting (3h), working on counting software (4h), updating wiki (1h)
|-
|Dennis Heesmans
|11.5
|Meeting (3h), Working on counting software (4h), Meeting with Rinze (0.5h), Conclusion (4h)
|-
|Lotte Rassaerts
|13
|Meeting (3h), editing assembly and writing text (3h), cleaning up design part of wiki-page (0.5h), adding light and solar panel to design + improved assembly text (5h), text video + record text (1.5h)
|}
|-
|Kevin Cox
|2.5
|Meeting (2.5h)
|-
|Menno Cromwijk
|2.5
|Meeting (2.5h)
|-
|Dennis Heesmans
|5.5
|Meeting (2.5h), Writing about counting (3h)
|-
|Marijn Minkenberg
|15.5
|Meeting (2.5h), Video editing & uploading (13h)
|-
|Lotte Rassaerts
|8.5
|Meeting (2.5h), Finalizing general wiki part (problem statement, SoTA, further exploration) and design part (2h), Final edit wiki (4h)
|}


= References =
<references />




Problem Statement


The locations of the five garbage patches around the globe[1].

A lot of this plastic comes from rivers. A study published in 2017 found that about 80[%] of plastic trash is flowing into the sea from 10 rivers that run through heavily populated regions. The other 20[%] of plastic waste enters the ocean directly [6], for example, trash blown from a beach or discarded from ships.

In 2019, over 200 volunteers walked along parts of the Maas and Waal [7], and they found 77.000 pieces of litter, of which 84[%] was plastic. This number was higher than expected. The best way to help clean up the oceans is to first make sure the influx stops. In order to do so, it is important to know how much waste flows from certain rivers to the ocean. At this moment there is no good monitoring of waste flows in rivers; usually everything is counted by hand.

In this project, a contribution will be made to the gathering of information on the litter flowing through the river Maas, specifically the part in Limburg. The project will be carried out together with the company Noria, which has made a machine that removes waste from the water. More information on their project and their interests is provided in the 'Users' section. The device that will be designed will be placed on the Noria as an information-gathering device and will use image recognition to identify the waste. A design will be made and the image recognition will be tested. Lastly, it will be worked out how the device can save and communicate the gathered information.

Objectives

  • Do research into the state of the art of current recognition software, river cleanup devices and neural networks.
  • Create a software tool that recognizes and counts different types of waste.
  • Test this software tool and form a conclusion on the effectiveness of the tool.
  • Create a design for the image recognition device.
  • Think of a way to save and communicate the information gathered.

Users

In this part the different users or stakeholders will be discussed.

Schone Rivieren (Schone Maas)

Schone Rivieren is a foundation established by IVN Natuureducatie, the Plastic Soup Foundation and Stichting De Noordzee [8]. The foundation's goal is to have all Dutch rivers plastic-free by 2030. It relies on volunteers to collectively clean up the rivers and gather information. It would benefit a lot from the information gathered by the Waste Identifier, because it provides the organization with useful data that can be used to optimize the river cleanup.

A few of the partners will be listed below. These give an indication of the organizations this foundation is involved with.

  • Rijkswaterstaat (executive agency of the Ministry of Infrastructure and Water Management) - Rijkswaterstaat is interested in information about the amount of waste in rivers and the clean up of this.
  • Nationale Postcode Loterij (national lottery) - They donated 1.950.000 euros to the foundation. This indicates that the problem is seen as significant. This donation helps the foundation to grow and allows them to use resources.
  • Tauw - Tauw is a consultancy and engineering agency that offers consultancy, measurement and monitoring services in the environmental field. It also works on the sustainable development of the living environment for industry and governments.

Lastly, the foundation also works together with the provinces Noord-Brabant, Gelderland, Limburg, and Utrecht.

Rijkswaterstaat

Rijkswaterstaat is the executive agency of the Ministry of Infrastructure and Water Management, as mentioned before [9]. This means it is the part of the government that is responsible for the rivers of the Netherlands. It is also the biggest source of data regarding rivers and all water-related topics in the Netherlands; other independent researchers can request data from its database. This makes Rijkswaterstaat a good user, since this project could add important data to that database. Rijkswaterstaat also funds projects, which can prove helpful if the concept that is worked out in this project is ever realized.

RanMarine Technology (WasteShark)

RanMarine Technology is a company that is specialized in the design and development of industrial autonomous surface vessels (ASVs) for ports, harbors and other marine and water environments. The company is known for the WasteShark. This device floats on the water surface of rivers, ports and marinas to collect plastics, bio-waste and other debris [10]. It currently operates at coasts, in rivers and in harbors around the world - also in the Netherlands. The idea is to collect the plastic waste before a tide takes it out into the deep ocean, where the waste is much harder to collect.

The WasteShark in action[10].

WasteSharks can collect 200 liters of trash at a time, before having to return to an on-land unloading station. They also charge there. The WasteShark has no carbon emissions, operating on solar power and batteries. The batteries can last 8-16 hours. Both an autonomous model and a remote-controlled model are available [10]. The autonomous model is even able to collaborate with other WasteSharks in the same area. They can thus make decisions based on shared knowledge [11]. An example of that is, when one WasteShark senses that it is filling up very quickly, other WasteSharks can come join it, for there is probably a lot of plastic waste in that area.

This concept does seem to tick all the boxes (autonomous, energy neutral, and scalable) set by The Dutch Cleanup. A fully autonomous model can be bought for under $23000 [12], making it pretty affordable for governments to invest in.

The autonomous WasteShark detects floating plastic that lies in its path using laser imaging detection and ranging (LIDAR) technology. This means the WasteShark sends out a signal and measures the time it takes until a reflection is detected [13]. From this, the software can figure out the distance of the object that caused the reflection. The WasteShark can then decide to approach the object, or to stop or back up a little in case the object is coming closer [12], probably for self-protection. The design of the WasteShark makes it so that plastic waste can easily go in, but can hardly come out. The only moving parts of the design are two thrusters which propel the WasteShark forward or backward [11]. This means that the design is very robust, which is important in the environment it is designed to work in.
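The ranging principle itself is a one-line computation; the sketch below illustrates the time-of-flight idea, not RanMarine's software:

  # LIDAR ranging: distance follows from the round-trip time of the reflection.
  SPEED_OF_LIGHT = 299_792_458.0          # m/s

  def distance_m(round_trip_seconds):
      return SPEED_OF_LIGHT * round_trip_seconds / 2.0   # halved: there and back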

The fully autonomous version of the WasteShark can also simultaneously collect water quality data, scan the seabed to chart its shape, and filter the water from chemicals that might be in it [12]. These extra measurement devices and gadgets are offered as add-ons. To perform autonomously, this design also has a mission planning ability. In the future, the device should even be able to construct a predictive model of where trash collects in the water [11]. The information provided by the Waste Identifier can be used by RanMarine Technology in the future to guide the WasteShark to areas with a high number of litter.

Noria

Noria focuses on the development of innovative methods and techniques to tackle the plastic waste problem in the water, from the moment the plastic ends up in the water until it reaches the sea [14]. In the figure below, the system of Noria can be seen. It works with a large rotating mill, of which the blades consist of sieves. As the blades rotate, driven by an electric motor, macroplastics and other debris are removed from the top layer of the water. Eventually the waste ends up in the middle of the machine, where it falls into a storage bin. Via Rijkswaterstaat, contact has been made with the founder and owner of Noria, Rinze de Vries, who is interested in working together on this project. Therefore, it was decided to apply an image recognition system on the Noria system, to detect the amount and type of waste it collects.

System of Noria in action [14].

A pilot has been executed with the Noria. This pilot is aimed at testing a plastic catch system in the lock of Borgharen. The following conclusions can be drawn from this pilot:

  • More than 95[%] of the waste released into the lock was taken out of the water with the Noria system. This applies to waste as well as organic waste with a size of 10 to 700 mm.
  • At this moment, it is quite a challenge to drain the waste from the system.

Requirements

For the Waste Identifier, a number of requirements have been set, listed below. In order to make the requirements concrete and relevant, it was decided to contact potential users. One of the users, Rijkswaterstaat, responded to the request and allowed an interview with one of their employees, Ir. Brinkhof, a project manager specialized in the region of the Maas with insight into all its projects and maintenance. Another interview was conducted with Ir. Rinze de Vries, the owner of Noria. Both interviews can be found at the end of this page in the section 'Conducted Interviews'. Based on these interviews, the following requirements have been set.

Requirements for the Software

  • The program should be able to identify and classify different types of waste;
  • The program should be able to count the amount of each waste type that flows into the Noria;
  • The program should be able to identify and count waste in the water correctly for at least 90[%] of the time;
  • Data should be converted to information;
  • The same piece of waste should not be counted multiple times. The same threshold of counting 90[%] correctly applies here.

Requirements for the Design

  • The design should be weatherproof;
  • The design should operate at all moments when Noria is also operating;
  • The design should be robust, so it should not be damaged easily;
  • The design should not interfere with the rotating parts of the Noria;
  • The design should have its own power source.

Finally, literature research about the current state of the art must be provided. At least 25 sources must be used for the literature research of the software and design.

Planning

Approach

For the planning, a Gantt chart has been created with the most important goals and subgoals that need to be tackled. The group is split into people who create the design and applications of the Waste Identifier, and people who work on the creation of the neural network. The overall view of the planning is that in the first two weeks a lot of research has to be done, among other things for the problem statement, the users and the current technology. In the second week, more information about different types of neural networks and the working of their layers should be investigated to gain more knowledge. This could also require installing multiple packages or programs on laptops, which takes time to test. During this second week, a dataset should be found or created that can be used to train the model. If it cannot be found online and thus has to be created, this will take much more time than one week; the hope is to have it finished after the third week. After week 5, an idea of the design should be elaborated with drawings or digital visualizations. All candidate neural networks should also be elaborated and tested, so that in week 8 conclusions can be drawn about the best working neural network. This means that in week 8 the wiki page can be finished with a conclusion and discussion about the neural network that should be used and about the working of the device. Finally, week 9 is used to prepare the presentation.

The activities are subdivided between the neural network/image recognition and the design of the device. Kevin and Lotte will work on the design of the device, and Menno, Marijn and Dennis will work on the neural networks.

Project planning.

Milestones

Week Milestones
1 (April 20th till April 26th) Gather information and knowledge about chosen topic.
2 (April 27th till May 3rd) Further research on different types of neural networks and having a working example of a neural network.
3 (May 4th till May 10th) Elaborate the first ideas of the design of the device and find or create a usable database.
4 (May 11th till May 17th) First findings of correctness of different neural networks and tests of different types of neural networks.
5 (May 18th till May 24th) Conclusion of the best working neural network and making final visualisation of the design.
6 (May 25th till May 31st) First set-up of wiki page with the found conclusions of neural networks and design with correct visualisation of the findings.
7 (June 1st till June 7th) Creation of the final wiki-page.
8 (June 8th till June 14th) Presentation and visualisation of final presentation.

Deliverables

  • Design of the Waste Identifier
  • Software for image recognition
  • Complete wiki-page
  • Final presentation

State-of-the-Art

Quantifying Waste

Plastic debris in rivers has been quantified before in three ways [15]: by quantifying the sources of plastic waste, by quantifying plastic transport through modelling, and by quantifying plastic transport through observations. The last one is most in line with what will be done in this project. No uniform method for counting plastic debris in rivers exists, so several plastic monitoring studies each devised their own way to do so. The methods can be divided into 5 subcategories [15]:

1. Plastic tracking: Using GPS (Global Positioning System) to track the travel path of plastic pieces in rivers. The pieces are altered beforehand so that the GPS can pick up on it. This method can show where cluttering happens, where preferred flowlines are, etc.

2. Active sampling: Collecting samples from riverbanks, beaches, or from a net hanging from a bridge or a boat. This method does not only quantify the plastic transport, it also qualifies it - since it is possible to inspect what kinds of plastics are in the samples, how degraded they are, how large, etc. This method works mainly in the top layer of the river. The area of the riverbed can be inspected by taking sediment samples, for example using a fish fyke [16].

3. Passive sampling: Collecting samples from debris accumulations around existing infrastructure. In the few cases where infrastructure to collect plastic debris is already in place, it is just as easy to use them to quantify and qualify the plastic that gets caught. This method does not require any extra investment. It is, like active sampling, more focused on the top layer of the plastic debris, since the infrastructure is too.

4. Visual observations: Watching plastic float by from on top of a bridge and counting it. This method is very easy to execute, but it is less certain than other methods, due to observer bias, and due to small plastics in a river possibly not being visible from a bridge. This method is adequate for showing seasonal changes in plastic quantities.

5. Citizen science: Using the public as a means to quantify plastic debris. Several apps have been made to allow lots of people to participate in ongoing research for classifying plastic waste. This method gives insight into the transport of plastic on a global scale.

Automatic Visual Observations

Cameras can be used to improve visual observations. One study did such a visual observation on a beach, using drones that flew about 10 meters above it. Based on input from cameras on the UAVs, plastic debris could be identified, located and classified (by a machine learning algorithm) [17]. Similar systems have also been used to identify macroplastics on rivers.

Another study made a deep learning algorithm (a CNN - to be exact, a "Visual Geometry Group-16 (VGG16) model, pre-trained on the large-scale ImageNet dataset" [18]) that was able to classify different types of plastic from images. These images were taken from above the water, so this study also focused on the top layer of plastic debris.

The plastic debris in these images was automatically classified by a deep learning algorithm.

The algorithm had a training set accuracy of 99[%]. But that does not say much about the performance of the algorithm, because it only says how well it categorizes the training images, which it has seen lots of times before. To find out the performance of an algorithm, it has to look at images it has never seen before (so images that are not in the training set). The algorithm recognized plastic debris on 141 out of 165 brand new images that were fed into the system [18]. That leads to a validation accuracy of 86[%]. It was concluded that this shows the algorithm is pretty good at what it should do.

Their improvement points are that the accuracy could be even higher and more different kinds of plastic could be distinguished, while not letting the computational time be too long.

Image Recognition

Over the past decade or so, great steps have been made in developing deep learning methods for image recognition and classification [19]. In recent years, convolutional neural networks (CNNs) have shown significant improvements on image classification [20], and it has been demonstrated that representation depth is beneficial for classification accuracy [21]. Another method is the use of VGG networks, known for their state-of-the-art performance in image feature extraction. Their setup consists of repeated patterns of 1, 2 or 3 convolution layers and a max-pooling layer, finishing with one or more dense layers. The convolutional layer transforms the input data to detect patterns, edges and other characteristics in order to be able to classify the data correctly. The main parameters with which a convolutional layer can be changed are the activation function and the kernel size [21].

There are still limitations to the current image recognition technologies. First of all, most methods are supervised, which means they need big amounts of labelled training data that have to be put together by someone [19]. This can be solved by using unsupervised instead of supervised deep learning: instead of large labelled databases, only some labels would be needed to make sense of the world. Currently, no unsupervised methods outperform supervised ones, because supervised learning can better encode the characteristics of a dataset. The hope is that in the future unsupervised learning will provide more general features, so that any task can be performed [22]. Another problem is that small distortions can sometimes cause a wrong classification of an image [19] [23]; this can already be caused by shadows on an object, which create color and shape differences [24]. A different pitfall is that the output feature maps are sensitive to the specific location of the features in the input. One approach to address this sensitivity is to use a max pooling layer. Max pooling layers reduce the number of pixels in the output of the previously applied convolutional layer(s); the pool size determines the number of input pixels that is turned into one output pixel. Using this has the effect of making the resulting downsampled feature maps more robust to changes in the position of the feature in the image [21].

Neural Networks

Neural networks are a set of algorithms that are designed to recognize patterns. They interpret sensory data through machine perception, labeling or clustering raw input. The patterns they recognize are numerical, contained in vectors. Real-world data, such as images, sound, text or time series, need to be translated into such numerical data to process it [25].

There are different types of neural networks [26]:

  • Recurrent neural network: Recurrent neural networks, also known as RNNs, are a class of neural networks that allow previous outputs to be used as inputs while having hidden states. These networks are mostly used in the fields of natural language processing and speech recognition [27].
  • Convolutional neural networks: Convolutional neural networks, also known as CNNs, are used for image classification.
  • Hopfield networks: Hopfield networks are used to collect and retrieve memory like the human brain. The network can store various patterns or memories. It is able to recognize any of the learned patterns by uncovering data about that pattern [28].
  • Boltzmann machine networks: Boltzmann machines are used for search and learning problems [29].

Convolutional Neural Networks

In this project, the neural network should retrieve data from images. Therefore a convolutional neural network could be used. Convolutional neural networks are generally composed of the following layers [30]:

Layers in a convolutional neural network.

The convolutional layer transforms the input data to detect patterns, edges and other characteristics, in order to be able to classify the data correctly. The main parameters with which a convolutional layer can be changed are the activation function and the kernel size. Max pooling layers reduce the number of pixels in the output of the previously applied convolutional layer(s); max pooling is applied to reduce overfitting. A problem with the output feature maps is that they are sensitive to the location of the features in the input. One approach to address this sensitivity is to use a max pooling layer: it makes the resulting downsampled feature maps more robust to changes in the position of the feature in the image. The pool size determines the number of pixels from the input data that is turned into one pixel of the output data. Fully connected layers connect all input values via separate connections to an output channel. Since this project deals with a binary problem, the final fully connected layer will consist of one output.

Stochastic gradient descent (SGD) is the most common and basic optimizer used for training a CNN [31]. It optimizes the model parameters based on the gradient information of the loss function. However, many other optimizers have been developed that could give a better result. Momentum keeps the history of the previous update steps and combines this information with the next gradient step to reduce the effect of outliers [32]. RMSProp also tries to keep the updates stable, but in a different way than momentum; it also takes away the need to tune the learning rate [33]. Adam takes the ideas behind both momentum and RMSProp and combines them into one optimizer [34]. Nesterov momentum is a smarter version of the momentum optimizer that looks ahead and adjusts the momentum accordingly [35]. Nadam is an optimizer that combines RMSProp and Nesterov momentum [36].
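As an illustration of such a stack of layers, a minimal Keras-style sketch is given below. The layer sizes, input shape and the choice of the Adam optimizer are example values, not the configuration used in this project.

  # Minimal CNN sketch following the pattern above: convolution -> max pooling,
  # repeated, then dense layers ending in one output for a binary problem.
  from tensorflow.keras import layers, models

  model = models.Sequential([
      layers.Conv2D(32, kernel_size=3, activation="relu", input_shape=(128, 128, 3)),
      layers.MaxPooling2D(pool_size=2),       # downsampling: more robust to feature position
      layers.Conv2D(64, kernel_size=3, activation="relu"),
      layers.MaxPooling2D(pool_size=2),
      layers.Flatten(),
      layers.Dense(64, activation="relu"),
      layers.Dense(1, activation="sigmoid"),  # single output (binary classification)
  ])
  model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])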

YOLO

YOLO is a deep learning algorithm which came out in May 2016. It is popular because it is very fast compared with other deep learning algorithms [37]. YOLO uses a completely different approach than prior detection systems, which apply a model to an image at multiple locations and scales and consider high-scoring regions of the image to be detections. YOLO applies a single deep convolutional neural network to the full image. This network divides the image into a grid of cells, and each cell directly predicts a bounding box and object classification [38]. These bounding boxes are weighted by the predicted probabilities [39].


The newest version of YOLO is YOLO v3. It uses a variant of Darknet for training and testing. Darknet originally has 53 layers trained on ImageNet. For the task of detection, 53 more layers are stacked onto it. In total, this means that a 106 layer fully convolutional underlying architecture is used for YOLO v3. In the figure below it can be seen what the architecture of YOLO v3 looks like [40].

YOLO network structure [40].

LabelImg

The network needs to be trained on images of the objects that it should learn to identify. These training images need to be labeled to assign them to a certain class. This can be done with LabelImg, a graphical image annotation tool which can be seen below. The objects are identified manually by drawing a rectangular box around each of them and assigning it a label.

LabelImg.
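For example, when exporting in the YOLO text format, LabelImg writes one line per labelled object; the class ids and values below are illustrative, not taken from the project's dataset:

  0 0.512 0.430 0.210 0.380
  1 0.275 0.655 0.120 0.200

Each line contains the class id, the box center x and y, and the box width and height, all relative to the image size. Here class 0 could stand for 'plastic' and class 1 for 'can'.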

In the end, the network should be able to detect the objects it was trained on. This can be done with different input formats: photos, videos or a live webcam feed. In the figure below, an example of the working of the network can be seen. First, the network divides the image into regions and predicts the bounding boxes and probabilities for each region. Then, these bounding boxes are weighted by the predicted probabilities.

Example object detection [41].

Further Exploration

Location

Rivers are seen as a major source of debris in the oceans [42]. The tide has a big influence on the direction of the floating waste: during low tide the waste flows towards the sea, and during high tide it can flow over the river towards the river banks [43].

A big consequence of plastic waste in rivers, seas, oceans and on river banks is that many animals mistake plastic for food, often resulting in death. There are also economic consequences: more waste in the water makes water purification more difficult, especially because of microplastics, and it costs extra money to purify the water. Cleaning up waste in river areas also costs millions a year [44].

A large-scale investigation has taken place into the wash-up of waste on the banks of rivers. At the river banks of the Maas, an average of 630 pieces of waste per 100 meters of river bank was counted, of which 81[%] is plastic. Some measurement locations showed more than 1200 pieces of waste per 100 meters of riverbank and can be marked as hotspots. A big concentration of these hotspots can be found on the riverbanks of the Maas in the south of Limburg, where a lot of waste originating from France and Belgium flows into the Dutch part of the Maas. Evidence for this is the great amount of plastic packaging with French texts. In these hotspots the proportion of plastic is even higher: 89[%] instead of 81[%] [43].

The Waste Identifier should help tackle the problem of the plastic soup at its roots: the rivers. Because of the high plastic concentration in the Maas in the south of Limburg, the image recognition module will specifically be designed for this part of the Maas. The Noria is often placed in locks to make sure it does not interfere with other water traffic, so the focus will be on those specific parts of the river Maas in the south of Limburg.

Waste

Extensive research into the amount of waste on the river banks of the Maas has been executed [43]. As explained before, waste in rivers can float into the oceans or end up on river banks. The counted amount of waste on the river banks of the Maas is therefore only a part of the total amount of litter in the river, since another part flows into the ocean. Exact numbers on how much flows into the oceans are not available. However, it is certain that in the south of Limburg an average of more than 1200 pieces of waste per 100 meters of riverbank of the Maas was counted, of which 89[%] is plastic.

A top 15 was made of the types of waste encountered most. The most commonly found type of plastic is indefinable pieces of soft/hard plastic and plastic film smaller than 50 [cm], including styrofoam. These indefinable pieces also include nurdles: small plastic granules that are used as a raw material for plastic products. Again, the south of Limburg has the highest concentration of this type of waste, because there are relatively more industrial areas there. Another big part of the counted plastics is disposable plastic, often used as food and drink packaging; in total, 25[%] of all encountered plastic is disposable plastic from food and drink packages.

Only litter that has washed up on the riverbanks has been counted. The Waste Identifier can help with monitoring the waste flow in the water of the rivers to get a more complete view of hotspots and often encountered waste types.

Image Database

The CNN or YOLO can be pretrained on the large-scale ImageNet dataset. Due to this pre-training, the model has already learned certain image features from a large dataset. Secondly, the neural network should be trained on a database specific to this subject. This database should be randomly divided into three groups. The biggest group is the training data, from which the neural network learns patterns; its predictions are then checked on the second group, the validation data. Once the validation data has been analyzed, a new epoch is started. When a final model has been created, the third group, the test dataset, can be used to analyze its performance.

It is difficult to find a database that corresponds perfectly to our subject. First of all, a big dataset of plastic waste in the ocean is available [45]. Second, a big dataset of plastic shapes can be used; although these images are not of waste in the water, they can still be useful [46]. Using image preprocessing, it could still be possible to find corresponding plastic shapes in the pictures that the camera takes of the water. Lastly, a dataset can be created by ourselves.
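As a sketch of such a random three-way split (the 80/10/10 ratio is an assumption; the exact ratios are not fixed in this report):

  # Randomly divide the image files into training, validation and test sets.
  import glob
  import random

  images = sorted(glob.glob("dataset/*.jpg"))
  random.seed(42)                              # fixed seed for a reproducible split
  random.shuffle(images)

  n = len(images)
  train = images[: int(0.8 * n)]               # largest group: training data
  val   = images[int(0.8 * n) : int(0.9 * n)]  # checked after every epoch
  test  = images[int(0.9 * n) :]               # only used on the final model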

Neural Network Design

Because of the higher frame rate that can be achieved with YOLO in comparison to a plain CNN, YOLO has been chosen as the object detection method. A dataset, which will be explained further below, is labelled and used for training and validation. The training is done using Google Colab, so that an external GPU, made available by Google, can be used to improve the training speed. Weights from Darknet, the framework behind YOLO, are downloaded and iteratively updated to fit the database and reach the lowest validation loss. However, a connection to Google Colab can only be kept for 12 hours; because of this, the training has been done multiple times, each time restarting from the final weights of the previous run. By doing this, the validation loss has been reduced to 0.045. These weights have finally been tested on new images and videos, to verify the low loss. Counting software has also been created. For this, it is assumed that there is a current in the water and that new waste objects only appear from the top of the frame: if an object is detected and no object is higher in the frame than this object, it is a new object and it is counted.
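The counting rule described above can be sketched as follows. This is hypothetical code, not the script from the repository; the per-10-frames interval and the "top part of the image" condition come from the evaluation section, and the size of the top band is an assumed value.

  # An object is counted as new when it sits in the top band of the frame and
  # no other detection lies above it (y grows downward in image coordinates).
  TOP_BAND = 100     # pixels from the top of the frame (assumed value)

  def update_count(frame_idx, detections, count):
      # detections: list of y-coordinates of detected box centers in this frame
      if frame_idx % 10 != 0:          # detection runs on every 10th frame
          return count
      for y in detections:
          nothing_above = all(other >= y for other in detections)
          if y < TOP_BAND and nothing_above:
              count += 1               # assumed new object entering from the top
      return count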

Although part of the code for testing and training has been obtained from the PySource blog [47], it had to be adapted to our problem statement and number of classes. Besides that, code for counting and for changing the label numbers to our specific problem has been created. The files can be downloaded from the following GitHub link: https://github.com/mennocromwijkk/Robots_Everywhere_3. Here, the file "bottle_and_can_Train_YoloV3 .ipynb" can be used to train the model in Google Colab; the zip file "obj.zip" needs to be placed in a folder named "yolov3" in Google Drive. The file "yolo_object_detection.py" can be used for the detection of images, and "real_time_yoloV2.py" for the detection of videos, to which the counting software has been added.

Data Augmentation

The dataset does not contain as many images as desired. If there is not enough data, neural networks tend to over-fit to the little data there is, which is undesirable. That is why some way has to be found to increase the size of the dataset. One way to do so is data augmentation [48][49]: not only the original images are fed into the neural network, but also slightly altered versions. Alterations include:

  • Translation
  • Rotation
  • Scaling
  • Flipping
  • Illumination
  • Overlapping images
  • Gaussian noise, etc.
Different uses of data augmentation. Every image is a completely new one to the neural network.

Every altered image counts as completely new data for the neural network, which is why it is able to train using this duplicated data without over-fitting to it.

Neural networks benefit from having more data to train on, simply because the classifications become stronger with more data. On top of that, neural networks that are trained on translated, resized or rotated images are much better at classifying objects that are slightly altered in any way (this is called invariance). In the case of waste in water, training a neural network to be invariant makes a lot of sense: there is no saying whether a piece of waste will be upside-down, slightly damaged, not fully visible, etc. Data augmentation code has been written and can be applied once the dataset is final. It can be found via the following GitHub link: https://github.com/mennocromwijkk/Robots_Everywhere_3 as "data_aug.py".
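A minimal sketch of this kind of augmentation is given below. The parameter ranges are illustrative, not the ones in data_aug.py, and in the real pipeline the bounding-box labels must of course be transformed along with the image.

  # Randomly alter an image slightly, so it counts as new data for the network.
  import random
  from PIL import Image, ImageOps

  def augment(img):
      img = img.rotate(random.uniform(-10, 10))    # slight random rotation
      if random.random() < 0.5:
          img = ImageOps.mirror(img)               # random horizontal flip
      scale = random.uniform(0.9, 1.1)             # slight random scaling
      w, h = img.size
      return img.resize((int(w * scale), int(h * scale)))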

Dataset

The idea is that a test setup will be created and placed in a black reservoir. Since the final proof-of-concept test setup will be similar, it makes sense that the dataset should have similar conditions. This is why the dataset will consist of self-taken pictures using a similar setup (position, camera angle, lighting) as the test setup, and of an online dataset which contains images of waste outside the water. This way, a dataset that is large enough to train the neural network is obtained. The final dataset can be found via the following GitHub link: https://github.com/mennocromwijkk/Robots_Everywhere_3.

The self-taken pictures will be made from a very slight angle (so not directly above the plastic), in reasonable shade, with as few reflections as possible, to avoid confusing the neural network. The number of useful images can later be increased using data augmentation. Different types of river waste will be gathered and submerged in the water of the reservoir. There will be images of:

  • Plastic bottles
  • Drinking cans

The ground truth of the images will be labelled by hand using the labelling program labelImg, with which the position of each object can also be indicated. More than one type of waste can appear in one image. There will also be some noise in the water, to make it a bit harder for the neural network to recognize the waste. This noise will come in the form of leaves, similar to the noise that will be faced on Noria's actual installation.

This means that our final product will be mostly a proof-of-concept. If the idea is actually realized on the Noria, it is advised to recreate the dataset in its river environment, so that the neural network does not get confused over any sudden changes. Given the effectiveness in the black vat setup, the possible effectiveness of the neural network in Noria's environment can be discussed at the end of the project.

Photos

Eventually, 78 photos containing bottles and cans were taken by ourselves to train the neural network on, together with an online dataset of plastic bottles and cans on a white background [46]. The latter is a TrashNet database that is open to everybody and was downloaded from GitHub. The photos were compressed to 500 x 500 pixels using a Photoshop script. See the examples below.

4 of the photos that were taken, and 4 augmented photos.

Data augmentation was applied to this set of photos, expanding the dataset to 529 photos. In the Data Augmentation, use was made of:

  • Scaling
  • Translation
  • Rotation
  • Flipping

Each of these augmentations was applied at random. The random ranges were kept very slight, so that few problems occurred with trash being stretched out unrealistically. The resulting dataset is only 18 MB and can be found via the following GitHub link: https://github.com/mennocromwijkk/Robots_Everywhere_3. The augmented images were labelled by hand as 'plastic' and 'can', using labelImg. They were then zipped into a folder and uploaded to Google Colab to start training.

Test Plan

Goal:

Test the amount of correctly identified and counted waste pieces in the water.

Hypothesis:

At least 90[%] of the waste will be identified and counted correctly out of at least 50 images and a video of waste in water.

Materials:

  • Camera
  • Different types of waste
  • Image recognition software
  • Reservoir with water

Method:

  • Throw different types of waste in the water
  • Take at least 50 different images of this from above, with the camera (there can be more pieces of waste within one image)
  • Make a video of the floating waste
  • Add the images to a folder
  • Run the image recognition software
  • Analyze how many pieces of waste are correctly identified and counted

Note: due to limited resources it was not possible to make one long video of floating waste, so separate videos were made and placed one after the other. To get more reliable results in the future, more images and videos can be used.

Testing Results (photos)

New test photos were taken of individual waste items in the reservoir, and of a very crowded reservoir filled with lots of trash. This test dataset can also be found on GitHub (https://github.com/mennocromwijkk/Robots_Everywhere_3). The idea was to see whether the current neural network could also handle crowded photos of trash, given that recognizing individual items might be fairly easy for it.

Most of the test photos gave correct results. The test photos of the individual waste items all worked perfectly. However, in some of the crowded photos, one or maybe two items were missed. Sometimes this was caused by the object being a little out of frame. In other cases, the object was behind the surrounding objects, making it too hidden to be recognized.

4 examples of image recognition on the crowded test photos.

Waste being out of frame should not be an issue in the final application of the image recognition, since the camera will film over the entire width where the trash can be. Waste being too close together might cause a problem on the Noria though, as clogging of waste is a realistic problem. This problem could be (mostly) solved by training on very crowded images, which will force the neural network to look for smaller parts of waste hidden behind other parts.

Testing Results (videos)

The most important part of this project is to visualize the amount and type of waste that is being removed from the water. This is done using the object detection software: a script was written that classifies the waste and also counts the number of waste objects that are removed from the water. The same neural network that is used for the photos can be used for this purpose; it interprets each frame individually.

To test whether the counting works, new videos were recorded in which the camera moves over the trash, making it appear as if the trash is moving from top to bottom. In the video below, a demonstration of the counting software can be seen. At this moment the software is kept relatively simple, because that is enough to show the working of the concept and there was not enough time to make a more complicated, better working script.

Demonstration of counting software.

In the second video it can be seen that some objects are counted twice. This occurs because the object is not detected in some frames. To solve this, tracking software can be used so the object can be followed. The object could then be counted when it passes a certain line (see https://www.youtube.com/watch?v=WcKx9u6XmDI) or when it is in a certain box (see https://www.youtube.com/watch?v=3Tw7q0YdcHA).

Two objects are counted twice.
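
A minimal sketch of that proposed line-crossing approach; detect() stands in for the trained YOLO network and is assumed to return the box centroids found in a frame, and all names and thresholds here are hypothetical:

  # Track detections between frames and count an object once its
  # centroid crosses a horizontal line; detect() and video_frames are
  # assumed stand-ins for the network and the frame source.
  import math

  COUNT_LINE_Y = 300   # y-coordinate of the counting line [pixels]
  MATCH_RADIUS = 50    # max centroid movement between frames [pixels]

  def update(tracks, centroids):
      """Greedily match new centroids to existing tracks."""
      new_tracks = []
      for cx, cy in centroids:
          best = None
          for t in tracks:
              d = math.hypot(cx - t["x"], cy - t["y"])
              if d < MATCH_RADIUS and (best is None or d < best[0]):
                  best = (d, t)
          if best:
              t = best[1]
              tracks.remove(t)
              t.update(prev_y=t["y"], x=cx, y=cy)
              new_tracks.append(t)
          else:
              new_tracks.append({"x": cx, "y": cy, "prev_y": cy,
                                 "counted": False})
      # Keep unmatched tracks alive, so a missed detection in one frame
      # does not cause a double count (a real script would also expire
      # stale tracks after a while).
      return new_tracks + tracks

  count = 0
  tracks = []
  for frame in video_frames:
      tracks = update(tracks, detect(frame))
      for t in tracks:
          if not t["counted"] and t["prev_y"] < COUNT_LINE_Y <= t["y"]:
              t["counted"] = True   # count each object only once
              count += 1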

Design

Besides the image recognition program, the module itself will need to meet the requirements mentioned at the beginning of this page. There, it is mentioned that the robot should operate whenever the Noria is operating. This means that battery life should be long, or some kind of power generation must be present at the Noria itself. Also, the design should be weatherproof and robust. The robot will need certain functionalities to meet these requirements. The focus will be on specific parts of the robot that are essential to its operation. These include:

  • Image recognition hardware
  • Data transfer
  • Power source
  • General assembly

Image Recognition Hardware

The camera should be weatherproof. Also, the device should not run out of energy. Besides, it must be possible to retrieve the images from the camera, to be able to use them for image recognition. Finally, the quality should of course be high enough for the image recognition to work well. A commonly used camera is the GoPro. The GoPro Hero6, Hero7 and Hero8 can be powered externally, also with a weatherproof connection [50] [51]. The internal battery can be left in place as a safety net in case external power cannot be provided. Without an internal battery, the camera turns off when the external power flow stops and does not turn back on automatically when the power source is restored. With an internal battery it switches seamlessly when necessary. The disadvantage is of course that the internal battery can also run out of power. GoPros do not offer very long battery life when shooting for a long time; however, there are ways to improve this, which will be elaborated on in the next part. For now, the focus is on the resolution the GoPro cameras have to offer. The newest GoPro, the GoPro Hero8 Black, takes photos in 12MP and makes video footage (including timelapses) in 4K up to 60fps. Additionally, it has improved video stabilization, called HyperSmooth 2.0, which can come in handy when there are more waves, e.g. in rougher weather [52]. However, many external extensions (like additional power sources from external companies) are not compatible with the newest GoPros yet. The GoPro Hero7 Black has about the same specs when it comes to image and video quality. It also has video stabilization, but an older version, namely HyperSmooth [53]. More extensions are possible for the GoPro Hero7 Black, so it is better to use that version.

GoPros are compact and relatively cheap compared to DSLR (Digital Single Lens Reflex) cameras. However, as mentioned before, battery life can be an issue. Therefore, another option could be the Cyclapse Pro, which can also come with extensions such as solar panels. It has a built-in Nikon or Canon camera, which can provide higher quality [54]. The standard camera is the Canon T7, which provides 24.1MP pictures and full HD video at 30 fps [55]. The camera itself is $700 USD (about twice the price of a GoPro), and the costs increase quickly when additional components are bought. The complete Cyclapse Pro includes a DigiSnap Pro controller with Bluetooth to enable time-lapsing, a Cyclapse weatherproof housing and a lithium-ion battery [54]. Because of this, the Cyclapse Pro module costs over $3000 USD. Also, the module is not as compact as a GoPro, since DSLR cameras themselves are already much larger than GoPros. Before a choice can be made between the two options, data transfer options and additional power sources must be considered.

Data Transfer

A GoPro creates its own Wi-Fi signal to which a phone can be connected using the GoPro app. Data could then be sent from there to a computer. Another option could be Auto Upload, which is part of GoPro Plus: for a monthly or yearly fee, the GoPro automatically uploads its footage to the cloud [56] [57]. However, this works through the GoPro app, which requires a mobile device, while the image recognition itself uses a computer. Also, when auto-uploading to the cloud, the images/videos are not deleted from the GoPro's internal storage. Automatic deletion would be necessary for unattended operation, since otherwise the GoPro storage fills up quickly. Besides, it is not completely clear whether Auto Upload requires the GoPro and the mobile device to be connected to the same Wi-Fi network. Finally, to auto-upload, the GoPro must be connected to a power source and charged to at least 70[%], and it may not be possible to always keep the battery above this 70[%].

A solution could be to equip the GoPro with the FlashAir™ W-04 wireless SD card. This SD card can store up to 64 GB of data. The card can be accessed with a phone or laptop, after which the pictures have to be saved manually. The pictures can then be used for the image recognition. Alternatively, a normal SD card could be used, but this requires the SD card to be swapped manually at certain times.

The Cyclapse Pro also offers Wi-Fi options for data transfer [58]. The DigiSnap Pro within the Cyclapse Pro can transfer images from the camera to an FTP (File Transfer Protocol) server on the local network or the internet. The DigiSnap Pro most commonly uses FTP image transfers via USB cellular modems and local USB download. The DigiSnap Pro also provides an Android app, in which every image taken by the camera can be configured to automatically transfer to a specified FTP folder location on the internet using a USB cellular modem.
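
As an illustration of what such an FTP transfer amounts to, a minimal sketch using Python's ftplib; the server address, credentials and folder names are hypothetical placeholders:

  # Upload captured images to an FTP server; the server, credentials
  # and paths below are hypothetical placeholders.
  from ftplib import FTP
  from pathlib import Path

  with FTP("ftp.example.com") as ftp:
      ftp.login(user="noria", passwd="secret")
      ftp.cwd("/uploads")
      for image in Path("captures").glob("*.jpg"):
          with open(image, "rb") as f:
              ftp.storbinary(f"STOR {image.name}", f)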

Power Source

GoPro with SolarX extension[59].
Cyclapse with solar panel extension[60].

CamDo offers an add-on for the GoPro Hero3 to Hero7, called SolarX, which is a weatherproof solar panel module [59]. This enables long-term operation of GoPro cameras for time-lapse photography. It includes a 9[W] solar panel to recharge the included V50 battery. The solar panel can be upgraded to 18[W] for use in cloudy or rainy areas. The solar panel charges the included lithium polymer battery, which outputs 5[V] to power the camera and can also power other accessories within the weatherproof enclosure. The solar panel can be attached directly to the casing or placed separately for optimal usage. The complete module adds significant size to the GoPro, but within the casing there is extra space for additional accessories. Whether the camera can run indefinitely on the solar panel alone depends on the weather and the camera settings. CamDo made a calculator to determine battery life and the best setup [61]. This calculator will be used in the next section to determine whether the solar panel provides enough power to the camera if it has to record 24/7.

Cyclapse Pro also offers a solar panel extension [60]. Without a solar panel, a full battery can take around 3000 images [58]. The 20W solar panel keeps the battery charged. A second battery pack can be included to increase the duration the system operates without charging (e.g. under cloudy skies) [62]. It uses a controller, the DigiSnap Pro, to reduce battery usage and provide programming options [54]. Total costs (depending on the specific add-ons) are around $4000 USD, which is significantly more than for the GoPro.

Since the Cyclapse Pro module is much more expensive while similar performance is expected, the GoPro Hero7 Black is chosen. Some power calculations will now be done with the previously mentioned CamDo calculator to determine whether the solar panel suffices as a power source [61].

Data Storage and Energy Consumption

To be able to draw a conclusion on the power source, the data storage and energy consumption of the camera should be known. To calculate these, the time-lapse and solar power calculators from CamDo [61] are used. This gives a rough estimation of the memory and energy needed. The estimations of the calculator are taken with a margin, to be sure the setup works in practice.

The image recognition needs a video as input, with images taken one second apart. The exact fps does not matter, since the image recognition observes every single frame, but the GoPro automatically renders its time-lapse videos at 30 fps [63]. When the camera shoots 24 hours a day with a time interval of 1 second between photos, the result is a 48-minute video of 86400 frames.
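
A quick check of these figures:

  # One frame per second for a full day, rendered as 30 fps video.
  frames = 24 * 60 * 60          # 86400 frames per day
  video_seconds = frames / 30    # 2880 s of video
  print(video_seconds / 60)      # 48.0 minutes of video per day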

First, the data storage. The data will be stored on an SD card; a normal SD card was chosen. A Wi-Fi SD card has been considered, but there is too much data to transfer, so it is easier to use two normal SD cards and swap them manually. To choose an SD card, it must be known how many minutes of video fit on a card. The following table is obtained with numbers from the CamDo calculator [61].

SD card size [GB]    Minutes of video that can be saved
4                    9
8                    18
16                   37
32                   74
64                   148
128                  297
200                  464
256                  594

The eventual choice of SD card also depends on the energy source, since the energy source might have to be swapped periodically as well. It is most efficient if the SD card and the power source are swapped at the same time.

The approximated energy needed per day is 40.80[Wh], while the internal battery delivers only 4.7[Wh], meaning the GoPro needs external energy. The two options considered are a solar panel that charges an external battery, or an external battery alone that has to be swapped manually.

In the CamDo calculator, the solar irradiance can be filled in; combined with the solar panel of choice, the delivered energy is then calculated. The solar irradiance can be found in the Solar Electricity Handbook [64]. Maastricht was taken as the reference city for the solar irradiance, since it is close to the Maas. The following values are obtained in the ideal situation, meaning the solar panel always faces the sun.

Solar irradiance in Maastricht in ideal situation.

The 9[W] solar panel delivers 36[Wh] of energy on an average day in June, when solar irradiation is at its highest. This means that multiple solar panels are needed to power the external battery that powers the GoPro, even in June. In December, the solar irradiation ideally averages 1.19[kWh/m^2/day]; two solar panels then deliver a combined energy of 17[Wh]. This is not enough to constantly power the GoPro when it has to take a picture every second. Also, since the SD card needs to be swapped anyway, it is a better option to use only external batteries, which is much cheaper; see the cost comparison below, after the sketch.
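
A minimal sketch reproducing the order of magnitude of these CamDo figures; the 0.8 system-efficiency factor is an assumption, not a CamDo number:

  # Rough solar yield: rated power [W] x peak sun hours (irradiance in
  # kWh/m^2/day) x system efficiency (the 0.8 is an assumption).
  def daily_yield(panel_watts, peak_sun_hours, efficiency=0.8):
      return panel_watts * peak_sun_hours * efficiency

  need = 40.8                          # GoPro demand [Wh/day]
  december = daily_yield(2 * 9, 1.19)  # two 9 W panels in December
  print(december, december >= need)    # ~17.1 Wh -> not enough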

Item                                                   Price
2x 9[W] solar panel + 44[Wh] external battery [59]     €1761,08
2x Anker Astro E7 external battery (96.48[Wh]) [65]    €95,77

The Anker Astro E7 external battery is chosen because it is compatible with the GoPro Hero7 Black and has a very high capacity. This option is much cheaper, taking into account that a person already has to swap the SD card manually.

Now a choice of SD card can be made, since it is known that the battery has to be swapped once every two days. Two days of shooting is 96 minutes of video. The CamDo calculator assumes that the capacity used on the SD card can differ by 30[%]; therefore, a duration of (96*1.3=) 124.8 minutes is taken. This means that the card size has to be 64[GB]. The SanDisk Extreme microSDXC has been chosen, because this card is recommended by GoPro [66]. The price is €12 per card [67].
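
The same sizing as a quick check, using the roughly 0.43[GB] per minute implied by the table above:

  # Two days of footage plus the 30% margin must fit on the card.
  minutes_per_swap = 2 * 48           # battery swapped every two days
  required = minutes_per_swap * 1.3   # 124.8 minutes with margin
  print(required, required <= 148)    # fits on a 64 GB card (148 min)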

Total price

1x GoPro Hero7 Black                              €203,15
2x Anker Astro E7 external battery (96.48[Wh])    €95,77
2x SanDisk Extreme microSDXC                      €24,00
Total                                             €322,92

This price seems very reasonable; however, it is deceptive, because the costs of the travel and time of the person swapping the SD card and battery have not been taken into account yet. On busier days the Noria has to be emptied almost every day, in which case that person can also swap the battery and SD card, which only takes a few extra minutes. However, it can also occur that the Noria does not have to be emptied for days; then someone has to go to the Noria only to change the battery and SD card, which increases the costs. For now this is nevertheless the best solution, since the SolarX does not provide enough energy and there is no solution for automatic and wireless data transfer yet. In the future, other solutions to these problems could be investigated, such as hydropower as an energy source.

Lighting

In order to make sure that images of the waste can be made 24/7 for the image recognition, it is necessary to also take pictures when it is dark. Therefore, a light needs to be added to the design. This external lighting has to meet some requirements. The lighting must work from dusk till dawn. It has to have its own power source, preferably solar energy. The lighting must have at least an IP64 rating; the IP rating 'classifies and rates the degree of protection provided by mechanical casings' [68], and IP64 means that the casing is dust-tight and protected against splashing water. The external lighting must be mountable on the frame of the Noria. Lastly, the lighting needs to light up the space in front of the camera well enough for the image recognition to work. In this project it is not possible to test whether the first and last requirements are met. The first requirement is discussed later. To test the last requirement, a setup with the lighting and camera in the dark has to be made. Then many photos need to be taken in order to train the image recognition. Finally, it needs to be tested whether the image recognition recognizes at least 90[%] of the plastic at night with the lighting. This is not possible within the current time frame. Because a test is not possible, there is no 100[%] guarantee that the currently chosen lighting is suitable. However, the choice for this particular lighting is explained in detail below.

The chosen light is the Hikeren IP65 Waterproof Solar Light [69]. The first requirement is that the lighting has to work from dusk till dawn. For this requirement, the times of sunset and sunrise on the shortest day in the Netherlands are used. The longest period between sunset and sunrise runs from December 20 to 21 [70]: the sun goes down at 16:29 and rises at 8:46, meaning the lighting has to work for 16 hours and 17 minutes straight. The Hikeren light can work up to 18 hours straight when fully charged. In order to fully charge it, the solar panel needs 6.5 hours of illumination [69]. Because there are 24 hours in a day, and the longest dusk-till-dawn period takes 16 hours and 17 minutes, this still leaves 7 hours and 43 minutes for charging. This should be enough, but it has to be tested in real life. If it turns out that the lighting does not last the full night, a second light should be added; the problem to solve then is to make sure that the lights do not work at the same time, but that one light turns on just before the other turns off. This will not be investigated in this project, but should be kept in mind for a potential extension. The Hikeren light automatically switches on and off when it becomes dark or light.
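
A quick check of these dusk-till-dawn figures:

  # Darkness on the shortest night versus the light's 18 h runtime
  # and 6.5 h charge requirement.
  from datetime import datetime, timedelta

  sunset = datetime(2020, 12, 20, 16, 29)
  sunrise = datetime(2020, 12, 21, 8, 46)
  dark = sunrise - sunset
  print(dark)                        # 16:17:00, within the 18 h runtime
  print(timedelta(hours=24) - dark)  # 7:43:00 left for the 6.5 h charge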

The Hikeren IP65 Waterproof Solar Light has, as its name states, an IP rating of 65, meaning it is dust-tight and protected against water jets [68]. The rating should be at least IP64; therefore, the Hikeren solar light meets this requirement.

Finally, the lighting has to be able to be mounted on the frame. This can be done and is discussed in further detail in the assembly section.

Field Of View (FOV)

GoPro offers different FOVs (Fields Of View), which determine the area covered within a shot through the angle at which the camera shoots. A linear and a wide FOV are provided. The wide FOV has a larger angle, which causes a fisheye effect. The difference between the two FOVs is shown in the figure below, where the fisheye effect of the wide FOV can be seen clearly. It can also be seen that, because of that effect, a larger area is covered within the photo, while the linear FOV causes the picture to be cut off in comparison with the wide FOV.

The two FOVs offered by GoPro [71].

The wide FOV curves horizons and straight lines. Also, subjects in the middle of the frame look artificially big compared to the surroundings [71]. Besides, when using the wide FOV, the chance is bigger that the sun sneaks into the shot, which can make some details invisible [72]. A linear FOV corrects for the fisheye distortion by straightening horizons and verticals and narrowing the perspective. A linear FOV is often used for aerial footage from drones, or when a more 'normal' perspective is desired, so the overall look is less distorted. However, it also has a few disadvantages. The area covered within an image or frame is smaller due to the smaller angle. Also, parts near the edges can get a more stretched look, because with the linear FOV the GoPro applies a software correction for the lens distortion before saving the image to the memory card [71].

If the wide FOV is used, the image recognition will have to be trained on such pictures in order to work accurately enough. This is not a huge issue, but taking into account that the perspective is quite distorted with the wide FOV and that sizes of objects can be distorted as well, it is best to use the linear FOV. In the future, the image recognition could perhaps be expanded to also measure sizes of objects, for which a normal perspective is important. Also, with the wide FOV more of the water, and maybe even the horizon, will be within the image; the sun can then cause distortions within the images (directly or via the water), which is not desirable. With the linear FOV, this area is kept as small as possible, while the waste that floats directly into the Noria is still kept in view.

Assembly

The camera needs to be mounted to the Noria [73]. To avoid interfering with rotating parts, and for easy mounting, it is probably best to attach the camera to the shaft indicated with the green arrow in the figure below. The dimensions of the steel shaft are approximated in order to design the mounting of the camera. To determine the height of the camera, it is important to know the FOV (Field Of View) of the camera. It was decided to use the linear FOV of the GoPro, which is 102 degrees [72]. It is estimated that the camera should be able to take images across a width of around 1.2[m]. This means that the camera should be placed at a height of at least 50[cm] above the water surface for the linear FOV. The steel shaft itself is estimated to be around 30[cm] above the water, so the final design needs an arm of around 20[cm] to reach the desired height with respect to the water surface.

The Noria, with the shaft to which the camera can be mounted indicated [73].
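
The geometry behind the 50[cm] figure, assuming the 102 degrees is measured across the 1.2[m] width:

  # Mounting height from the covered width and the (horizontal) FOV.
  import math

  fov_deg = 102.0                 # linear FOV of the GoPro [degrees]
  width = 1.2                     # strip of water to cover [m]
  height = (width / 2) / math.tan(math.radians(fov_deg / 2))
  print(round(height, 2))         # ~0.49 m, so at least 50 cm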

A final design for the total assembly has been made. In view 1 the total design can be seen. A custom casing has been made which contains the camera [74] and the external battery (see view 2). The camera is held in place by some small vertical plates and a Velcro fastener (see views 3 and 4), so that it is always positioned correctly. The battery is also held in place by a Velcro fastener (see view 5). The Velcro fasteners can be looped through a small elevation within the casing (see view 4). In front of the camera lens a thin plastic plate is added, which keeps the casing weatherproof (see view 2) while still allowing video footage to be made. One half of the casing has a slot into which the other half can be slid. A rubber seal could be added to the slot to improve the watertightness of the casing (see views 3 and 5). This would give the casing an IP rating of IP65, which means the casing is dust-tight (IP6x) and protected against water coming from any direction at 12.5[L/min] (IPx5) [75]. This makes the casing not completely watertight, but it does make it weatherproof for normal conditions. When the two casing halves are slid together, they can be held in place by 4 screws at the corners of the casing (see views 2, 3 and 5).

Also, the arm that is connected to the shaft of the Noria is connected to the casing with a metal plate and 4 screws (see view 1). These 4 screws are at the same places as those of the casing, but must be screwed in from the other side. The holes for these screws do not reach the holes for the screws that hold the casing together (see view 8).

Furthermore, the arm is divided into two parts (see view 1). With the joint between the two parts, the angle of the camera can be changed slightly, if necessary. The joint can be held in place with a bolt. The arm is attached to the shaft of the Noria by means of a plate and some screws (see view 1).

In addition, a light is added to the arm (see views 1 and 7). This makes sure that useful footage can be taken 24/7. The light is powered by a solar panel (see view 1). The angle of the solar panel can be adjusted with a joint, so it can be placed optimally for a certain location. If necessary, the solar panel can also be moved: the cable is 5[m] long, so if more power can be generated at a different spot on the Noria, the placement of the panel can be adjusted.

  • View 1: Total assembly.
  • View 2: Casing with camera and battery.
  • View 3: Camera inside casing with Velcro fasteners.
  • View 4: Velcro fasteners for camera inside casing.
  • View 5: Battery inside casing with Velcro fasteners.
  • View 6: Camera and battery.
  • View 7: Light.
  • View 8: Screws for attachment arm and closing case.

From Data to Information

Stakeholders like Rijkswaterstaat do not want raw data, but information. So the data needs to be converted into useful information to satisfy stakeholders. For this, the DIKAR model can be used, which stands for data, information, knowledge, action and result [76]. Data represents the raw numbers: stored, but not organized in a way that makes them easy to process. Information arises when data is processed; it takes a form that makes it easier to understand or to find relationships in. The last three stages, knowledge, action and result, are carried out by the stakeholder. When information is understood, it becomes knowledge of the stakeholder. With this knowledge, actions can be taken that in the end give results.

Types of Information

To transform the collected data into something that is more readable, cohesive and accessible, different information types can be considered. The most interesting types of information about waste in rivers are:

  • Time-dependency
  • Waste types
  • Location

How to process data for these types is worked out below.

Time-dependency

A simple histogram (not a line plot!) can be used to show the time-dependency of waste collected by the Noria. For good, trustworthy results, data over a long period of time will be needed. To make such histograms, the data can be processed in a Jupyter notebook: first build a dataframe containing the amount of waste observed per day, then create a histogram from it using hist(), which is part of pandas.
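
A minimal sketch of such a notebook cell, assuming a CSV with one row per detected item and a 'timestamp' column (the file and column names are hypothetical):

  # Count detections per day and plot the distribution of daily counts.
  import pandas as pd

  df = pd.read_csv("noria_detections.csv", parse_dates=["timestamp"])
  per_day = df.resample("D", on="timestamp").size()   # items per day
  per_day.hist(bins=20)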

Conclusions that can be drawn from this visualization are whether there is less or more waste at night than during the day, whether there is more waste during certain seasons, during weekends, etc.

Waste types

Using a pie-chart or a bar chart, the types of observed waste can be displayed. Exact quantities for each type can be added as extra information. Again, these visualizations can be made in Jupyter, this time using plot.pie() or plot.bar() - of course, after storing the data in dataframes by type. Data can be taken from certain settings (locations, timespans) and compared very easily.

These visualizations effectively show the ratios between certain types of waste, for certain environments.

Location

The MATLAB geobubble() function is good for showing data values and types for different locations around the globe. One can input latitude and longitude coordinates as borders of the chart. While it may not be great for showing data on the scale of the Netherlands (the grey map is not very detailed), it can serve as inspiration for a good way of displaying the data by location.

MATLAB's geobubble() figure.

This way of displaying could be combined with the 'types' display, by making every circle into a (bigger) pie-chart on its own. This could be a great visualization for the Noria.

Jupyter Notebooks

It was decided that the final information delivered will be a pie chart of the waste type distribution per location. A Jupyter Notebook was written that interprets data from a .csv file and makes pie charts out of it. An example can be found below. If needed, these pie charts can be placed onto a map by hand, based on the location they belong to. A time indication could also be added above the map. It will then display the amount of each waste type, at different locations, within a certain time span.

An example of a produced pie chart.
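
A minimal sketch of the notebook's core, assuming the .csv has 'location' and 'waste_type' columns (the file layout and column names are hypothetical):

  # One pie chart of waste types per location, saved as an image.
  import pandas as pd
  import matplotlib.pyplot as plt

  df = pd.read_csv("collected_waste.csv")
  for location, group in df.groupby("location"):
      counts = group["waste_type"].value_counts()
      counts.plot.pie(title=location, autopct="%1.0f%%")
      plt.ylabel("")
      plt.savefig(f"pie_{location}.png")
      plt.close()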

Evaluation

Design

No ideal solution has been found for data transfer and the energy source. Currently, the SD card and battery need to be swapped manually, which adds costs that are not included in the project, since they cannot be determined. The SD card and battery have to be changed once every two days, but it is not always necessary for a person to drive there only to switch these components: often the container of the Noria has to be emptied anyway, although an exact frequency has not been determined yet. Therefore, the costs of travelling and switching are not taken into account.

In the future, another solution could be found for this, for example by looking at other forms of energy generation, such as hydropower. Since the Noria is placed in the Maas, where there is a current, this could be used to generate power. Making the data transfer autonomous could also be investigated. This was not possible in the time frame of this project, but could be possible in a continuation of it.

To conclude, this was the best possible design within the time frame of the project, but multiple improvements are possible with more time.

Cost Problem

The original idea was to place the Waste Identifier on the Noria. However, conversations with Rinze de Vries have shown that it is better to use the Waste Identifier in a different way than directly on the Noria. If the Waste Identifier were placed on each individual Noria, this would entail a large cost, which might make it less attractive for the user to purchase. Rinze's proposal was to apply the Waste Identifier at a place where the waste is collected. There, the waste collected by many different waste collection actions can be counted at a central location that requires only one Waste Identifier. This reduces costs considerably and makes it easier to create a dataset, due to the fixed background environment of the video images. For example, if a black conveyor belt is used and filmed from above while moving, the software only needs to be trained on images with a black background. However, there is also one disadvantage of using the Waste Identifier at a central collection point: it is also important to know at which location the waste was collected, and this becomes harder to distinguish. This can be solved by identifying each Noria's waste separately at the collection point, so that the original location is not lost in the process.

Experiment

Experiments have shown that the Waste Identifier can recognize a large part of the bottles and cans. Unfortunately, we have not been able to conduct extensive tests to draw firm conclusions about the accuracy of the Waste Identifier: we did not have enough cans and bottles to make a sufficiently long video. A solution could have been to collect plastic bottles and cans, but we came up with this idea too late and therefore did not have enough time to implement it. As can also be seen in the video, the Waste Identifier keeps track of how many plastic bottles and cans it has recognized, so that it can be used to quantify the waste problem in the water. As previously stated, the data is converted into information via a pie chart in which the amount of waste per category is displayed for a specific location.

Object Detection Software

At the moment, the object detection software is a proof of concept that shows what the idea could ultimately look like; it is not yet working optimally. It currently runs detection once every 10 frames, and an object is only counted when it is in the top part of the image. This could be improved by applying tracking: the object that comes into view is followed, and when it passes a certain line or is located in a certain box, it is counted. The network should also be trained on several other types of waste, so that a larger part of the waste problem can be identified. There are also some problems when using different backgrounds; this should be solvable by using more images with different backgrounds in the training dataset.

Conclusion

In general, it can be said that the Waste Identifier could certainly become an important tool for reducing the plastic soup. However, improvements are still needed in the design and operation of the Waste Identifier, such as better classification and counting of the waste, and enlarging the dataset with different types of waste.

Process of the Project

The process started off too ambitiously. The goal was to design an underwater robot that could recognize and identify different types of plastics. With this goal, the capabilities of the team were overestimated. After an interview with Hans Brinkhof (Rijkswaterstaat) it became clear that the initial goal was too complicated for the limited time of the project. Based on a suggestion of Hans, the focus was shifted towards an extension of the Noria, which gave the project more structure and realizable goals. After an interview with Rinze de Vries (owner of Noria), new requirements and goals were set, bringing even more structure. At that moment, four valuable weeks had passed, but from that point forward the project progressed more rapidly. For the design part, every aspect became much more concrete. The design was simplified a lot, and therefore it was possible to realize a sufficient design within the time frame. The image recognition part had to change datasets, but obtaining the new dataset also became less complicated, since it could more easily be made by hand. Because of these simplifications, a design and a working image recognition based on the self-made database were eventually realized. This would not have been possible with the original idea. After discussing the full course of the project with Rinze de Vries for feedback, a few conclusions are drawn.

Future projects should start with a good demarcation of the project: set goals and subgoals, give each goal and subgoal a corresponding question, find out which activities are related to it, and plan these activities over time.

From this project it can be learned that structure is the most important part of a project. The capability to bring structure into a project has to be developed throughout one's career. This means learning not to set the bar too high, which mainly leads to a lot of unfinished tasks. It is better to set the bar a little lower, so that all the simpler tasks can be done thoroughly and a good conclusion can be drawn. This project is a perfect example of that.

Final Video

Here is a link to the final video: https://youtu.be/6C9-dH9V9Kg

Conducted Interviews

File:Minutes interview Hans Brinkhof 19 May.pdf

File:Minutes interview Rinze de Vries 27 May.pdf

File:Minutes Feedback Moment with Rinze de Vries 12 June.pdf

Logbook

Week 1

Name Total hours Break-down
Kevin Cox 6 Meeting (1h), Problem statement and objectives (1.5h), Who are the users (1h), Requirements (0.5h), Adjustments on wiki-page (2h)
Menno Cromwijk 9 Meeting (1h), Thinking about project-ideas (4h), Working out previous CNN work (2h), creating planning (2h).
Dennis Heesmans 8.5 Meeting (1h), Thinking about project-ideas (3h), State-of-the-art: neural networks (3h), Adjustments on wiki-page (1.5h)
Marijn Minkenberg 7 Meeting (1h), Setting up wiki page (1h), State-of-the-art: ocean-cleaning solutions (part of which was moved to Problem Statement) (4h), Reading through wiki page (1h)
Lotte Rassaerts 7 Meeting (1h), Thinking about project-ideas (2h), State of the art: image recognition (4h)

Week 2

Name Total hours Break-down
Kevin Cox 4.5 Meeting (1.5h), Checking the wiki page (1h), Research and writing maritime transport (2h)
Menno Cromwijk 10 Meeting (1.5h), Installing CNN tools (2h), searching for biodiversity (4.5h), reading and updating wiki (2h)
Dennis Heesmans 6.5 Meeting (1.5h), Installing CNN tools (2h), USE analysis (3h)
Marijn Minkenberg 9 Meeting (1.5h), Checking the wiki page (1h), Installing CNN tools (2h), Research & writing WasteShark (4.5h)
Lotte Rassaerts 5.5 Meeting (1.5h), Research & writing Location and Plastic (4h)

Week 3

Name Total hours Break-down
Kevin Cox 7.5 Meeting (4h), project statement, objectives, users and requirements rewriting (3.5h)
Menno Cromwijk 16 Meeting (4h), planning (1h), reading wiki (1h), searching for database (6h), research Albatros (2h), reading and updating wiki page (2h)
Dennis Heesmans 11 Meeting (4.5h), Research & writing Plastic under water (2h), Calling IMS Services and installing keras (3h), Requirements and Test plan (1.5h)
Marijn Minkenberg 13.5 Meeting (4.5h), Research & writing Quantifying plastic waste (4h), Calling IMS Services and installing keras (3h), Reading through, and updating, wiki page (2h)
Lotte Rassaerts 13 Meeting (4h), Research & rewriting further exploration (3h), Research & writing robot requirements & functionalities (3h), Requirements and Test plan (1.5h), Start on ideas for robot design (1.5h)

Week 4

Name Total hours Break-down
Kevin Cox 10 Meeting (3h), contacting users (2h), Writing and researching localization and obstacle avoidance + contacting company about possibilities (5h)
Menno Cromwijk 11.5 Meeting (3.5h) Working on VGG model (4h) Working on tansfer learning (4h)
Dennis Heesmans 11 Meeting (4.5h), Installing Anaconda and corresponding packages (3h), Running test script (0.5h), Search database (1h), Trying to test CNN with other images (2h)
Marijn Minkenberg 11 Meeting (4.5h), Re-installing Anaconda and keras (1.5h), Running test script (0.5h), Finding 3 datasets (2.5h), Research & Writing Data Augmentation (2h)
Lotte Rassaerts 9 Meeting (3h), Thinking of/reseraching ideas/concepts for design (2h), Researching power source/battery life (4h)

Week 5

Name Total hours Break-down
Kevin Cox 9 Meeting (4.5h), Meeting with Hans Brinkhof (Rijkswaterstaat) (1h), Getting in touch with Hans Brinkhof (1h), Writing minutes regarding the meeting (0.5h), Getting in touch with Rinze de Vries (1h), meeting with Rinze de Vries (1h)
Menno Cromwijk 17.5 Meeting (4.5h), improving CNN (2h), researching and implementation of YOLO (8h), labelling dataset (2h), creating python files (1h)
Dennis Heesmans 19 Meeting (4.5h), Implement different datasets in neural network (8h), Searching datasets (1h), Adjustments planning (0.5h), Wiki cleanup (2h), Write about Noria (2h), Watch object detection videos (1h)
Marijn Minkenberg 15.5 Meeting (4.5h), Coding Data Augmentation (3h), Dataset plan (2h), Set-up and taking test photos (3h), Processing photos for dataset using Photoshop and Spyder (3h)
Lotte Rassaerts 15 Meeting (4.5h), Meeting with Hans Brinkhof (Rijkswaterstaat) (1h), Adjusting planning and requirements (0.5h), Working on design (camera and power source) (6h), Working on design (data transfer and decision matrix) (3h)

Week 6

Name Total hours Break-down
Kevin Cox 10.75 Meeting (3h), Meeting with Rinze (1h), Writing minutes regarding meeting with Rinze (0.5h), researching methods for autonomous image uploading (unsuccessful) (6h), Written final conclusion. (0.25h)
Menno Cromwijk 13 Meeting (3h), research on multiple classes and implementing this for our database (3h), combine all labeled databases with created python-files (1h), training model via Google Colab (failed) (3h) training model via Google Colab with Marijn (1h), Testing on different videos and photos (2h).
Dennis Heesmans 12 Meeting (3h), Watching videos YOLO (3h), Labeling images (4h), Meeting with Rinze (1h), Writing about YOLO (1h)
Marijn Minkenberg 12 Meeting (3h), Taking new pictures for test set (2h), Training model via Google Colab (2h), Expanding test set and labelling (4h), Updating wiki (1h)
Lotte Rassaerts 12.75 Meeting (3h), researching data transfer GoPro (3h), Data to information (1.5h), Mounting (5h), updating requirements after interview Rinze (0.25h)

Week 7

Name Total hours Break-down
Kevin Cox 10 Meeting (3h), Researching Data storage and energy consumption (6h), writing Data storage and energy consumption (1h)
Menno Cromwijk 13 Meeting (3h), re-training on entire-database (2h), resume last training to improve loss (2h), testing on multiple photos and videos (2h), working on counting software (4h).
Dennis Heesmans 13 Meeting (3h), Research Accuracy and Counting (3h), Working on counting software (7h)
Marijn Minkenberg 14 Meeting (3h), Test set photos+video (3h), Data to information (2h), Branding (1h), Converting video to low-res and 1 FPS (1h), Jupyter pie charts (4h)
Lotte Rassaerts 17 Meeting (3h), FOV + Night mode (4h), making new 3D drawings (10h)

Week 8

Name Total hours Break-down
Kevin Cox 10.75 Meeting (3h), editing wiki page + contacting Rinze de Vries (1h), Meeting with Rinze (0.5h), Writing minutes (0.25h), Researching lighting (5h), Writing lighting (1h)
Menno Cromwijk 8 Meeting (3h), working on counting software (4h), updating wiki (1h)
Dennis Heesmans 11.5 Meeting (3h), Working on counting software (4h), Meeting with Rinze (0.5h), Conclusion (4h)
Marijn Minkenberg 21 Meeting (3h), Writing: Testing results (1h), Script for video (5h), Video editing (12h)
Lotte Rassaerts 13 Meeting (3h), editing assembly and writing text (3h), cleaning up design part of wiki-page (0.5h), adding light and solar panel to design + improved assembly text (5h), text video + record text (1.5h)

Week 9

Name Total hours Break-down
Kevin Cox 2.5 Meeting (2.5h)
Menno Cromwijk 2.5 Meeting (2.5h)
Dennis Heesmans 5.5 Meeting (2.5h), Writing about counting (3h)
Marijn Minkenberg 15.5 Meeting (2.5h), Video editing & uploading (13h)
Lotte Rassaerts 8.5 Meeting (2.5h), Finalizing general wiki part (problem statement, SoTA, further exploration) and design part (2h), Final edit wiki (4h)

References

  1. Oceans. (2020, March 18). Retrieved April 23, 2020, from https://theoceancleanup.com/oceans/
  2. Wikipedia contributors. (2020, April 13). Microplastics. Retrieved April 23, 2020, from https://en.wikipedia.org/wiki/Microplastics
  3. Suaria, G., Avio, C. G., Mineo, A., Lattin, G. L., Magaldi, M. G., Belmonte, G., … Aliani, S. (2016). The Mediterranean Plastic Soup: synthetic polymers in Mediterranean surface waters. Scientific Reports, 6(1). https://doi.org/10.1038/srep37551
  4. Foekema, E. M., De Gruijter, C., Mergia, M. T., van Franeker, J. A., Murk, A. J., & Koelmans, A. A. (2013). Plastic in North Sea Fish. Environmental Science & Technology, 47(15), 8818–8824. https://doi.org/10.1021/es400931b
  5. Rochman, C. M., Hoh, E., Kurobe, T., & Teh, S. J. (2013). Ingested plastic transfers hazardous chemicals to fish and induces hepatic stress. Scientific Reports, 3(1). https://doi.org/10.1038/srep03263
  6. Stevens, A. (2019, December 3). Tiny plastic, big problem. Retrieved May 10, 2020, from https://www.sciencenewsforstudents.org/article/tiny-plastic-big-problem
  7. Peels, J. (2019). Plasticsoep in de Maas en de Waal veel erger dan gedacht, vrijwilligers vinden 77.000 stukken afval. Retrieved May 6, from https://www.omroepbrabant.nl/nieuws/2967097/plasticsoep-in-de-maas-en-de-waal-veel-erger-dan-gedacht-vrijwilligers-vinden-77000-stukken-afval
  8. Schone Rivieren. (2020, May 19). Schone Rivieren. Retrieved June 17, 2020, from https://www.schonerivieren.org/
  9. Rijkswaterstaat. (2020, June 12). Rijkswaterstaat. Retrieved June 17, 2020, from https://www.rijkswaterstaat.nl/
  10. WasteShark ASV | RanMarine Technology. (2020, February 27). Retrieved May 2, 2020, from https://www.ranmarine.io/
  11. CORDIS. (2019, March 11). Marine Litter Prevention with Autonomous Water Drones. Retrieved May 2, 2020, from https://cordis.europa.eu/article/id/254172-aquadrones-remove-deliver-and-safely-empty-marine-litter
  12. Swan, E. C. (2018, October 31). Trash-eating “shark” drone takes to Dubai marina. Retrieved May 2, 2020, from https://edition.cnn.com/2018/10/30/middleeast/wasteshark-drone-dubai-marina/index.html
  13. Wikipedia contributors. (2020, May 2). Lidar. Retrieved May 2, 2020, from https://en.wikipedia.org/wiki/Lidar
  14. Noria - Schonere wateren door het probleem bij de bron aan te pakken. (2020, January 27). Retrieved May 21, 2020, from https://nlinbusiness.com/steden/munchen/interview/noria-schonere-wateren-door-het-probleem-bij-de-bron-aan-te-pakken-ZG9jdW1lbnQ6LUx6YXdoalp2cGpvcEVXbVZYaFI=
  15. Emmerik, T., & Schwarz, A. (2019). Plastic debris in rivers. WIREs Water, 7(1). https://doi.org/10.1002/wat2.1398
  16. Morritt, D., Stefanoudis, P. V., Pearce, D., Crimmen, O. A., & Clark, P. F. (2014). Plastic in the Thames: A river runs through it. Marine Pollution Bulletin, 78(1–2), 196–200. https://doi.org/10.1016/j.marpolbul.2013.10.035
  17. Martin, C., Parkes, S., Zhang, Q., Zhang, X., McCabe, M. F., & Duarte, C. M. (2018). Use of unmanned aerial vehicles for efficient beach litter monitoring. Marine Pollution Bulletin, 131, 662–673. https://doi.org/10.1016/j.marpolbul.2018.04.045
  18. Kylili, K., Kyriakides, I., Artusi, A., & Hadjistassou, C. (2019). Identifying floating plastic marine debris using a deep learning approach. Environmental Science and Pollution Research, 26(17), 17091–17099. https://doi.org/10.1007/s11356-019-05148-4
  19. Seif, G. (2018, January 21). Deep Learning for Image Recognition: why it’s challenging, where we’ve been, and what’s next. Retrieved April 22, 2020, from https://towardsdatascience.com/deep-learning-for-image-classification-why-its-challenging-where-we-ve-been-and-what-s-next-93b56948fcef
  20. Lee, G., & Fujita, H. (2020). Deep Learning in Medical Image Analysis. New York, United States: Springer Publishing.
  21. Simonyan, K., & Zisserman, A. (2015, January 1). Very deep convolutional networks for large-scale image recognition. Retrieved April 22, 2020, from https://arxiv.org/pdf/1409.1556.pdf
  22. Culurciello, E. (2018, December 24). Navigating the Unsupervised Learning Landscape - Intuition Machine. Retrieved April 22, 2020, from https://medium.com/intuitionmachine/navigating-the-unsupervised-learning-landscape-951bd5842df9
  23. Bosse, S., Becker, S., Müller, K.-R., Samek, W., & Wiegand, T. (2019). Estimation of distortion sensitivity for visual quality prediction using a convolutional neural network. Digital Signal Processing, 91, 54–65. https://doi.org/10.1016/j.dsp.2018.12.005
  24. Brooks, R. (2018, July 15). [FoR&AI] Steps Toward Super Intelligence III, Hard Things Today – Rodney Brooks. Retrieved April 22, 2020, from http://rodneybrooks.com/forai-steps-toward-super-intelligence-iii-hard-things-today/
  25. Nicholson, C. (n.d.). A Beginner’s Guide to Neural Networks and Deep Learning. Retrieved April 22, 2020, from https://pathmind.com/wiki/neural-network
  26. Cheung, K. C. (2020, April 17). 10 Use Cases of Neural Networks in Business. Retrieved April 22, 2020, from https://algorithmxlab.com/blog/10-use-cases-neural-networks/#What_are_Artificial_Neural_Networks_Used_for
  27. Amidi, Afshine , & Amidi, S. (n.d.). CS 230 - Recurrent Neural Networks Cheatsheet. Retrieved April 22, 2020, from https://stanford.edu/%7Eshervine/teaching/cs-230/cheatsheet-recurrent-neural-networks
  28. Hopfield Network - Javatpoint. (n.d.). Retrieved April 22, 2020, from https://www.javatpoint.com/artificial-neural-network-hopfield-network
  29. Hinton, G. E. (2007). Boltzmann Machines. Retrieved from https://www.cs.toronto.edu/~hinton/csc321/readings/boltz321.pdf
  30. Amidi, A., & Amidi, S. (n.d.). CS 230 - Convolutional Neural Networks Cheatsheet. Retrieved April 22, 2020, from https://stanford.edu/%7Eshervine/teaching/cs-230/cheatsheet-convolutional-neural-networks
  31. Yamashita, Rikiya & Nishio, Mizuho & Do, Richard & Togashi, Kaori. (2018). Convolutional neural networks: an overview and application in radiology. Insights into Imaging. 9. 10.1007/s13244-018-0639-9
  32. Qian, N. (1999, January 12). On the momentum term in gradient descent learning algorithms. - PubMed - NCBI. Retrieved April 22, 2020, from https://www.ncbi.nlm.nih.gov/pubmed/12662723
  33. Hinton, G., Srivastava, N., Swersky, K., Tieleman, T., & Mohamed , A. (2016, December 15). Neural Networks for Machine Learning: Overview of ways to improve generalization [Slides]. Retrieved from http://www.cs.toronto.edu/~hinton/coursera/lecture9/lec9.pdf
  34. Kingma, D. P., & Ba, J. (2015). Adam: A Method for Stochastic Optimization. Presented at the 3rd International Conference for Learning Representations, San Diego.
  35. Nesterov, Y. (1983). A method for unconstrained convex minimization problem with the rate of convergence o(1/k^2).
  36. Dozat, T. (2016). Incorporating Nesterov Momentum into Adam. Retrieved from https://openreview.net/pdf?id=OM0jvwB8jIp57ZJjtNEZ
  37. Canu, S. (2019, June 27). YOLO object detection using Opencv with Python. Retrieved May 26, 2020, from https://pysource.com/2019/06/27/yolo-object-detection-using-opencv-with-python/
  38. Brownlee, J. (2019, October 7). How to Perform Object Detection With YOLOv3 in Keras. Retrieved May 29, 2020, from https://machinelearningmastery.com/how-to-perform-object-detection-with-yolov3-in-keras/
  39. Redmon, J. (2019, November 15). pjreddie/darknet. Retrieved May 29, 2020, from https://github.com/pjreddie/darknet/wiki/YOLO:-Real-Time-Object-Detection
  40. Kathuria, A. (2018, April 23). What’s new in YOLO v3? Retrieved May 29, 2020, from https://towardsdatascience.com/yolo-v3-object-detection-53fb7d3bfe6b
  41. Bhattarai, S. (2019, December 25). What is YOLO v2 (aka YOLO 9000)? Retrieved June 1, 2020, from https://saugatbhattarai.com.np/what-is-yolo-v2-aka-yolo-9000/
  42. Lebreton. (2018, January 1). OSPAR Background document on pre-production Plastic Pellets. Retrieved May 3, 2020, from https://www.ospar.org/documents?d=39764
  43. Schone Rivieren. (2019). Wat spoelt er aan op rivieroevers? Resultaten van twee jaar afvalmonitoring aan de oevers van de Maas en de Waal. Retrieved from https://www.schonerivieren.org/wp-content/uploads/2020/05/Schone_Rivieren_rapportage_2019.pdf
  44. Staatsbosbeheer. (2019, September 12). Dossier afval in de natuur. Retrieved May 3, 2020, from https://www.staatsbosbeheer.nl/over-staatsbosbeheer/dossiers/afval-in-de-natuur
  45. Buffon X. (2019, May 20) Robotic Detection of Marine Litter Using Deep Visual Detection Models. Retrieved May 9, 2020, from https://ieeexplore.ieee.org/abstract/document/8793975
  46. Thung, G. (2017, April 10). Dataset of images of trash; Torch-based CNN for garbage image classification. Retrieved May 9, 2020, from https://github.com/garythung/trashnet
  47. Canu, S. (2020, April 1). Train YOLO to detect a custom object (online with free GPU).
  48. Data augmentation: how to use deep learning when you have limited data (part 2). Retrieved from https://nanonets.com/blog/data-augmentation-how-to-use-deep-learning-when-you-have-limited-data-part-2/
  49. Goyal, S. (2019, December 17). MachineX: Image Data Augmentation Using Keras. Retrieved June 19, 2020, from https://towardsdatascience.com/machinex-image-data-augmentation-using-keras-b459ef87cd22
  50. Coleman, D. (2020, April 8). Can You Run a GoPro HERO8, HERO7, or HERO6 with External Power but Without an Internal Battery? Retrieved May 22, 2020, from https://havecamerawilltravel.com/gopro/external-power-internal-battery/
  51. Air Photography. (2018, April 29). Weatherproof External Power for GoPro Hero 5/6/7 | X~PWR-H5. Retrieved May 22, 2020, from https://www.youtube.com/watch?v=S6Y7a3ZtoeE
  52. GoPro. (n.d.). HERO8 Black Tech Specs. Retrieved May 22, 2020, from https://gopro.com/en/nl/shop/hero8-black/tech-specs?pid=CHDHX-801-master
  53. GoPro. (n.d.-a). HERO7 Black Action Camera | GoPro. Retrieved May 22, 2020, from https://gopro.com/en/nl/shop/cameras/hero7-black/CHDHX-701-master.html
  54. Harbortronics. (n.d.). Cyclapse Pro - Starter | Cyclapse. Retrieved May 22, 2020, from https://cyclapse.com/products/cyclapse-pro-starter/
  55. Canon USA. (n.d.). Canon U.S.A., Inc. | EOS Rebel T7 EF-S 18-55mm IS II Kit. Retrieved May 22, 2020, from https://www.usa.canon.com/internet/portal/us/home/products/details/cameras/eos-dslr-and-mirrorless-cameras/dslr/eos-rebel-t7-ef-s-18-55mm-is-ii-kit
  56. GoPro. (2020, May 22). Auto Uploading Your Footage to the Cloud With GoPro Plus. Retrieved May 23, 2020, from https://community.gopro.com/t5/en/Auto-Uploading-Your-Footage-to-the-Cloud-With-GoPro-Plus/ta-p/388304#
  57. GoPro. (2020a, May 14). How to Add Media to GoPro PLUS. Retrieved May 23, 2020, from https://community.gopro.com/t5/en/How-to-Add-Media-to-GoPro-PLUS/ta-p/401627
  58. Harbortronics. (n.d.). Support / DigiSnap Pro / Frequently Asked Questions | Cyclapse. Retrieved May 23, 2020, from https://cyclapse.com/support/digisnap-pro/frequently-asked-questions-faq/
  59. CamDo. (n.d.). SolarX Solar Upgrade Kit. Retrieved May 22, 2020, from https://cam-do.com/products/solarx-gopro-solar-system
  60. Harbortronics. (n.d.). Cyclapse Pro - Standard | Cyclapse. Retrieved May 22, 2020, from https://cyclapse.com/products/cyclapse-pro-standard/
  61. CamDo. (n.d.). Photography Time Lapse Calculator. Retrieved from https://cam-do.com/pages/photography-time-lapse-calculator
  62. Harbortronics. (n.d.-a). Cyclapse Pro - Glacier | Cyclapse. Retrieved May 22, 2020, from https://cyclapse.com/products/cyclapse-pro-glacier/
  63. Time lapse settings. Retrieved June 05, 2020, from https://www.youtube.com/watch?v=-9htjymU5d8
  64. Solar Electricity Handbook. Retrieved June 05, 2020, from http://www.solarelectricityhandbook.com/solar-irradiance.html
  65. Anker Astro E7 26800 mAh external battery. Retrieved June 05, 2020, from https://chargewithpower.com/product/buy-anker-astro-e7-26800mah-ultra-high-capacity-3-port-4a-compact-portable-charger-external-battery-power-bank-with-poweriq-technology-for-iphone-ipad-nintendo-switch-and-more-online/
  66. SD cards compatible with GoPro. Retrieved June 05, 2020, from https://community.gopro.com/t5/en/SD-Cards-that-Work-with-GoPro-Cameras/ta-p/394308#HERO7
  67. SanDisk Extreme microSDXC. Retrieved June 05, 2020, from https://www.amazon.nl/dp/B07HB8SLMV
  68. IP rating at Wikipedia. Retrieved June 12, 2020, from https://en.wikipedia.org/wiki/IP_Code#Second_digit:_Liquid_ingress_protection
  69. Hikeren IP65 Waterproof Solar Lights at Amazon. Retrieved June 12, 2020, from https://www.amazon.com/Hikeren-Waterproof-Spotlight-Install-Security/dp/B01DNMRUIQ
  70. Zonsopkomst en Zonsondergang. Retrieved June 12, 2020, from http://www.zonsondergangtijden.nl/zonsondergang-2020.html
  71. Coleman, D. (2020, March 20). GoPro Linear FOV: Pros, Cons, and Examples. Retrieved June 3, 2020, from https://havecamerawilltravel.com/gopro/gopro-fov-linear/
  72. Michaels, P. (2018, September 20). GoPro Hero7: The Smoothest-Looking Action Cam Yet. Retrieved May 31, 2020, from https://www.tomsguide.com/us/go-pro-hero-7,review-5755.html
  73. Noria i.o.v. Rijkswaterstaat. (2020, April 1). Pilot vangsysteem voor plastic afval bij stuw Borgharen. Retrieved June 3, 2020, from https://zwerfafval.rijkswaterstaat.nl/@235156/pilot-vangsysteem-plastic-afval-stuw-borgharen/
  74. Velimir. (2020, May 8). Free CAD Designs, Files & 3D Models | The GrabCAD Community Library. Retrieved May 31, 2020, from https://grabcad.com/library/gopro-hero-4
  75. Wikipedia contributors. (2020a, March 5). IP-code. Retrieved June 10, 2020, from https://nl.wikipedia.org/wiki/IP-code
  76. Carpenter, D. (2016, November 17). DIKAR: Aligning Technology And Organisational Strategies. Retrieved May 30, 2020, from http://blog.myceo.com.au/dikar-aligning-technology-and-organisational-strategies