PRE2019 4 Group3
Group members
Student name | Student ID | Study | Email |
---|---|---|---|
Kevin Cox | 1361163 | Mechanical Engineering | k.j.p.cox@student.tue.nl |
Menno Cromwijk | 1248073 | Biomedical Engineering | m.w.j.cromwijk@student.tue.nl |
Dennis Heesmans | 1359592 | Mechanical Engineering | d.a.heesmans@student.tue.nl |
Marijn Minkenberg | 1357751 | Mechanical Engineering | m.minkenberg@student.tue.nl |
Lotte Rassaerts | 1330004 | Mechanical Engineering | l.rassaerts@student.tue.nl |
SPlaSh: The Plastic Shark
Problem statement
Every day, more and more garbage is dumped into the oceans. The amount of garbage floating in the ocean has become so large that it is impossible to estimate. This leads to huge garbage patches, which disrupt marine life. A large part of this garbage is plastic, in which fish and other sea creatures can get stuck or which they can eat. This problem is created by humans and therefore humans must provide a solution for it.
In this project we would like to contribute to this solution. At this moment, cleaning devices are already in use to clean up the ocean. However, these cleaning devices are harmful to marine life, since they are not sophisticated enough to avoid fish or other species. Therefore, this project aims to contribute to a solution by providing a software tool that is able to distinguish garbage from marine life. This information could help a cleaning device navigate through the ocean and clean up the garbage without harming marine life.
Objectives
• Do research into the state of the art of current recognition software, ocean cleanup devices and neural networks.
• Create a software tool that distinguishes garbage from marine life.
• Test this software tool and form a conclusion on the effectiveness of the tool.
Who are the users
Businesses/organizations
Until now, not many businesses have taken on the challenge of cleaning up the oceans. One reason for this is that no money can be made from it, which means that non-profit organizations are the main contributors. These organizations have to collect money to fund their projects, so they can use all the help that universities or other institutions can provide. Another reason why businesses do not take on the challenge is that once the goal is reached, the job is gone. Non-profit organizations therefore really have to rely on people's good intentions, because investments will most likely not turn out to be profitable.
Governments
Another group of users are governments. Since the garbage patches are in the open ocean, no government is held accountable for their cleanup, and no government wants to take on the responsibility. Providing technology that makes it easier to clean up the oceans could therefore eventually lead to governments taking this responsibility.
Society
Society is one of the main contributors to polluting the oceans, but it is also a big source of funding. Harming marine life does not contribute to the image of ocean-cleaning devices and therefore hurts the funding. Developing technology that improves this image could thus lead to more funding from society.
Marine Life
Marine life is the group that is harmed the most by ocean pollution. It would benefit the most from technology that makes it possible to distinguish garbage from marine life: firstly, because it would speed up the large-scale implementation of garbage-cleaning equipment, and secondly, because it would make this equipment less harmful to marine life.
Requirements
The following points are the requirements. These are conditions or tasks that must be completed to ensure the completion of the project.
• A program must be written that is able to distinguish garbage from marine life at least 95% of the time, based on a dataset of … examples.
• At least 25 sources must be used in the literature research.
• The literature research must provide information on at least the following subjects:
o Neural networks
o Ocean cleanup and current solutions
o Image recognition
Approach, milestones and deliverables (Menno)
For the planning, a Gantt chart was created containing the most important milestones. The overall structure of the planning is that in the first two weeks a lot of research has to be done, among other things on the problem statement, the users and the current technology; this is intended to be finished in the first week. In the second week, the different types of neural networks and the workings of the different layers should be investigated to gain more knowledge. This may also mean installing multiple packages or programs on our laptops, which takes time to test. During this second week, a dataset should also be created or found that can be used to train our model. If no suitable dataset can be found online and one has to be created, this will take much more than one week, but the aim is to have it finished after the third week. After this, the group is split into people who work on the design and applications of the robot and people who work on the creation of the neural network. After week 5, an idea of the robotics should be elaborated with the help of drawings or digital visualizations. All candidate neural networks should also be elaborated and tried, so that in week 6 conclusions can be drawn about the best-performing neural network. This means that in week 7 the wiki page can be completed with a conclusion and discussion about the neural network that should be used and the working of the device. Week 8, finally, is used to prepare the presentation.
Currently, no real subdivision has been made between who works on the robotics hardware and who works on the software. This should be done in the following weeks, and then the Gantt chart (shown below) can be filled in per person.
"ik weet nog niet hoe ik hier een plaatje krijg van de gannt chart"
State-of-the-Art
Neural Networks
Neural networks are a set of algorithms that are designed to recognize patterns. They interpret sensory data through machine perception, labeling or clustering raw input. The patterns they recognize are numerical, contained in vectors. Real-world data, such as images, sound, text or time series, needs to be translated into such numerical data before it can be processed [1].
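As a minimal illustration of this translation step, the sketch below loads an image with Pillow and turns it into a NumPy array of numbers that a network could process; the file name is a hypothetical placeholder.

```python
import numpy as np
from PIL import Image

# Load an image and force three color channels (the file name is a placeholder).
image = Image.open("example_photo.jpg").convert("RGB")
image = image.resize((128, 128))  # fixed input size for a network

# The image is now a grid of numbers: shape (128, 128, 3), values scaled to [0, 1].
pixels = np.asarray(image, dtype=np.float32) / 255.0
print(pixels.shape)              # (128, 128, 3)
print(pixels.reshape(-1).shape)  # (49152,) -- one long numerical vector
```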
There are different types of neural networks [2]:
- Recurrent neural network: Recurrent neural networks, also known as RNNs, are a class of neural networks that allow previous outputs to be used as inputs while having hidden states. These networks are mostly used in the fields of natural language processing and speech recognition [3].
- Convolutional neural networks: Convolutional neural networks, also known as CNNs, are mostly used for image classification.
- Hopfield networks: Hopfield networks are used to collect and retrieve memory like the human brain. The network can store various patterns or memories and is able to recognize any of the learned patterns by uncovering data about that pattern [4]; a minimal code sketch of this idea follows this list.
- Boltzmann machine networks: Boltzmann machines are used for search and learning problems [5].
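As a rough illustration of the stored-pattern idea behind Hopfield networks, the sketch below stores two bipolar (+1/-1) toy patterns with a Hebbian learning rule and retrieves one of them from a corrupted probe. The patterns, sizes and update scheme are illustrative assumptions, not taken from the sources above.

```python
import numpy as np

def train_hopfield(patterns):
    """Store bipolar patterns in a weight matrix using the Hebbian rule."""
    n = patterns.shape[1]
    weights = np.zeros((n, n))
    for p in patterns:
        weights += np.outer(p, p)
    np.fill_diagonal(weights, 0)  # no self-connections
    return weights / patterns.shape[0]

def recall(weights, probe, steps=10):
    """Iteratively update the state until it settles on a stored pattern."""
    state = probe.copy()
    for _ in range(steps):
        state = np.where(weights @ state >= 0, 1, -1)  # synchronous update
    return state

# Two toy "memories" of 8 units each.
patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                     [1, 1, 1, 1, -1, -1, -1, -1]])
W = train_hopfield(patterns)

noisy = np.array([1, -1, 1, -1, 1, -1, -1, -1])  # first pattern with one flipped unit
print(recall(W, noisy))  # recovers the first stored pattern
```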
Convolutional Neural Networks
In this project, the neural network needs to extract information from images. Therefore, a convolutional neural network will be used. Convolutional neural networks are a specific type of neural network that are generally composed of the following layers [6]:
- Convolutional layer: transforms the input data to detect patterns, edges and other characteristics so that the data can be classified correctly. The main parameters of a convolutional layer are its activation function and its kernel size.
- Max pooling layer: reduces the number of pixels in the output of the previously applied convolutional layer(s) and is applied to reduce overfitting. Because the output feature maps are sensitive to the location of features in the input, max pooling also makes the downsampled feature maps more robust to changes in the position of a feature in the image. The pool size determines how many pixels of the input data are turned into one pixel of the output data.
- Fully connected layer: connects all input values via separate connections to an output channel. Since this project deals with a binary problem, the final fully connected layer will consist of one output.

To train such a network, an optimizer is needed. Stochastic gradient descent (SGD) is the most common and basic optimizer used for training a CNN [7]. It updates the model parameters based on the gradient of the loss function. However, many other optimizers have been developed that could give better results. Momentum keeps a history of previous update steps and combines this information with the next gradient step to reduce the effect of outliers [8]. RMSprop also tries to keep the updates stable, but in a different way than momentum, and it removes the need to tune the learning rate manually [9]. Adam combines the ideas behind momentum and RMSprop into one optimizer [10]. Nesterov momentum is a refined version of the momentum optimizer that looks one step ahead and adjusts the momentum accordingly [11]. Nadam combines RMSprop and Nesterov momentum [12].
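To make the above concrete, a minimal Keras sketch of such a network is shown below. The 128x128 RGB input size, the filter counts and the choice of the Adam optimizer are illustrative assumptions rather than the project's final design.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    # Convolutional layers: detect edges and other local patterns.
    layers.Conv2D(32, kernel_size=(3, 3), activation="relu",
                  input_shape=(128, 128, 3)),
    # Max pooling: a 2x2 pool turns every 2x2 block of pixels into 1 pixel,
    # shrinking the feature maps and making them less location-sensitive.
    layers.MaxPooling2D(pool_size=(2, 2)),
    layers.Conv2D(64, kernel_size=(3, 3), activation="relu"),
    layers.MaxPooling2D(pool_size=(2, 2)),
    # Fully connected layers; the final layer has one output because the
    # garbage vs. marine life problem is binary.
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])

# Any of the optimizers mentioned above can be plugged in here, e.g.
# SGD with (Nesterov) momentum, RMSprop, Adam or Nadam.
model.compile(optimizer=tf.keras.optimizers.Adam(),
              loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()
```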
Ocean cleaning technology (Marijn)
Current situation
Over 5 trillion pieces of plastic are currently floating around in the oceans [13]. In part, this so-called plastic soup consists of large plastics, like bags, straws, and cups. But it also contains a vast concentration of microplastics: pieces of plastic smaller than 5 mm in size [14]. There are five garbage patches across the globe [13]. In the garbage patch in the Mediterranean Sea, the most prevalent microplastics were found to be polyethylene and polypropylene [15].
A study in the North Sea showed that 5.4% of the fish had ingested plastic [16]. The plastic consumed by the fish accumulates: new plastic goes into the fish but does not come out. The buildup of plastic particles results in stress in their livers [17]. Besides that, fish can get stuck in the larger pieces of plastic.
Solutions
To clean up the plastic soup, a couple of ideas have already been proposed. An example that is already operational is the WasteShark [18]. Other ideas to clean up the plastic soup are usually still concepts [19] that involve futuristic technology or a lot of effort from local fisheries.
The Ocean Cleanup [13] is a Dutch foundation, founded in 2013, that aims to develop advanced technologies to get the plastic out of the ocean. They state that all of their solutions should be autonomous, energy neutral, and scalable. Systems can be made autonomous with the help of algorithms, energy neutral by using solar-powered electronics, and scalable by gradually increasing the number of systems in the oceans.
Image Recognition
Over the past decade or so, great steps have been made in developing deep learning methods for image recognition and classification [20]. In recent years, convolutional neural networks (CNNs) have shown significant improvements on image classification [21]. It has been demonstrated that representation depth is beneficial for classification accuracy [22]. Another method is the use of VGG networks, which are known for their state-of-the-art performance in image feature extraction. Their setup consists of repeated patterns of 1, 2 or 3 convolution layers and a max-pooling layer, finishing with one or more dense layers. The convolutional layer transforms the input data to detect patterns, edges and other characteristics in order to classify the data correctly; its main parameters are the activation function and the kernel size [22].
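As a rough sketch, the repeated VGG-style pattern described above could look as follows in Keras; the filter counts, the 128x128 input size and the binary output layer are illustrative assumptions, not the configuration from the cited paper.

```python
from tensorflow.keras import layers, models

def vgg_block(filters):
    """One VGG-style block: two 3x3 convolutions followed by 2x2 max pooling."""
    return [
        layers.Conv2D(filters, (3, 3), padding="same", activation="relu"),
        layers.Conv2D(filters, (3, 3), padding="same", activation="relu"),
        layers.MaxPooling2D((2, 2)),
    ]

model = models.Sequential(
    [layers.Input(shape=(128, 128, 3))]
    + vgg_block(32)
    + vgg_block(64)
    + vgg_block(128)
    + [layers.Flatten(),
       layers.Dense(256, activation="relu"),
       layers.Dense(1, activation="sigmoid")]  # binary output, as in this project
)
model.summary()
```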
There are still limitations to current image recognition technologies. First of all, most methods are supervised, which means they need large amounts of labelled training data that have to be put together by someone [20]. This can be solved by using unsupervised instead of supervised deep learning: instead of large labelled databases, only a few labels would be needed to make sense of the world. Currently, there are no unsupervised methods that outperform supervised ones, because supervised learning can better encode the characteristics of a set of data. The hope is that in the future unsupervised learning will provide more general features, so that any task can be performed [23]. Another problem is that small distortions can sometimes cause an image to be classified wrongly [20] [24]; this can already be caused by shadows on an object, which introduce color and shape differences [25]. A different pitfall is that the output feature maps are sensitive to the specific location of features in the input. One approach to address this sensitivity is to use a max pooling layer. Max pooling layers reduce the number of pixels in the output of the previously applied convolutional layer(s) and are used to reduce overfitting. The pool size determines how many pixels of the input data are turned into one pixel of the output data. This makes the resulting downsampled feature maps more robust to changes in the position of a feature in the image [22].
Specific research has been carried out into image recognition and classification of fish in the water. For example, one study used state-of-the-art object detection to detect, localize and classify fish species using visual data obtained by underwater cameras. The initial goal was to recognize herring and mackerel, and the work was specifically developed for poorly conditioned waters. Experiments on a dataset obtained at sea showed a successful detection rate of 66.7% and a successful classification rate of 89.7% [26]. There are also studies that researched image recognition and classification of microplastics. Using computer vision to analyze the acquired images and machine learning techniques to develop classifiers for four types of microplastics, an accuracy of 96.6% was achieved [27].
Logbook
Week 1
Name | hrs | Break-down |
---|---|---|
Kevin Cox | 4 | Meeting (1h), Problem statement and objectives (1.5h), Who are the users (1h), Requirements (0.5h). |
Menno Cromwijk | hrs | Meeting (1h), |
Dennis Heesmans | 7 | Meeting (1h), Thinking about project-ideas (3h), State-of-the-art: neural networks (3h) |
Marijn Minkenberg | 1 | Meeting (1h), Setting up wiki page (1h), State-of-the-art: ocean-cleaning technology (3h) |
Lotte Rassaerts | 7 | Meeting (1h), Thinking about project-ideas (2h), State of the art: image recognition (4h) |
Template
Name | Total hours | Break-down |
---|---|---|
Kevin Cox | hrs | description (Xh) |
Menno Cromwijk | hrs | description (Xh) |
Dennis Heesmans | hrs | description (Xh) |
Marijn Minkenberg | hrs | description (Xh) |
Lotte Rassaerts | hrs | description (Xh) |
References
- ↑ Nicholson, C. (n.d.). A Beginner’s Guide to Neural Networks and Deep Learning. Retrieved April 22, 2020, from https://pathmind.com/wiki/neural-network
- ↑ Cheung, K. C. (2020, April 17). 10 Use Cases of Neural Networks in Business. Retrieved April 22, 2020, from https://algorithmxlab.com/blog/10-use-cases-neural-networks/#What_are_Artificial_Neural_Networks_Used_for
- ↑ Amidi, A., & Amidi, S. (n.d.). CS 230 - Recurrent Neural Networks Cheatsheet. Retrieved April 22, 2020, from https://stanford.edu/%7Eshervine/teaching/cs-230/cheatsheet-recurrent-neural-networks
- ↑ Hopfield Network - Javatpoint. (n.d.). Retrieved April 22, 2020, from https://www.javatpoint.com/artificial-neural-network-hopfield-network
- ↑ Hinton, G. E. (2007). Boltzmann Machines. Retrieved from https://www.cs.toronto.edu/~hinton/csc321/readings/boltz321.pdf
- ↑ Amidi, A., & Amidi, S. (n.d.). CS 230 - Convolutional Neural Networks Cheatsheet. Retrieved April 22, 2020, from https://stanford.edu/%7Eshervine/teaching/cs-230/cheatsheet-convolutional-neural-networks
- ↑ Yamashita, R., Nishio, M., Do, R., & Togashi, K. (2018). Convolutional neural networks: an overview and application in radiology. Insights into Imaging, 9. https://doi.org/10.1007/s13244-018-0639-9
- ↑ Qian, N. (1999, January 12). On the momentum term in gradient descent learning algorithms. Retrieved April 22, 2020, from https://www.ncbi.nlm.nih.gov/pubmed/12662723
- ↑ Hinton, G., Srivastava, N., Swersky, K., Tieleman, T., & Mohamed , A. (2016, December 15). Neural Networks for Machine Learning: Overview of ways to improve generalization [Slides]. Retrieved from http://www.cs.toronto.edu/~hinton/coursera/lecture9/lec9.pdf
- ↑ Kingma, D. P., & Ba, J. (2015). Adam: A Method for Stochastic Optimization. Presented at the 3rd International Conference for Learning Representations, San Diego.
- ↑ Nesterov, Y. (1983). A method for unconstrained convex minimization problem with the rate of convergence O(1/k^2).
- ↑ Dozat, T. (2016). Incorporating Nesterov Momentum into Adam. Retrieved from https://openreview.net/pdf?id=OM0jvwB8jIp57ZJjtNEZ
- ↑ Oceans. (2020, March 18). Retrieved April 23, 2020, from https://theoceancleanup.com/oceans/
- ↑ Wikipedia contributors. (2020, April 13). Microplastics. Retrieved April 23, 2020, from https://en.wikipedia.org/wiki/Microplastics
- ↑ Suaria, G., Avio, C. G., Mineo, A., Lattin, G. L., Magaldi, M. G., Belmonte, G., … Aliani, S. (2016). The Mediterranean Plastic Soup: synthetic polymers in Mediterranean surface waters. Scientific Reports, 6(1). https://doi.org/10.1038/srep37551
- ↑ Foekema, E. M., De Gruijter, C., Mergia, M. T., van Franeker, J. A., Murk, A. J., & Koelmans, A. A. (2013). Plastic in North Sea Fish. Environmental Science & Technology, 47(15), 8818–8824. https://doi.org/10.1021/es400931b
- ↑ Rochman, C. M., Hoh, E., Kurobe, T., & Teh, S. J. (2013). Ingested plastic transfers hazardous chemicals to fish and induces hepatic stress. Scientific Reports, 3(1). https://doi.org/10.1038/srep03263
- ↑ Nobleo Technology. (n.d.). Fully Autonomous WasteShark. Retrieved April 23, 2020, from https://nobleo-technology.nl/project/fully-autonomous-wasteshark/
- ↑ Onze missie [Our mission]. (2020, April 6). Retrieved April 23, 2020, from https://www.plasticsoupfoundation.org
- ↑ Seif, G. (2018, January 21). Deep Learning for Image Recognition: why it’s challenging, where we’ve been, and what’s next. Retrieved April 22, 2020, from https://towardsdatascience.com/deep-learning-for-image-classification-why-its-challenging-where-we-ve-been-and-what-s-next-93b56948fcef
- ↑ Lee, G., & Fujita, H. (2020). Deep Learning in Medical Image Analysis. New York, United States: Springer Publishing.
- ↑ Simonyan, K., & Zisserman, A. (2015, January 1). Very deep convolutional networks for large-scale image recognition. Retrieved April 22, 2020, from https://arxiv.org/pdf/1409.1556.pdf
- ↑ Culurciello, E. (2018, December 24). Navigating the Unsupervised Learning Landscape - Intuition Machine. Retrieved April 22, 2020, from https://medium.com/intuitionmachine/navigating-the-unsupervised-learning-landscape-951bd5842df9
- ↑ Bosse, S., Becker, S., Müller, K.-R., Samek, W., & Wiegand, T. (2019). Estimation of distortion sensitivity for visual quality prediction using a convolutional neural network. Digital Signal Processing, 91, 54–65. https://doi.org/10.1016/j.dsp.2018.12.005
- ↑ Brooks, R. (2018, July 15). [FoR&AI] Steps Toward Super Intelligence III, Hard Things Today – Rodney Brooks. Retrieved April 22, 2020, from http://rodneybrooks.com/forai-steps-toward-super-intelligence-iii-hard-things-today/
- ↑ Christensen, J. H., Mogensen, L. V., Galeazzi, R., & Andersen, J. C. (2018). Detection, Localization and Classification of Fish and Fish Species in Poor Conditions using Convolutional Neural Networks. 2018 IEEE/OES Autonomous Underwater Vehicle Workshop (AUV). https://doi.org/10.1109/auv.2018.8729798
- ↑ Castrillon-Santana, M., Lorenzo-Navarro, J., Gomez, M., Herrera, A., & Marín-Reyes, P. A. (2018, January 1). Automatic Counting and Classification of Microplastic Particles. Retrieved April 23, 2020, from https://www.scitepress.org/Papers/2018/67250/67250.pdf