PRE2019 4 Group2

Leighton van Gellecom, Hilde van Esch, Timon Heuwekemeijer, Karla Gloudemans, Tom van Leeuwen


Problem statement


Current farming methods such as monocropping are outdated and have negative effects on soil quality, greenhouse gas emissions, the spread of invasive species and the prevalence of crop diseases and pests (Plourde et al., 2013). Herbicides are often used to control pests and weeds, because they can improve crop yield. Moreover, the use of herbicides can be much cheaper than hiring manual weeding labor, by as much as 50% (Haggblade et al., 2017). This is problematic, because the increasing use of agricultural chemicals poses environmental and human health risks (Pingali, 2001). Some seek the answer to such problems in the rise of precision farming, which promises to reduce waste and thereby cut private and environmental costs (Finger et al., 2019). Others look further into the future and consider agroforestry. In the book Agroforestry Implications (Koh, 2010) the following definition is used: “agroforestry is loosely defined as production systems or practices that integrate trees with agricultural crops or livestock”. The author argues that agroforestry strikes a compromise between expanding production on the one hand and forest protection, biodiversity and poverty alleviation on the other.

Agroforestry is labor intensive, so the need arises for automation to take over some tasks. One such task is weed identification and removal. The definition of a weed differs between people. Ferreira et al. (2017) define weeds as “undesirable plants that grow in agricultural crops, competing for elements such as sunlight and water, causing losses to crop yields”, while Tang et al. (2016) define them by their features: a fast growth rate, greater growth increment, and competition for resources such as water, fertilizer and space. The main conclusion that can be drawn from these definitions is that weeds harm agricultural crops and thus need to be removed.

Such a weeding robot, or even a general-purpose machine, would need many different modules. Each module should operate independently to complete its task, but should also communicate with the other modules. This research restricts itself to weed detection in an agroforestry setting where plants grow between different (fruit) trees. The aim is to identify weeds by means of computer vision.

Users


User profile

Farmers who adopt a sustainable farming method differ significantly from conventional farmers in personal characteristics. Sustainable farmers tend to have a higher level of education, be younger, have more off-farm income, and adopt more new farming practices (Comer et al., 1999). Sustainable farming also has other goals than conventional farming, as it focuses on aspects like biodiversity and soil quality in addition to the usual high productivity and high profit. The individual differences suggest that sustainable farmers are more likely not to have been farmers originally. Having more off-farm income also indicates limited time available to devote to the farm. The willingness to adopt new farming practices could benefit our new software, as it makes the software more likely to be accepted and tried out.

There is a growing trend of sustainable farming, supported by the EU, which has set goals for sustainable farming and promotes these guidelines (Ministerie van Landbouw, Natuur en Voedselkwaliteit, 2019). This trend expresses itself in the transition from conventional to sustainable methods within farms and in new initiatives, such as Herenboeren.

Agroforestry makes the removal of weeds more difficult, due to the mixed crops. Weeding is also a physically heavy and dreadful job. These factors cause a growing need for weeding systems among farmers who have made the transition to agroforestry. This is also confirmed by Marius Moonen, co-founder of CSSF and initiator in the field of agroforestry.

Spraying pesticides preventively reduces food quality and poses the problem of environmental pollution (Tang et al., 2016). The users of the weed detection software would therefore not only be the sustainable farmers, but indirectly also the consumers of farming products, as it influences their food and environment.

This research is carried out in cooperation with CSSF. In line with their advice, we will focus on the type of agroforestry farming where both crops and trees grow in strips on the land. To test the functionality of our design, we will work in cooperation with farmer Jon van Heesakkers, who has recently shifted from livestock farming towards this form of agroforestry. His case will therefore serve as the role model for designing the system.

System requirements

Since the approach and views of sustainable farmers may differ, one requirement of the system is that it is flexible in what it considers a weed and what a useful plant (Perrins et al., 1992). It should thus be able to distinguish multiple plant species instead of merely classifying weed/non-weed. Based on user feedback, the following plant types should be recognised as weeds: Atriplex (https://en.wikipedia.org/wiki/Atriplex), Shepherd's purse (https://en.wikipedia.org/wiki/Capsella_bursa-pastoris), Redshank (https://en.wikipedia.org/wiki/Persicaria_maculosa), Chickweed (https://en.wikipedia.org/wiki/Stellaria_media), Red Dead-Nettle (https://en.wikipedia.org/wiki/Lamium_purpureum), Goosefoot (https://en.wikipedia.org/wiki/Chenopodium_album), Creeping Thistle (https://en.wikipedia.org/wiki/Cirsium_arvense) and Bitter Dock (https://en.wikipedia.org/wiki/Rumex_obtusifolius). Furthermore, given the set-up of agroforestry, the system should be able to deal with different kinds of plants in a small region, so it should be able to recognise multiple plants in one image. Next, the accuracy of the system should be as close as possible to 100%; realistically, an accuracy of at least 95% should be achieved. Lastly, given constraints on both training/testing and possible deployment, the neural network should be as efficient and compact as possible, so that it can classify plant images in at most 1 second per image.
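To make the flexibility requirement concrete, below is a minimal sketch (in Python; all names and species strings are illustrative, not part of the actual system) of how a per-farmer weed list could be layered on top of a species-level classifier:

```python
# Hypothetical sketch: the classifier predicts a species; whether that species
# counts as a weed is a per-farmer setting, not something baked into the network.

DEFAULT_WEEDS = {
    "Atriplex", "Capsella bursa-pastoris", "Persicaria maculosa",
    "Stellaria media", "Lamium purpureum", "Chenopodium album",
    "Cirsium arvense", "Rumex obtusifolius",
}

def is_weed(species: str, weed_list: set = DEFAULT_WEEDS) -> bool:
    """A plant is a weed only relative to the farmer's configured weed list."""
    return species in weed_list
```

This keeps the network itself a pure species recogniser, so changing a farmer's view of what counts as a weed requires no retraining.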

Approach and Milestones


The main challenge is the ability to distinguish undesired plants (weeds) from desired plants (crops). Previous attempts (Su et al., 2020; Raja et al., 2020) have utilised chemicals to mark plants as a measurable classification method, and other attempts only try to distinguish a single type of crop. In sustainable farming based on biodiversity, a large variety of crops is grown at the same time, so it is extremely important for automatic weed detection software to be able to recognise many different crops as well. To achieve this, the first main objective is collecting data, which determines which plants can be recognised. The data should be colour images of many species of plants, of the highest possible quality: high resolution, in focus and with good lighting. Species that do not have enough images will be removed. Next, using the gathered data, the second main objective will be training and testing convolutional neural networks (CNNs) with varying architectures, ranging from very simple networks with one hidden layer to pre-existing networks such as ResNet (He et al., 2015) trained on datasets such as ImageNet (Russakovsky et al., 2015). A weed will then be defined as a species of plant that is either not desired or not recognised. Based on this, the final objective will be testing the best neural network(s) on new images from a farm, to measure its accuracy in a real environment.

To summarize:

  1. Images of plants will be collected for training.
  2. CNNs will be trained to recognise plants and weeds.
  3. The best CNN(s) will be tested in real situations.
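As an illustration of step 2, a minimal transfer-learning sketch (assuming PyTorch/torchvision; the dataset path and hyperparameters are placeholders, not the group's actual setup):

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Hypothetical dataset layout: one folder per plant species.
tf = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
train_set = datasets.ImageFolder("data/plants/train", transform=tf)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

# Pre-trained ResNet (He et al., 2015), re-headed for our species classes.
model = models.resnet18(pretrained=True)
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

In step 3, any image whose predicted species is not on the desired-plant list (or whose prediction confidence is low) would be treated as a weed.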


Deliverables


The main deliverable will be a convolutional neural network that is trained to distinguish desired from undesired plants on a diverse farm, that is as accurate as possible, and that can recognise as many different species as possible. The performance of this CNN, as well as the explored architectures and the problems encountered, will be described in this wiki, which forms the second part of the deliverables.


Planning


End of week 1:

Milestone | Responsible | Done
Form a group | Everyone | Yes
Choose a subject | Everyone | Yes
Make a plan | Everyone | Yes

End of week 2:

Milestone | Responsible | Done
Improve user section | Hilde |
Specify requirements | Tom |
Make an informed choice for the network structure | Leighton |
Read up on (Python) neural networks | Everyone |

End of week 3:

Milestone | Responsible | Done
Set up a collaborative development environment | Timon |
Have a training data set | Karla |

End of week 4:

Milestone | Responsible | Done
Implement basic neural network structure | |
Justify all design choices on the wiki | |

End of week 5:

Milestone | Responsible | Done
Implement a working neural network | |

End of week 6:

Milestone | Responsible | Done
Explain our process of tweaking the hyperparameters | |

End of week 7:

Milestone | Responsible | Done
Finish tweaking the hyperparameters and collect results | |

End of week 8:

Milestone | Responsible | Done
Create the final presentation | Everyone |
Hand in peer review | Everyone |

Week 9:

Milestone | Responsible | Done
Do the final presentation | Everyone |

State of the art


This section contains the results of the literature study done on the subject of the project. The main conclusions drawn from the literature are:

  1. In most existing cases, the camera observes the plants from above. This will be difficult when there are also trees; three-dimensional images could be a solution.
  2. Lighting has a big influence on the functioning of weed recognition software and has to be taken into account; a solution could be converting the images into binary black-and-white pictures.
  3. Many neural networks that can distinguish weeds from crops already exist and are used in practice, but all of these applications target monoculture agriculture. The challenge of agroforestry is the combination of multiple crops.
  4. The resolution of the camera has to be high enough, as it has a large impact on the accuracy of the system.
  5. In most cases an RGB camera is used, since a hyperspectral camera is very expensive; RGB images are sufficient to work with.
  6. Most studies mention the problem of obtaining a sufficient dataset for training the neural network, which slows down the improvement of weed recognition software.
  7. Recognition can be based on colour, shape, texture, feature extraction or 3D images, so there are many options to choose from for this project.

A weed is a plant that is unwanted at the place where it grows. This is a rather broad definition, though, and therefore Perrins et al. (1992) looked into which plants are regarded as weeds among 56 scientists. Views differed greatly among the scientists. It is therefore not possible to clearly classify plants into weeds or non-weeds, since this depends on the views of a person and the context of the plant.

Hemming et al. (2013) and Hemming et al. (2018) have written research reports about a working weed detection system. The first report works with three crops: onions, carrots and spinach. It shows that recognition based on colour requires less computational power than recognition based on shape, although shape recognition is necessary in certain cases. It is important that the signal of the crop predominates over that of the weeds. For proper detection of an object, a minimum image resolution of 3 times the size of the object is required (based on Shannon's sampling theorem). The second report works with colour recognition. The HSI colour space is used to convert the colour observed by the camera into usable input for the software. The robot has a user interface so the user can help the system learn the colour of the plant: the user determines the range of colours within which the plant's colours fall, which makes the software broadly applicable. Two interactive colour segmentations are evaluated, the GrabCut algorithm and the FloodFill algorithm; both fail due to the effect of shadows and multiple colours on a plant. The research shows that the settings for saturation and intensity are very important. To realize a working system, hardware is introduced and software is added to determine the relative position of the plant. In both reports, the camera observed the plant from above.
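For illustration, a minimal colour-segmentation sketch in the spirit of this approach (assuming OpenCV; HSV is used as a readily available stand-in for HSI, and the hue/saturation bounds are placeholder values that a user would tune, much like the interactive interface described above):

```python
import cv2
import numpy as np

img = cv2.imread("field.jpg")                      # hypothetical input image
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)         # convert to HSV colour space

# Placeholder green range; in the cited system the user tunes these bounds.
lower = np.array([35, 60, 40])
upper = np.array([85, 255, 255])

mask = cv2.inRange(hsv, lower, upper)              # binary plant mask
plant_only = cv2.bitwise_and(img, img, mask=mask)  # keep only plant pixels
```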

Potato blackleg is a bacterial disease in potato plants that causes decay of the plant and may spread to neighbouring plants if the diseased plant is not removed. So far, only systems had been devised that detect the disease after harvesting. Afonso et al. (2019) created a system with a 95% precision in detecting healthy and diseased potato plants. The system consists of a deep learning algorithm, using a neural network trained on a dataset of 532 labelled images. There is a downside, however: it was devised and trained to detect plants that are separate and do not overlap, which in most scenarios is not the case. Further developments are needed to use the system in all scenarios. In addition, it proved difficult to obtain enough labelled images of the plants.

Most weed recognition and detection systems designed up to now are built for a sole purpose or context. Plants are generally considered weeds when they either compete with the crops or are harmful to livestock. Weeds are traditionally mostly battled using pesticides, but this diminishes the quality of the crops. The broad-leaved dock is one of the most common grassland weeds, and Kounalakis et al. (2018) aim to create a general recognition system for this weed. Their system relies on images and feature extraction, instead of the classical choice for neural networks, and achieved an 89% accuracy.

Salman et al. (2017) researched a method to classify plants based on 15 features of their leaves. This yielded an 85% accuracy for the classification of 22 species with a training data set of 660 images. The algorithm was based on feature extraction, with the help of the Canny edge detector and an SVM classifier.
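A rough sketch of this edge-plus-features idea (assuming OpenCV and scikit-learn; the two shape features shown are illustrative stand-ins for the paper's fifteen leaf features):

```python
import cv2
from sklearn.svm import SVC

def leaf_features(path: str) -> list:
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    edges = cv2.Canny(gray, 100, 200)                     # Canny edge map
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    c = max(contours, key=cv2.contourArea)                # largest leaf outline
    return [cv2.contourArea(c), cv2.arcLength(c, True)]   # area, perimeter

# X = [leaf_features(p) for p in image_paths]
# clf = SVC().fit(X, species_labels)
```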

Li et al. (2020) compared multiple convolutional neural networks for recognizing crop pests, using a manually collected dataset of 5629 images. They found that GoogLeNet outperformed VGG-16, VGG-19, ResNet50 and ResNet152 in terms of accuracy, robustness and model complexity. RGB images were used as input; in the future, infrared images are also an option.

Riehle et al. (2020) present a novel algorithm for plant/background segmentation in RGB images, a key component in digital image analysis dealing with plants. The algorithm has been shown to work in spite of over- or underexposure of the camera, as well as with varying colours of the crops and background. It is index-based and has proven more accurate and robust than other index-based approaches, with an accuracy of 97.4% tested on 200 images.
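To clarify what "index-based" means here, a sketch using the common Excess Green index (ExG = 2g − r − b); this is a standard vegetation index used for illustration, not necessarily the index Riehle et al. propose (assuming OpenCV/NumPy):

```python
import cv2
import numpy as np

img = cv2.imread("field.jpg").astype(np.float32)
b, g, r = cv2.split(img)
total = b + g + r + 1e-6                            # avoid division by zero
exg = 2 * (g / total) - (r / total) - (b / total)   # Excess Green index

# Threshold the index (here with Otsu) to split plant from background.
exg_u8 = cv2.normalize(exg, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
_, mask = cv2.threshold(exg_u8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
```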

Dos Santos Ferreira et al. (2017) created data by taking pictures with a drone at a height of 4 meters above ground level and used convolutional neural networks, achieving high accuracy in discriminating different types of weeds. Compared to traditional neural networks and support vector machines, deep learning has the key advantage that feature extraction is automatically learned from raw data, so it requires little manual effort. Convolutional neural networks have proven successful in image recognition. For image segmentation, the simple linear iterative clustering algorithm (SLIC) is used, which is based on the k-means centroid-based clustering algorithm. The goal was to separate the image into segments that contain multiple leaves of soy or weeds. It is important that the pictures have a high resolution of 4000 by 3000 pixels; segmentation was significantly influenced by lighting conditions. The convolutional neural network consists of 8 layers: 5 convolutional layers and 3 fully connected layers. The last layer uses softmax to produce the probability distribution, and ReLU was used for the outputs of the convolutional and fully connected layers. The classification of the segments was highly robust and superior to other approaches such as random forests and support vector machines: with a confidence threshold of 0.98, 96.3% of the images are classified correctly and none are identified incorrectly.
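A minimal SLIC segmentation sketch (assuming scikit-image; the segment count and image path are placeholders):

```python
from skimage import io
from skimage.segmentation import slic
from skimage.measure import regionprops

image = io.imread("soy_field.jpg")
# SLIC: k-means-style clustering of pixels into superpixels, as in the paper.
segments = slic(image, n_segments=200, compactness=10)

# Each superpixel patch would then be fed to the CNN for classification.
for region in regionprops(segments):
    minr, minc, maxr, maxc = region.bbox
    patch = image[minr:maxr, minc:maxc]
```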

Yu et al. (2019) note that deep convolutional neural networks (DCNNs) take a long time to train (hours) but little time to classify (under a second). The researchers compared different existing DCNNs for weed detection in perennial ryegrass, including distinguishing between different weeds. Due to the recency of the paper and its comparison across approaches, it gives a good estimate of the current state of the art: the best results reach accuracies above 0.98. It also shows weed detection in perennial ryegrass, so not in perfectly aligned crops. However, only the distinction between ryegrass and weeds is made; for robotics applications in agroforestry, different plants should be discriminated from different weeds.

Wu et al. (2020) try to improve vision-based weed control by taking a slower approach to visual processing and decision-making. Multiple overhead cameras are used, which are not suited for all types of crops; however, 3D vision is used, so the camera position might be modifiable. It should be noted that the tests were done using sugar beets, which are easy to recognize on camera.

Piron et al. (2011) suggest that there are two different types of problems: the first is characterized by the detection of weeds between rows or, more generally, structurally placed crops; the second by randomly positioned crops. Computer vision has led to successful discrimination between weeds and rows of crops: knowing where, and in which patterns, crops are expected to grow and assuming everything outside that region is a weed has proven successful. This study shows that plant height is a discriminating factor between crop and weed at early growth stages, since these plants grow at different speeds; an approach with three-dimensional images is used to facilitate this. The classification is far from robust enough, but the study shows that plant height is a key feature. The researchers also suggest that camera position and ground irregularities influence classification accuracy negatively.

Weeds share particular features: a fast growth rate, greater growth increment, and competition for resources such as water, fertilizer and space; these features are harmful for crop growth. Many line detection algorithms use Hough transformations or the perspective method. The robustness of Hough transformations is high; the problem with the perspective method is that it cannot accurately calculate the position of the crop lines at the sides of an image. Tang et al. (2016) propose combining the vertical projection method and the linear scanning method to reduce the shortcomings of other approaches. Roughly, the pictures are transformed into binary black-and-white images to control for different illumination conditions, after which a line is drawn between the bottom and top of the image such that the number of white pixels it covers is maximized. In contrast to other methods, this method runs in real time and its accuracy is relatively high.
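A rough sketch of the binarise-and-project idea (assuming OpenCV/NumPy; simplified to straight vertical columns rather than the scanned lines of Tang et al.):

```python
import cv2
import numpy as np

gray = cv2.imread("rows.jpg", cv2.IMREAD_GRAYSCALE)
# Binarise with Otsu to control for varying illumination.
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Vertical projection: count white (plant) pixels per column.
projection = (binary // 255).sum(axis=0)
crop_row_centre = int(np.argmax(projection))   # column with most plant pixels
```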

Gašparović et al. (2020) discuss the use of unmanned aerial vehicles (UAVs) to acquire spatial data that can be used to locate weeds. Four classification algorithms are tested, based on the random forest machine learning algorithm; the automatic object-based classification method achieved the highest classification accuracy. Belgiu and Drăguţ (2016) have shown that the random forest algorithm is well suited for automating classification, as it requires very few parameters. Random forest algorithms were proposed by Breiman (2001).
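For illustration, a minimal random forest classification sketch (assuming scikit-learn; the features and labels are random placeholders standing in for per-segment spectral features):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X = np.random.rand(500, 4)          # placeholder features per image segment
y = np.random.randint(0, 2, 500)    # dummy labels: 0 = crop, 1 = weed

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2)
clf = RandomForestClassifier(n_estimators=100).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```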

Espejo-Garcia et al. (2020) approach weed classification through transfer learning: pre-trained convolutional neural networks (Xception, Inception-ResNet, VGGNets, MobileNet and DenseNet) are combined with more "traditional" machine learning classifiers (support vector machines, XGBoost and logistic regression) in order to avoid overfitting and provide consistent, robust performance. This yields some impressively accurate classifiers, the best being a combination of a fine-tuned DenseNet and a support vector machine.
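A sketch of this hybrid pattern (assuming PyTorch/torchvision and scikit-learn; DenseNet121 stands in for the paper's fine-tuned DenseNet, and the data loading is elided):

```python
import numpy as np
import torch
from torchvision import models
from sklearn.svm import SVC

# Frozen pre-trained CNN as a feature extractor (classification head removed).
densenet = models.densenet121(pretrained=True)
densenet.classifier = torch.nn.Identity()
densenet.eval()

@torch.no_grad()
def features(images: torch.Tensor) -> np.ndarray:
    """images: (n, 3, 224, 224), normalised as for ImageNet."""
    return densenet(images).numpy()

# X, y = ... labelled plant images and species labels ...
# clf = SVC(kernel="rbf").fit(features(X), y)
```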

Two kinds of approaches exist: machine vision methods and spectroscopic methods (utilizing spectral reflectance or absorbance patterns). With spectroscopic methods, features such as water content, moisture or humidity can be measured. Field studies have shown that weeds and agricultural crops can be distinguished based on their relative spectral reflectance characteristics. Alchanatis et al. (2005) propose an image processing algorithm based on image texture to discriminate weeds from cotton. They used hyperspectral images to perform basic segmentation between crop and soil. The authors used a robust statistics algorithm yielding an average false alarm rate of 15%, which is worse than newer existing options.

Booij et al. (2020) researched autonomous robots that combat unwanted potato plants. Previous robots could not distinguish between potato and beetroot plants; using deep learning, this now succeeds with a 96% success rate. A robot was developed that drives across the land and takes pictures, which are sent to a KPN cloud over 5G. The pictures are analyzed by the deep learning algorithm and the result is sent back to the robot. The algorithm was trained on a dataset of about 5500 labelled pictures of potato and sugar beet plants. The robot then combats the plants detected as unwanted potato plants using a spraying unit, which is instructed by the system. This development is already a big step forward, but the error rate is still too large for the system to be put into practice.

Raja et al. (2020) created a vision and control system that was able to remove most weeds from an area without explicitly defined visual features of crops and weeds. It achieved a crop detection accuracy of 97.8% and was able to remove 83% of the weeds around plants. This appears to have been a very controlled setting, however, and the system still mainly works on simple farms.

Su et al. (2020) and Raja et al. (2020) investigated the use of specific signalling compounds to mark desired plants so that weeds can be removed. This provides a way of marking plants with a "machine-readable signal" that can be used for automatic classification. According to one of the studies, an accuracy of at least 98% was achieved in detecting weeds and crops. Further work is still needed to make this method practically functional.

Herck et al. (2020) describe how they modified the farm environment and crop design to best suit a robot harvester. This took into account what kind of harvesting is possible for a robot and for different crops, and then tried to determine how the robot could best do its job.

References


Afonso, M. V., Blok, P. M., Polder, G., van der Wolf, J. M., & Kamp, J. A. L. M. (2019). Blackleg Detection in Potato Plants using Convolutional Neural Networks. Paper presented at 6th IFAC Conference on Sensing, Control and Automation Technologies for Agriculture, AgriControl 2019, Sydney, Australia.

Alchanatis, V., Ridel, L., Hetzroni, A., & Yaroslavsky, L. (2005). Weed detection in multi-spectral images of cotton fields. Computers and Electronics in Agriculture, 47(3), 243-260. doi:10.1016/j.compag.2004.11.019

Bawden, O., Kulk, J., Russell, R., McCool, C., English, A., Dayoub, F., . . . Perez, T. (2017). Robot for weed species plant-specific management. Journal of Field Robotics, 34(6), 1179-1199. doi:10.1002/rob.21727

Belgiu, M., & Drăguţ, L. (2016). Random forest in remote sensing: A review of applications and future directions. ISPRS Journal of Photogrammetry and Remote Sensing, 114, 24–31. https://doi.org/10.1016/j.isprsjprs.2016.01.011

Booij, J., Nieuwenhuizen, A., van Boheemen, K., de Visser, C., Veldhuisen, B., Vroegop, A., ... Ruigrok, T. (2020). 5G Fieldlab Rural Drenthe: duurzame en autonome onkruidbestrijding. (Rapport / Stichting Wageningen Research, Wageningen Plant Research, Business unit Agrosysteemkunde; No. WPR). Wageningen: Stichting Wageningen Research, Wageningen Plant Research, Business unit Agrosysteemkunde. https://doi.org/10.18174/517141

Breiman, L. (2001). Random Forests. Machine Learning 45, 5–32. https://doi.org/10.1023/A:1010933404324

Carvalho, L., & Von Wangenheim, A. (2019). 3D object recognition and classification: A systematic literature review. Pattern Analysis and Applications, 22(4), 1243–1292. doi:10.1007/s10044-019-00804-4

Comer, S., Ekanem, E., Muhammad, S., Singh, S. P., & Tegegne, F. (1999). Sustainable and conventional farmers: A comparison of socio-economic characteristics, attitude, and beliefs. Journal of Sustainable Agriculture, 15(1), 29-45.

Dos Santos Ferreira, A., Matte Freitas, D., Gonçalves da Silva, G., Pistori, H., & Theophilo Folhes, M. (2017). Weed detection in soybean crops using convnets. Computers and Electronics in Agriculture, 143, 314-324. doi:10.1016/j.compag.2017.10.027

Duong, L.T., Nguyen, P.T., Sipio, C., Ruscio, D. (2020). Automated fruit recognition using EfficientNet and MixNet. Computers and Electronics in Agriculture, 171. https://doi.org/10.1016/j.compag.2020.105326

Espejo-Garcia, B., Mylonas, N., Athanasakos, L., Fountas, S., & Vasilakoglou, I. (2020). Towards weeds identification assistance through transfer learning. Computers and Electronics in Agriculture, 171, Article 105306. https://doi.org/10.1016/j.compag.2020.105306

Finger, R., Swinton, S. M., El Benni, N., & Walter, A. (2019). Precision Farming at the Nexus of Agricultural Production and the Environment. Annual Review of Resource Economics, 11(1), 313–335.

Gašparović, M., Zrinjski, M., Barković, D., & Radočaj, D. (2020). An automatic method for weed mapping in oat fields based on UAV imagery. Computers and Electronics in Agriculture, 173, Article 105385. https://doi.org/10.1016/j.compag.2020.105385

Haggblade, S., Smale, M., Kergna, A., Theriault V., Assima, A. (2017). Causes and Consequences of Increasing Herbicide Use in Mali. Eur J Dev Res 29, 648–674. https://doi.org/10.1057/s41287-017-0087-2

He, K., Zhang, X., Ren, S., & Sun, J. (2015). Deep Residual Learning for Image Recognition. arXiv:1512.03385.

Hemming, J., Barth, R., & Nieuwenhuizen, A. T. (2013). Automatisch onkruid bestrijden PPL-094 : doorontwikkelen algoritmes voor herkenning onkruid in uien, peen en spinazie. Wageningen: Plant Research International, Business Unit Agrosysteemkunde.

Hemming, J., Blok, P., & Ruizendaal, J. (2018). Precisietechnologie Tuinbouw: PPS Autonoom onkruid verwijderen: Eindrapportage. (Rapport WPR; No. 750). Bleiswijk: Wageningen Plant Research, Business unit Glastuinbouw. https://doi.org/10.18174/442083

Herck, L., Kurtser, P., Wittemans, L., & Edan, Y. (2020). Crop design for improved robotic harvesting: A case study of sweet pepper harvesting. Biosystems Engineering, 192. https://doi.org/10.1016/j.biosystemseng.2020.01.021

Koh, L. P. (2010). Agroforestry Implications. Biotropica, 42(6), 760.

Kounalakis, T., Triantafyllidis, G. A., & Nalpantidis, L. (2018). Image-based recognition framework for robotic weed control systems. Multimedia Tools and Applications, 77(8), 9567–9594. https://doi.org/10.1007/s11042-017-5337-y

Li, Y., Wang, H., Dang, L. M., Sadeghi-Niaraki, A., & Moon, H. (2020). Crop pest recognition in natural scenes using convolutional neural networks. Computers and Electronics in Agriculture, 169, Article 105174. https://doi.org/10.1016/j.compag.2019.105174

Ministerie van Landbouw, Natuur en Voedselkwaliteit. (2019). Landbouwbeleid. Consulted from: https://www.rijksoverheid.nl/onderwerpen/landbouw-en-tuinbouw/landbouwbeleid

Perrins, J., Williamson, M., & Fitter, A. (1992). A survey of differing views of weed classification: implications for regulation of introductions. Biological Conservation, 60(1), 47-56.

Pingali, P. L. (2001). Environmental consequences of agricultural commercialization in Asia. Environment and Development Economics, 6(4), 483–502.

Piron, A., van der Heijden, F., & Destain, M. F. (2011). Weed detection in 3D images. Precision Agriculture, 12, 607–622. https://doi.org/10.1007/s11119-010-9205-2

Plourde, J. D., Pijanowski, B. C., & Pekin, B. K. (2013). Evidence for Increased Monoculture Cropping in the Central United States. Agriculture, Ecosystems and Environment, 165, 50–59.

Raja, R., Nguyen, T. T., Slaughter, D. C., & Fennimore, S. A. (2020). Real-time robotic weed knife control system for tomato and lettuce based on geometric appearance of plant labels. Biosystems Engineering, 194. https://doi.org/10.1016/j.biosystemseng.2020.03.022

Raja, R., Nguyen, T. T., Slaughter, D. C., & Fennimore, S. A. (2020). Real-time weed-crop classification and localisation technique for robotic weed control in lettuce. Biosystems Engineering, 192. https://doi.org/10.1016/j.biosystemseng.2020.02.002

Riehle, D., Reiser, D., & Griepentrog, H. W. (2020). Robust index-based semantic plant/background segmentation for RGB images. Computers and Electronics in Agriculture, 169, Article 105201. https://doi.org/10.1016/j.compag.2019.105201

Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., Berg, A. C., & Fei-Fei, L. (2015). ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision, 115(3), 211–252.

Salman, A., Semwal, A., Bhatt, U., Thakkar, V.M. (2017). Leaf classification and identification using Canny Edge Detector and SVM classifier. 2017 International Conference on Inventive Systems and Control (ICISC), Coimbatore, pp. 1-4.

Su, W., Fennimore, S. A., & Slaughter, D. C. (2020). Development of a systemic crop signalling system for automated real-time plant care in vegetable crops. Biosystems Engineering, 193. https://doi.org/10.1016/j.biosystemseng.2020.02.011

Tang, J. L., Chen, X. Q., Miao, R. H., & Wang, D. (2016). Weed detection using image processing under different illumination for site-specific areas spraying. Computers and Electronics in Agriculture, 122, 103-111.

Wu, X., Aravecchia, S., Lottes, P., Stachniss, C., & Pradalier, C. (2020). Robotic weed control using automated weed and crop classification. Journal of Field Robotics, 37, 322–340. https://doi.org/10.1002/rob.21938

Yu, J., Schumann, A., Cao, Z., Sharpe, S., & Boyd, N. (2019). Weed detection in perennial ryegrass with deep learning convolutional neural network. Frontiers in Plant Science, 10, 1422. https://doi.org/10.3389/fpls.2019.01422

Who has done what



Week 1:

Name (ID) | Hours | Work done
Hilde van Esch (1306219) | 11 | Intro lecture + group formation (1 hour) + Meetings (3 hours) + Brainstorming ideas (1 hour) + Literature research (4 hours) + User (2 hours)
Leighton van Gellecom (1223623) | 13 | Intro lecture + group formation (1 hour) + Meetings (3 hours) + Brainstorming ideas (1 hour) + Literature research (6.5 hours) + Problem statement (1.5 hours)
Tom van Leeuwen (1222283) | 9 | Intro lecture + group formation (1 hour) + Meetings (3 hours) + Brainstorming ideas (1 hour) + Literature research (2 hours) + Approach, milestones and deliverables (2 hours)
Karla Gloudemans (0988750) | 15 | Intro lecture + group formation (1 hour) + Meetings (3 hours) + Brainstorming ideas (1 hour) + Literature research & State of the Art combining (9 hours) + Typing out minutes (1 hour)
Timon Heuwekemeijer (1003212) | 9 | Intro lecture + group formation (1 hour) + Meetings (3 hours) + Brainstorming ideas (1 hour) + Literature research (4 hours)

Week 2:

Name (ID) | Hours | Work done
Hilde van Esch (1306219) | | Meetings (2 hours) + Reviewing wiki page (1 hour)
Leighton van Gellecom (1223623) | |
Tom van Leeuwen (1222283) | | Meetings (1 hour)
Karla Gloudemans (0988750) | | Meetings (1 hour)
Timon Heuwekemeijer (1003212) | 4.5 | Meetings (2.5 hours) + Create a planning (2 hours)