Recently there has been much interest in hyperspectral imaging research and applications. Hyperspectral cameras collect image data from across the electromagnetic spectrum, aiming to capture a spectrum for each pixel and thereby form a hyperspectral cube. The application area for these cameras is broad, ranging from vegetation inspection to chemical fingerprinting.
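For illustration, a hyperspectral cube can be viewed as a three-dimensional array: two spatial axes plus one spectral axis, so that slicing at a fixed pixel position yields that pixel's spectrum. The sketch below uses NumPy with hypothetical dimensions (a 64 x 64-pixel image with 200 spectral bands); it is not taken from the dissertation itself.

```python
import numpy as np

# Hypothetical dimensions: a 64x64-pixel image with 200 spectral bands.
height, width, n_bands = 64, 64, 200

# A hyperspectral cube stored as a 3-D array:
# two spatial axes plus one spectral axis.
cube = np.random.rand(height, width, n_bands)

# The spectrum of a single pixel is a 1-D slice along the spectral axis.
spectrum = cube[10, 20, :]
print(spectrum.shape)  # (200,)
```

A conventional RGB camera would correspond to the degenerate case of only three very broad bands along the spectral axis.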
Small battery-powered multi-copters, commonly called Unmanned Aerial Vehicles (UAVs), have recently shown great potential for a large number of applications. Cameras mounted on UAVs are used for the inspection of large objects or large areas. For example, in the field of precision agriculture they are used to measure chlorophyll content as an indicator of crop health. Furthermore, UAVs are used for the inspection of wind turbines, taking images of cracks, pinholes and other defects.
UAV applications could benefit greatly from hyperspectral imaging technology, but these devices have intrinsic limitations that make combining them challenging. This is mainly caused by the movement sensitivity or low spatial resolution of hyperspectral devices, together with the limited payload capacity of UAVs.
Deep learning or, more specifically, Convolutional Neural Networks (CNNs) have been shown to achieve state-of-the-art performance in a multitude of research fields, and many applications already benefit from them. Usually these deep learning models are trained directly from data in an end-to-end fashion. This dissertation revolves around the question of whether algorithms from the field of deep learning can mitigate the difficulties caused by the limitations encountered in combining hyperspectral imaging and UAVs.
This trinity of technologies (deep learning, hyperspectral imaging and UAVs) serves as the framework within which this research is defined.
Thesis available at: http://hdl.handle.net/11370/c1ae3b2f-86f4-4aa4-be1e-b00d606f2e42