In this tutorial, we will apply a neural network model to a cloud detection application using satellite imaging data. The data comes from NASA’s Multi-angle Imaging SpectroRadiometer (MISR), which was launched in 1999. The MISR has nine cameras that view the Earth from nine different directions, allowing it to measure elevations and angular radiance signatures of objects. We will use the radiances measured by the MISR, together with features developed using domain expertise, to learn to detect whether clouds are present in polar regions. This is a particularly challenging task due to the snow and ice covering the ground surface.
The data consists of MISR measurements and expert features for 3 images of polar regions. For each location in the image grid, there is an expert label indicating whether or not clouds are present, along with 8 features (radiances and expert features). Data from two of the images will comprise the training set, and the remaining image will be the test set.
The data can be found in a tar.gz file located at the URL provided in the next cell. It is organized into two directories (train and test). Each directory contains five subdirectories: n1, n2, n3, n4, and n5. The data in n1 contains the expert features and the label for a particular location in an image; n2, n3, n4, and n5 contain the expert features of the locations nearest to the original location.
We will additionally use features from a location’s nearest neighbors as inputs to our model, because there are dependencies across neighboring locations. In other words, if a location’s neighbors have a positive cloud label, the original location is more likely to have a positive cloud label as well; the reverse applies too.
import org.apache.commons.io.FilenameUtils

val DATA_URL = "https://dl4jdata.blob.core.windows.net/training/tutorials/Cloud.tar.gz"
val DATA_PATH = FilenameUtils.concat(System.getProperty("java.io.tmpdir"), "dl4j_cloud/")
Download Data
To download the data, we will create a temporary directory to store the data files, download the tar.gz file from the URL, and place it in that directory.
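The download cell itself is not reproduced here; a minimal sketch of this step, assuming Apache Commons IO is on the classpath, might look like the following. It also defines the archivePath value reused by the extraction code below.

import java.io.File
import java.net.URL
import org.apache.commons.io.FileUtils

// Create the temporary directory if it does not already exist
val directory = new File(DATA_PATH)
if (!directory.exists()) directory.mkdirs()

// Download the archive only if it is not already present
val archivePath = DATA_PATH + "Cloud.tar.gz"
val archiveFile = new File(archivePath)
if (!archiveFile.exists()) {
  FileUtils.copyURLToFile(new URL(DATA_URL), archiveFile)
}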
Next, we extract the data from the tar.gz file, recreating the archive's directory structure inside our temporary directory and copying the files into it.
import java.io.{BufferedInputStream, BufferedOutputStream, File, FileInputStream, FileOutputStream}
import org.apache.commons.compress.archivers.tar.{TarArchiveEntry, TarArchiveInputStream}
import org.apache.commons.compress.compressors.gzip.GzipCompressorInputStream

var fileCount = 0
var dirCount = 0
val BUFFER_SIZE = 4096

val tais = new TarArchiveInputStream(new GzipCompressorInputStream(
  new BufferedInputStream(new FileInputStream(archivePath))))

var entry = tais.getNextEntry().asInstanceOf[TarArchiveEntry]
while (entry != null) {
  if (entry.isDirectory()) {
    // Recreate the archive's directory structure inside the temporary directory
    new File(DATA_PATH + entry.getName()).mkdirs()
    dirCount = dirCount + 1
    fileCount = 0
  } else {
    // Copy the file's contents into the corresponding location under DATA_PATH
    val data = new Array[scala.Byte](4 * BUFFER_SIZE)
    val fos = new FileOutputStream(DATA_PATH + entry.getName())
    val dest = new BufferedOutputStream(fos, BUFFER_SIZE)
    var count = tais.read(data, 0, BUFFER_SIZE)
    while (count != -1) {
      dest.write(data, 0, count)
      count = tais.read(data, 0, BUFFER_SIZE)
    }
    dest.close()
    fileCount = fileCount + 1
  }
  if (fileCount % 1000 == 0) {
    print(".")
  }
  entry = tais.getNextEntry().asInstanceOf[TarArchiveEntry]
}
DataSetIterators
Our next goal is to convert the raw data (CSV files) into a DataSetIterator, which can then be fed into a neural network for training. We will first obtain the paths to the raw data files.
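As a small sketch, assuming the directory layout described earlier with the extracted data under DATA_PATH, the paths to the five training and five test subdirectories can be collected like this:

// Paths to the five subdirectories (original location + four nearest neighbors)
// for the training and test sets
val subDirs = Seq("n1", "n2", "n3", "n4", "n5")
val trainPaths = subDirs.map(d => DATA_PATH + "train/" + d + "/")
val testPaths  = subDirs.map(d => DATA_PATH + "test/" + d + "/")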
We will then create two DataSetIterators to feed the data into a neural network. But first, we will initialize CSVRecordReaders to parse the raw data and convert it to a record-like format. We create a separate CSVRecordReader for the original location and for each nearest neighbor. Since the data is contained in separate RecordReaders, we will use a RecordReaderMultiDataSetIterator, which allows for multiple inputs or outputs. We add the RecordReaders using the addReader method of the RecordReaderMultiDataSetIterator.Builder class, specify the inputs using the addInput method, and specify the label using the addOutputOneHot method.
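The exact construction depends on the CSV layout. The sketch below assumes the label sits in the first column of the n1 files, three feature columns are used per location (matching nIn(3) in the configuration below), and a batch size of 50; the trainPaths and testPaths values come from the previous sketch. Adjust the column indices to match the actual files.

import java.io.File
import org.datavec.api.records.reader.impl.csv.CSVRecordReader
import org.datavec.api.split.FileSplit
import org.deeplearning4j.datasets.datavec.RecordReaderMultiDataSetIterator

val batchSize = 50  // assumed batch size

// One CSVRecordReader per subdirectory; FileSplit picks up the files inside it
def readersFor(paths: Seq[String]): Seq[CSVRecordReader] = paths.map { p =>
  val rr = new CSVRecordReader()
  rr.initialize(new FileSplit(new File(p)))
  rr
}

// Builds a MultiDataSetIterator with five inputs and a single one-hot label.
// Column indices (label in column 0 of n1, features in the following columns)
// are assumptions about the CSV layout.
def buildIterator(paths: Seq[String]): RecordReaderMultiDataSetIterator = {
  val readers = readersFor(paths)
  val builder = new RecordReaderMultiDataSetIterator.Builder(batchSize)
  readers.zipWithIndex.foreach { case (rr, i) =>
    builder.addReader("rr" + (i + 1), rr)
  }
  builder
    .addInput("rr1", 1, 3)          // features of the original location
    .addInput("rr2", 0, 2)          // features of the four nearest neighbors
    .addInput("rr3", 0, 2)
    .addInput("rr4", 0, 2)
    .addInput("rr5", 0, 2)
    .addOutputOneHot("rr1", 0, 2)   // label column, 2 classes (cloud / no cloud)
    .build()
}

val trainIter = buildIterator(trainPaths)
val testIter  = buildIterator(testPaths)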
Now that the DataSetIterators are initialized, we can specify the configuration of the neural network. We will ultimately use a ComputationGraph, since we have multiple inputs to the network; MultiLayerNetworks cannot be used when there are multiple inputs and/or outputs.
To specify the network architecture and the hyperparameters, we use the NeuralNetConfiguration.Builder class with its graphBuilder. The inputs are declared with the addInputs method, and a dense layer is attached to each input using the addLayer method. Because the inputs are separate, the addVertex method is used to add a MergeVertex to the network. This vertex merges the outputs of the input layers into a combined representation. Finally, a fully connected layer is applied to the merged output, which passes its activations to the final output layer.
The other hyperparameters, such as the optimization algorithm, updater, and number of hidden nodes, are also specified in this block of code.
val conf = new NeuralNetConfiguration.Builder()
  .optimizationAlgo(OptimizationAlgorithm.STOCHASTIC_GRADIENT_DESCENT)
  .updater(new Adam())
  .graphBuilder()
  .addInputs("input1", "input2", "input3", "input4", "input5")
  .addLayer("L1", new DenseLayer.Builder()
    .weightInit(WeightInit.XAVIER)
    .activation(Activation.RELU)
    .nIn(3).nOut(50)
    .build(), "input1")
  .addLayer("L2", new DenseLayer.Builder()
    .weightInit(WeightInit.XAVIER)
    .activation(Activation.RELU)
    .nIn(3).nOut(50)
    .build(), "input2")
  .addLayer("L3", new DenseLayer.Builder()
    .weightInit(WeightInit.XAVIER)
    .activation(Activation.RELU)
    .nIn(3).nOut(50)
    .build(), "input3")
  .addLayer("L4", new DenseLayer.Builder()
    .weightInit(WeightInit.XAVIER)
    .activation(Activation.RELU)
    .nIn(3).nOut(50)
    .build(), "input4")
  .addLayer("L5", new DenseLayer.Builder()
    .weightInit(WeightInit.XAVIER)
    .activation(Activation.RELU)
    .nIn(3).nOut(50)
    .build(), "input5")
  .addVertex("merge", new MergeVertex(), "L1", "L2", "L3", "L4", "L5")
  .addLayer("L6", new DenseLayer.Builder()
    .weightInit(WeightInit.XAVIER)
    .activation(Activation.RELU)
    .nIn(250).nOut(125).build(), "merge")
  .addLayer("out", new OutputLayer.Builder()
    .lossFunction(LossFunctions.LossFunction.MCXENT)
    .weightInit(WeightInit.XAVIER)
    .activation(Activation.SOFTMAX)
    .nIn(125)
    .nOut(2).build(), "L6")
  .setOutputs("out")
  .build()
Model Training
We are now ready to train our model. We initialize our ComputationGraph and train it for the specified number of epochs with a call to the fit method of the ComputationGraph.
val model = new ComputationGraph(conf)
model.init()
model.fit(trainIter, 5)
To evaluate our model, we simply use the evaluateROC method of the ComputationGraph class.
val roc = model.evaluateROC[ROC](testIter, 100)
Finally we can print out the area under the curve (AUC) metric!
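A one-line sketch of this final step, using the ROC instance returned above:

// Print the area under the ROC curve for the held-out test image
println("FINAL TEST AUC: " + roc.calculateAUC())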