Job - nnMetrics.py

This page contains an in-depth description of the nnMetrics.py file.

Function definitions

Please note that you cannot change the following function names or their input variables. All of the following functions must exist in this file; however, you may modify their bodies as per your requirement.

  1. metricSupportFn - This function defines how a predicted value is judged right or wrong during model validation. It takes two inputs: outputs and labels. outputs is the predicted value from the neural network, and labels is the true label for the input data. Given these, the function should return the total number of labels and the number of correctly predicted values, in that order (i.e. total, correct). These values are used to calculate the model's accuracy.

  2. metricSupportFn2 - This function defines how to calculate true positives, false positives, true negatives and false negatives, returned in that order. Like metricSupportFn, it takes two inputs: outputs and labels. It is also used during model validation, to calculate metrics such as specificity, sensitivity and negative predictive value.

  3. optimizerFn - This function defines the optimizer to be used in model training. It takes two inputs: model and lr, where model is the neural network as defined in nn.py and lr is the learning rate as defined by input.json's lr. It returns the optimizer you want to use during model training.

  4. criterionFn - This function takes no input and returns the loss function for model training.

  5. transformFn - This function takes no input and returns the transformations you want to apply to the data before training and validation.
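Taken together, the tuples returned by metricSupportFn and metricSupportFn2 are enough to derive the validation metrics mentioned above. The snippet below is an illustrative sketch with made-up counts; the exact formulas the framework applies internally are an assumption here, using the standard definitions:

```python
# Hypothetical example values, as they might be returned during validation:
total, correct = 100, 87          # from metricSupportFn
tp, fp, tn, fn = 40, 5, 47, 8     # from metricSupportFn2

accuracy    = correct / total          # fraction of correct predictions
sensitivity = tp / (tp + fn)           # true positive rate (recall)
specificity = tn / (tn + fp)           # true negative rate
npv         = tn / (tn + fn)           # negative predictive value

print(accuracy, sensitivity, specificity, npv)
```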

Sample file


import numpy as np

def metricSupportFn(outputs, labels):
    # Move tensors to NumPy for element-wise comparison
    labels = np.array([t.numpy() for t in labels])
    outputs = outputs.detach().cpu().numpy()
    # Threshold scores at 0.5 by rounding
    outputs_rounded = np.round(outputs)
    # Per-sample fraction of correctly predicted labels
    vals = []
    for row in (outputs_rounded == labels):
        vals.append(row.sum() / len(row))
    total = len(vals)
    correct = np.array(vals).sum()
    return total, correct

def metricSupportFn2(outputs, labels):
    from sklearn.metrics import confusion_matrix
    # Move tensors to NumPy for sklearn
    outputs = outputs.detach().cpu().numpy()
    labels = np.array([t.numpy() for t in labels])
    classes = labels.shape[1]
    tp, fp, tn, fn = 0, 0, 0, 0
    for i in range(classes):
        # Threshold scores at 0.5 to get binary predictions per class
        y_pred = np.where(outputs[:, i] > 0.5, 1, 0)
        # labels=[0, 1] guarantees a 2x2 matrix even if a class is absent
        tn_tmp, fp_tmp, fn_tmp, tp_tmp = confusion_matrix(labels[:, i], y_pred, labels=[0, 1]).ravel()
        tp += tp_tmp
        fp += fp_tmp
        tn += tn_tmp
        fn += fn_tmp
    return (tp, fp, tn, fn)

def optimizerFn(model, lr):
    import torch.optim as optim
    # SGD with momentum; any torch optimizer can be returned here
    return optim.SGD(model.parameters(), lr=lr, momentum=0.9)

def criterionFn():
    import torch
    return torch.nn.CrossEntropyLoss()

def transformFn():
    from torchvision import transforms
    # Resize, apply random flips for augmentation, then convert to tensor
    t = transforms.Compose([
                transforms.Resize((224, 224)),
                transforms.RandomHorizontalFlip(),
                transforms.RandomVerticalFlip(),
                transforms.ToTensor()
    ])
    return t
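To see what metricSupportFn in the sample file computes, here is a self-contained sketch of the same rounding-and-comparison logic, using plain NumPy arrays in place of PyTorch tensors so it runs without torch installed; the input values are made up for illustration:

```python
import numpy as np

outputs = np.array([[0.9, 0.1], [0.2, 0.8], [0.6, 0.7]])  # predicted scores
labels  = np.array([[1, 0],     [0, 1],     [1, 0]])      # multi-hot targets

rounded = np.round(outputs)                    # threshold at 0.5
# Per-sample fraction of labels predicted correctly
vals = [row.sum() / len(row) for row in (rounded == labels)]
total, correct = len(vals), np.array(vals).sum()

print(total, correct)  # total=3, correct=2.5, so accuracy = 2.5 / 3
```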
