Adaptive Asymmetric Loss Function for Positive Unlabeled Learning

Unlike traditional semi-supervised learning, positive unlabeled learning requires labeled data from only the positive class; all other data, whether positive or negative, is unlabeled.  The goal is the same: to construct a classification model that correctly labels the unlabeled images and can also label future images.  This problem is surprisingly common in the real world, with both commercial and military applications; detecting a new class in remote sensing is one such example.
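
To make the setting concrete, the sketch below shows one common way a positive unlabeled training set arises from a fully labeled binary dataset: a small fraction of the positive class keeps its label and everything else becomes unlabeled.  The function and parameter names (make_pu_labels, label_frac) are illustrative and not taken from the publication.

```python
# Hypothetical sketch (not from the publication) of how a positive-unlabeled
# training set is derived from a fully labeled binary dataset: only a small
# fraction of the positive class keeps its label, and every other sample,
# positive or negative, becomes "unlabeled".
import numpy as np

def make_pu_labels(y_true, label_frac=0.01, seed=0):
    """y_true: array of 0/1 ground-truth labels.
    Returns s: 1 for labeled-positive samples, 0 for unlabeled samples."""
    rng = np.random.default_rng(seed)
    s = np.zeros_like(y_true)
    pos_idx = np.flatnonzero(y_true == 1)
    n_labeled = max(1, int(label_frac * pos_idx.size))
    labeled = rng.choice(pos_idx, size=n_labeled, replace=False)
    s[labeled] = 1   # observed positive labels
    return s         # s == 0 means unlabeled, not necessarily negative

# Example: only 1% of the positives carry a label; the classifier must be
# trained from s alone, without access to y_true.
y_true = np.random.default_rng(1).integers(0, 2, size=10_000)
s = make_pu_labels(y_true, label_frac=0.01)
```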

Researchers at Arizona State University and Prime Solutions Group, Inc. have developed an algorithm that provides near-supervised classification accuracy with very low levels of labeled data (as little as 1% or less).  The algorithm is a new, efficient solution for positive unlabeled learning tailored specifically to a deep learning framework and has been tested on image classification.  When only positive and unlabeled data is available for training, the custom loss function, paired with a simple linear transform of the output, yields an inductive classifier that requires no estimate of the class prior.
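
The published loss is not reproduced here; the following PyTorch sketch only illustrates the general idea of an asymmetric positive unlabeled loss with a learnable asymmetry parameter, together with a linear transform of the output used at prediction time.  The class name, the parameter alpha, and the transform coefficients are assumptions made for illustration, not the authors' implementation.

```python
# Minimal PyTorch sketch of the general idea only: an asymmetric PU loss whose
# asymmetry parameter is learned jointly with the network, followed by a linear
# transform of the output for prediction. Names and coefficients here
# (AsymmetricPULoss, alpha, a, b) are illustrative assumptions; the published
# loss function and transform differ in their exact form.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AsymmetricPULoss(nn.Module):
    def __init__(self):
        super().__init__()
        # Learnable asymmetry parameter, adapted to the data during training
        # (kept positive via softplus).
        self.raw_alpha = nn.Parameter(torch.zeros(1))

    def forward(self, logits, s):
        """logits: raw network outputs; s: 1 = labeled positive, 0 = unlabeled."""
        alpha = F.softplus(self.raw_alpha) + 1e-3
        # Labeled positives: ordinary logistic loss pushing the score toward 1.
        pos_loss = F.binary_cross_entropy_with_logits(
            logits, torch.ones_like(logits), reduction="none")
        # Unlabeled samples: asymmetrically scaled penalty toward 0, because an
        # unknown fraction of them is actually positive.
        unl_loss = alpha * F.binary_cross_entropy_with_logits(
            logits, torch.zeros_like(logits), reduction="none")
        return torch.where(s == 1, pos_loss, unl_loss).mean()

def pu_predict(logits, a=2.0, b=-0.5):
    """Illustrative linear transform of the sigmoid output; in the actual method
    the transform is tied to the learned loss, not to fixed coefficients."""
    p = torch.sigmoid(logits)
    return (a * p + b).clamp(0.0, 1.0) > 0.5
```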

Related publication: An adaptive asymmetric loss function for positive unlabeled learning

Potential Applications:

  • Deep neural networks for:
    • Image classification
    • Remote sensing
    • Other commercial and/or military applications

Benefits and Advantages:

  • Tested on benchmark image datasets including MNIST, Cats-vs-Dogs, and STL-10
    • Outperforms current state-of-the-art algorithms, particularly when the proportion of labeled samples is very low and on realistic open training set problems such as STL-10
  • Effectively learns a binary classifier on positive unlabeled data
  • Adaptive asymmetric loss function whose parameters are learned from the structure of the data (see the training sketch after this list)
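
As a purely hypothetical usage example, the sketch below reuses the AsymmetricPULoss class from the earlier sketch and shows the asymmetry parameter being optimized jointly with the network weights, which is one way a loss can adapt to the structure of the data.  The architecture, optimizer, and labeling fraction are arbitrary illustrative choices, not the tested configuration.

```python
# Hypothetical end-to-end usage of the AsymmetricPULoss sketch above: the loss's
# asymmetry parameter is optimized together with the network weights, so the
# loss shape adapts to the training data. Architecture, optimizer, and labeling
# fraction are arbitrary choices for illustration.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128),
                      nn.ReLU(), nn.Linear(128, 1))
criterion = AsymmetricPULoss()  # defined in the earlier sketch
optimizer = torch.optim.Adam(
    list(model.parameters()) + list(criterion.parameters()), lr=1e-3)

x = torch.randn(64, 1, 28, 28)        # e.g. a batch of MNIST-sized images
s = (torch.rand(64) < 0.01).float()   # ~1% observed positive labels

optimizer.zero_grad()
logits = model(x).squeeze(1)
loss = criterion(logits, s)
loss.backward()
optimizer.step()
```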

Patent Information: