Learn to Learn Like a Child — Few-Shot Learning for Image Classification

TL;DR:
- The typical approach to k-class image classification requires many sample images for each class, hundreds or even thousands, which makes image collection and labeling a huge challenge.
- Few-shot learning offers an alternative: given a query image and a k-way n-shot Support Set of reference images, where each of the k classes contains only a few (n) images, we can determine which class the query image belongs to by computing the similarity between the query image's feature vector and the mean feature vector of each class.
The four categories of animals on the cards, with one image per category. Image Source: Google Image Search.

Traditional Approach for Image Classification

Learn to Learn with Few-Shot Learning

Two images from the Anime_Woman class. Image Source: Google Image Search.
image_sim = sim(image_1, image_2)
print(image_sim)
1.00
The left image is from the Anime_Woman class, and the right is from the Real_Woman class. Image Source: Google Image Search.
image_sim = sim(image_1, image_2)
print(image_sim)
0.00
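The article does not show how `sim()` is implemented, but the snippets above are consistent with cosine similarity between feature vectors. A minimal sketch, assuming the images have already been mapped to feature vectors by an embedding network (the function operates on vectors, not raw images):

```python
import numpy as np

def sim(vec_1, vec_2):
    """Cosine similarity between two feature vectors, in [-1, 1]."""
    return float(np.dot(vec_1, vec_2) /
                 (np.linalg.norm(vec_1) * np.linalg.norm(vec_2)))

# Two identical feature vectors -> similarity 1.0
print(round(sim(np.array([0.2, 0.9, 0.4]), np.array([0.2, 0.9, 0.4])), 2))  # 1.0

# Two orthogonal feature vectors -> similarity 0.0
print(round(sim(np.array([1.0, 0.0]), np.array([0.0, 1.0])), 2))  # 0.0
```

In practice the two images of the same class would not produce exactly 1.00, and images from different classes would not produce exactly 0.00; the scores above are idealized for illustration.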

Query and Support Set

An illustration of one Query Image and a 4-way 1-shot Support Set. Some of the images are from Google Image Search.
An illustration of one Query Image and a 4-way 2-shot Support Set. Some of the images are from Google Image Search.
An example of a query against a 4-way Support Set with only one image per class. The highest similarity score between the query image's feature vector and a class's feature vector within the Support Set indicates the chosen class, in this case "Anime_Man".
An example of a query against a 4-way Support Set with two sample images per class. The highest similarity score between the query image's feature vector and a class's mean feature vector within the Support Set indicates the chosen class, in this case "Real_Woman".

Training a Few-Shot Learning Model

  • Map the images in the Query and Support Set to feature vectors.
  • Average the feature vectors within each class to obtain one mean vector per class. If the Support Set has k classes, we get k mean vectors.
  • Compare the query feature vector with each of the k mean vectors using cosine similarity to obtain the similarity scores; the class with the highest score is the prediction.
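The steps above can be sketched in a few lines, assuming the images have already been mapped to feature vectors (the embedding network itself is outside this sketch); `predict_class` and the toy vectors below are hypothetical names and values for illustration:

```python
import numpy as np

def cosine_sim(a, b):
    # Cosine similarity between two feature vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def predict_class(query_vec, support_set):
    # support_set: dict mapping class name -> list of n-shot feature vectors.
    # Step 2: average the n feature vectors per class -> k mean vectors.
    class_means = {cls: np.mean(vecs, axis=0)
                   for cls, vecs in support_set.items()}
    # Step 3: cosine similarity of the query against each class mean;
    # the class with the highest score is the prediction.
    scores = {cls: cosine_sim(query_vec, mean)
              for cls, mean in class_means.items()}
    return max(scores, key=scores.get)

# Toy 4-way 2-shot example with made-up 3-d feature vectors.
support_set = {
    "Anime_Man":   [np.array([0.9, 0.1, 0.0]), np.array([0.8, 0.2, 0.1])],
    "Anime_Woman": [np.array([0.1, 0.9, 0.0]), np.array([0.2, 0.8, 0.1])],
    "Real_Man":    [np.array([0.0, 0.1, 0.9]), np.array([0.1, 0.0, 0.8])],
    "Real_Woman":  [np.array([0.5, 0.5, 0.5]), np.array([0.6, 0.4, 0.5])],
}
query_vec = np.array([0.85, 0.15, 0.05])
print(predict_class(query_vec, support_set))  # Anime_Man
```

Real feature vectors would come from a trained embedding network and have hundreds of dimensions, but the mean-and-compare logic is the same.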

Prediction Accuracy


Andi Sama
https://www.linkedin.com/in/andisama/