
Implement multi-objective loss #187

Open · monatis opened this issue Dec 8, 2022 · 2 comments
Labels: enhancement (New feature or request)

monatis (Contributor) commented Dec 8, 2022

When only limited data are available, samples may be labelled along several different aspects. In such a case, it can be beneficial to use a multi-objective loss, i.e., a utility that wraps other losses and is responsible for passing embeddings and labels to each of the configured losses.

I propose adding such an implementation; PytorchMetricLearningWrapper might serve as an example.
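A minimal sketch of what such a wrapper could look like, assuming each wrapped loss follows a `loss_fn(embeddings, labels)` calling convention (the class name, constructor arguments, and weighting scheme are hypothetical, not existing API):

```python
import torch
from torch import nn


class MultiObjectiveLoss(nn.Module):
    """Hypothetical wrapper: holds several losses and forwards the
    embeddings, together with the matching label set, to each of them,
    returning a weighted sum of the individual loss values."""

    def __init__(self, losses: dict, weights: dict = None):
        super().__init__()
        self.losses = nn.ModuleDict(losses)
        # Default to equal weighting if no weights are given.
        self.weights = weights or {name: 1.0 for name in losses}

    def forward(self, embeddings: torch.Tensor, labels: dict) -> torch.Tensor:
        # Each configured loss receives the embeddings together with
        # the label set registered under the same key.
        total = embeddings.new_zeros(())
        for name, loss_fn in self.losses.items():
            total = total + self.weights[name] * loss_fn(embeddings, labels[name])
        return total
```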

Usage scenarios

  1. Suppose that you are training a model to search for art images. Such images can be related or unrelated depending on the point of comparison. For instance, two images may feature the same object but in stylistically different ways, and these stylistic features may come from the art movement or from the artist's personal legacy. In this case, the ideal embedding space should encode an image with a vector containing all of these pieces of information. However, these features form different sets of labels: our labels may include a visual caption, an artistic description, the artist's name, and the year of creation. If we want to encourage our model to encode all of them, we need a separate loss instance for each one.
  2. We can categorise similarity learning losses into two main groups: pair-based losses such as Contrastive Loss or Triplet Loss, and proxy-based losses such as ArcFace and Proxy-NCA. The former provide feedback on data-to-data relations of similarity in the dataset but are slow to converge. The latter converge much faster, but they are not as good at organizing the intra-class embedding space. A recent approach combines both, i.e., uses the two types of losses simultaneously to get the best of both worlds (see the sketch after this list). A relevant paper: PDF
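Scenario 2 could then be expressed directly with such a wrapper, for instance by combining a pair-based and a proxy-based loss from pytorch-metric-learning (the weights and hyperparameters below are made up purely for illustration):

```python
import torch
from pytorch_metric_learning import losses

# Pair-based loss: slower to converge but gives feedback on
# data-to-data relations; proxy-based loss: converges faster.
pair_loss = losses.TripletMarginLoss(margin=0.2)
proxy_loss = losses.ArcFaceLoss(num_classes=100, embedding_size=128)

multi_loss = MultiObjectiveLoss(
    losses={"triplet": pair_loss, "arcface": proxy_loss},
    weights={"triplet": 1.0, "arcface": 0.5},
)

embeddings = torch.randn(32, 128)            # batch of 32 embeddings
class_labels = torch.randint(0, 100, (32,))  # one label per sample

# In this scenario both losses share a single label set, so the same
# tensor is passed under both keys; in scenario 1 each key would map
# to its own label set (caption cluster, artist, year, ...).
loss = multi_loss(embeddings, {"triplet": class_labels, "arcface": class_labels})
loss.backward()
```

Note that ArcFaceLoss carries trainable proxy weights of its own, so in real training its parameters would also need to be registered with the optimizer.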
crsdvaibhav (Contributor) commented
Hello! I would like to work on this issue, can you assign this to me?

Spinachboul commented

@monatis
That seems like a wonderful idea! I'll definitely go through the paper, and I think it will make these tasks more efficient.
