
Feature Request: Generate embedding for images using /api/embeddings endpoint #4296

Open
Agent-E11 opened this issue May 9, 2024 · 3 comments
Labels
feature request New feature or request

Comments

@Agent-E11

I think this feature would make a lot of sense.
I can add an "images" field to requests to /api/generate when using a multimodal model, so why can't I do the same for requests to /api/embeddings?

curl http://localhost:11434/api/embeddings -d '{
  "model": "llava",
  "images": ["iVBORw0KGgoAAAANSUhEUgAAAG0A..."]
}'
Agent-E11 added the feature request label on May 9, 2024
@PaulCapestany

I'd be very interested in this feature as well. As a potential workaround, I was thinking of just using llava via the /api/generate endpoint to generate a textual description of the image and then making an embedding of that text.
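
Roughly something like the sketch below, using only the existing API. The model names and the base64 string are placeholders; "nomic-embed-text" is just an example, any locally pulled embedding model (or llava itself) could be used for the second step.

# Step 1: ask llava for a textual description of the image
# ("stream": false so the response arrives as a single JSON object)
curl http://localhost:11434/api/generate -d '{
  "model": "llava",
  "prompt": "Describe this image in detail.",
  "images": ["iVBORw0KGgoAAAANSUhEUgAAAG0A..."],
  "stream": false
}'

# Step 2: embed the "response" text returned by step 1
curl http://localhost:11434/api/embeddings -d '{
  "model": "nomic-embed-text",
  "prompt": "<description text from step 1>"
}'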

@zty0510

zty0510 commented May 11, 2024

I'm also hoping for this feature; it would let me get a text embedding and an image embedding from Llava, so that I can calculate the semantic similarity between them.

@kendelljoseph

I think this makes sense too.
