
Sufficient Colab tier/GPU for the "original" notebook? #8

Open
olaviinha opened this issue Apr 5, 2023 · 2 comments

olaviinha commented Apr 5, 2023

I'm using the "original" notebook. At first it ran just fine and dandy for a few days, but since about 2 days ago it's almost always giving me "CUDA out of memory". Other times it doesn't, which makes it even weirder, and I can't find anything that would have changed within the past 2 days.

I'm on the Colab Pro tier (i.e. a T4) and never had Premium GPU selected at any point. Is Premium GPU required or recommended, or do you have any idea what might be causing this behaviour or how to prevent it? I'm out of ideas myself...

Thanks for the great work btw!

@camenduru (Owner)

Hi @olaviinha 👋 do you have this line in your Colab?

!pip install -q torch==1.13.1+cu116 torchvision==0.14.1+cu116 torchaudio==0.13.1 torchtext==0.14.1 torchdata==0.5.1 --extra-index-url https://download.pytorch.org/whl/cu116 -U


olaviinha commented Apr 5, 2023

I do, yes, and I think it started working a little better after those updates. CUDA OOM is no longer the default result, but it still happens a lot...

UPDATE: Ok, I see Colab has changed "GPU class" to "GPU type", which now defaults to something better than a T4 when you have compute units. I didn't know this and just assumed I had a T4 all along (as Colab used to work that way). I think I ran out of compute units around the time it started complaining about CUDA OOM, so it probably dropped from a better GPU down to a T4 at that point – that would explain the behaviour.
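For anyone hitting the same confusion: you can confirm which GPU Colab actually assigned from inside the notebook. A minimal check, assuming PyTorch is already installed (as the pip line above does):

```python
import torch

def describe_gpu() -> str:
    """Return the name and total memory of the visible CUDA device, if any."""
    if not torch.cuda.is_available():
        return "No CUDA device visible"
    props = torch.cuda.get_device_properties(0)
    total_gib = props.total_memory / 1024**3
    return f"{props.name} ({total_gib:.1f} GiB)"

# Prints the assigned GPU, e.g. a Tesla T4 on the default Pro tier
print(describe_gpu())
```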

Anyway, it would be great if somebody came up with a solution to run it smoothly on a T4!
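Not a full solution, but the usual levers for fitting into a T4's roughly 15 GiB are loading weights in half precision and releasing cached allocations between runs. A hedged sketch (the `load_model` call shown in the comment is hypothetical; the real loader depends on the notebook):

```python
import gc
import torch

def free_cuda_memory() -> None:
    # Drop unreachable Python objects first, then return PyTorch's
    # cached blocks to the driver; worth calling between generations
    # once "CUDA out of memory" starts appearing.
    gc.collect()
    if torch.cuda.is_available():
        torch.cuda.empty_cache()

# Half precision roughly halves weight memory, e.g. (hypothetical loader):
# model = load_model(checkpoint_path, torch_dtype=torch.float16)

free_cuda_memory()
```

Note that `empty_cache()` only releases memory PyTorch has cached but is no longer using; it won't help if live tensors themselves exceed the T4's capacity, which is where half precision comes in.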
