PostgreSQL - connection failed: FATAL: remaining connection slots are reserved for non-replication superuser connections #5071
Comments
This is a postgresql configuration issue and not related to this project.
|
But the problem is, this does not happen on Django 3.2, and only appears after migrating to Django 4. |
If that is the problem, why don't you report it as such?
|
I'm using django channels for the websocket connection, and I upgraded these packages.
I'm only using async for the websocket handling, plus celery tasks to clean up the sandbox machines. |
Increasing max_connections to 400 would be a simple solution |
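As a quick way to see the current ceiling and usage before raising it, here is a small sketch that could be run from a Django shell; it assumes the "default" database alias and a PostgreSQL backend.

```python
# Check the server's connection ceiling and current usage from Django.
from django.db import connection

with connection.cursor() as cur:
    cur.execute("SHOW max_connections;")
    print("max_connections:", cur.fetchone()[0])
    cur.execute("SELECT count(*) FROM pg_stat_activity;")
    print("connections in use:", cur.fetchone()[0])
```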
|
Let's say many users are using the websocket at the same time. Is increasing max_connections enough? |
How many workers does celery use? 12? Do the math: you are using two django pods, which already gives you at least 24 workers. However, my guess would be that the catch is in worker-connections. Each worker handles many HTTP connections, and each HTTP connection gets its own database connection. So with your 24 workers and the default worker-connections of 1000, you need up to 24000 database connections for django alone. Here is some more info: https://stackoverflow.com/questions/63471960/gunicorn-uvicorn-worker-py-how-to-honor-limit-concurrency-setting |
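To make that arithmetic concrete, here is a rough sketch using the numbers assumed in this thread (2 pods, 12 workers per pod, the gunicorn worker-connections default of 1000, one database connection per in-flight request):

```python
# Back-of-the-envelope worst case for database connections from Django alone.
pods = 2
workers_per_pod = 12
worker_connections = 1000  # gunicorn default, per worker

worst_case = pods * workers_per_pod * worker_connections
print(worst_case)  # 24000, versus a typical postgres max_connections of 100
```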
How do I check how many celery workers there are? This is what I get so far:
and for more:
|
|
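A minimal sketch for answering the worker-count question above, using the same celery app object the tasks in this project import; the inspect/stats API is standard celery:

```python
# Count running celery workers and report each worker's pool size.
from config import celery_app

stats = celery_app.control.inspect().stats() or {}
print("workers:", len(stats))
for name, info in stats.items():
    # "max-concurrency" is the number of pool processes for that worker
    print(name, info.get("pool", {}).get("max-concurrency"))
```

The CLI equivalent would be something like `celery -A config.celery_app inspect stats`.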
Is there anything I can do apart from increasing max_connections? |
You have at least these 4 options, as far as I can tell:
|
Thank you so much @foarsitter, we will try your suggestions 👍 |
If you resolve your issue by implementing a custom UvicornWorker we would like to receive a PR :) |
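For reference, here is a minimal sketch of that custom-worker idea, assuming the gunicorn + uvicorn worker setup used by the project; the class name and the limit of 50 are illustrative, not an established part of the project:

```python
# Cap in-flight requests (and therefore database connections) per uvicorn worker.
from uvicorn.workers import UvicornWorker


class LimitedConcurrencyWorker(UvicornWorker):
    # CONFIG_KWARGS is forwarded to uvicorn's Config by the worker class.
    CONFIG_KWARGS = {
        **UvicornWorker.CONFIG_KWARGS,
        "limit_concurrency": 50,  # size this against postgres max_connections
    }
```

It would then be passed to gunicorn with something like `-k path.to.LimitedConcurrencyWorker`, choosing the limit so that workers x limit stays below the server's max_connections.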
Sure, happy to do that. For a quick solution we will try to increase the postgres max_connections. |
Finally my issue was solved. It happened because I was using both the @shared_task() and @celery_app.task() decorators:
cookiecutter-django/{{cookiecutter.project_slug}}/{{cookiecutter.project_slug}}/users/tasks.py Lines 1 to 9 in beba4c1
Before

```python
from celery import shared_task
from config import celery_app

@shared_task()
def task_1():
    ...

@celery_app.task()
def task_2_that_called_for_cronjob():
    ...
```

After

```python
from config import celery_app

@celery_app.task()
def task_1():
    ...

@celery_app.task()
def task_2_that_called_for_cronjob():
    ...
```
|
The issue appeared again. |
|
I found some related issues caused by the impact of the Django 4 upgrade combined with using CONN_MAX_AGE.
|
So you set CONN_MAX_AGE to 0 and that solved the issue? |
Nope, I changed it to 0 and the error still occurs. |
The issue was finally resolved after implementing django-db-connection-pool with the settings below:
DATABASES = {"default": env.db("DATABASE_URL")}
DATABASES["default"]["ATOMIC_REQUESTS"] = True
# https://github.com/altairbow/django-db-connection-pool?tab=readme-ov-file#postgresql
DATABASES["default"]["ENGINE"] = "dj_db_conn_pool.backends.postgresql"
DATABASES["default"]["POOL_OPTIONS"] = {
"POOL_SIZE": 10,
"MAX_OVERFLOW": 10,
"RECYCLE": 24 * 60 * 60,
}
# Set to 0 to disable long(er) living connections
# https://docs.djangoproject.com/en/dev/ref/settings/#std-setting-CONN_MAX_AGE
# https://github.com/cookiecutter/cookiecutter-django/issues/5071
# "connection failed: FATAL: remaining connection slots are reserved for non-replication superuser connections" # noqa: ERA001
DATABASES["default"]["CONN_MAX_AGE"] = env.int("CONN_MAX_AGE", default=0)
|
The next error that occurs is the one described in this related issue: psycopg/psycopg#417 |
@agusmakmun is |
For this case, it was resolved after downgrading the Python version. I also consulted ChatGPT with this query:
DATABASES["default"]["ENGINE"] = "dj_db_conn_pool.backends.postgresql"
DATABASES["default"]["POOL_OPTIONS"] = {
"POOL_SIZE": 3,
"MAX_OVERFLOW": 3,
"RECYCLE": 1800, # 30 minutes
}
DATABASES["default"]["CONN_MAX_AGE"] = env.int("CONN_MAX_AGE", default=0) Until deployed to production, no issues so far. Not sure which one is correct, either decrease the pool_size with propper size, or downgrade the python. |
You can certainly remove CONN_MAX_AGE since it defaults to 0. |
Thank you so much.. I was suffering from this issue. |
Guys, FYI, I'm somehow still facing this issue.
And you did increase max_connections? |
What happened?
I'm getting this postgresql error after upgrading the django cookiecutter for my project.
I use many celery tasks and websocket connections, and deploy with k8s.
Similar issues are mentioned on Stack Overflow:
One of the threads mentions this package, but I'm not sure whether it works:
https://github.com/altairbow/django-db-connection-pool
Details
python3 -V: 3.12
docker --version: 3
docker compose --version: 3