Django + Gunicorn + Redis: does not work?
I run a Django server that uses Redis for in-memory caching (to filter out duplicate records) and an integrated Celery process (its tasks insert data into a PostgreSQL database asynchronously). The Django server is integrated with Redis via django-redis-cache, with caching enabled:

CACHES = {
    'default': {
        'BACKEND': 'redis_cache.RedisCache',
        'LOCATION': '/tmp/redis.sock',
    },
}
If I run the server with ./manage.py runserver ip:8000 and start posting data, there is no problem with the get/set operations in Redis; as a single application, the app runs smoothly.
But now I am planning to migrate the Django app to Gunicorn, via the command:

gunicorn myapp:wsgi_app -w 3 -b ip:8000
Since this creates 3 Django worker processes, I believe they are all trying to access the same redis-server for caching.
If I start posting data with this setup, I get the following error:

Internal Server Error: /post/data/
  File "/path/to/app/views.py", line 94, in savedata
    value = cache.get(key)
  File "/usr/local/lib/python2.7/dist-packages/redis_cache/cache.py", line 186, in get
    result = self.unpickle(value)
  File "/usr/local/lib/python2.7/dist-packages/redis_cache/cache.py", line 248, in unpickle
    return pickle.loads(value)
  File "/usr/local/lib/python2.7/dist-packages/django/db/models/base.py", line 1035, in model_unpickle
    return cls.__new__(cls)
TypeError: ('object.__new__(X): X is not a type object (NoneType)', <function model_unpickle at 0x1c49aa0>, ((u'webapi', 'reading'), [], <function simple_class_factory at 0x1c49a28>))
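For context, the view around line 94 of views.py looks roughly like the sketch below. The model name Reading comes from the traceback; the field names, key format, Celery task name and timeout are placeholders for illustration, not the exact production code.

# views.py -- simplified sketch of savedata(); Reading is taken from the traceback,
# everything else (field names, key, task, timeout) is a placeholder.
from django.core.cache import cache
from django.http import HttpResponse

from webapi.models import Reading
from webapi.tasks import insert_reading   # hypothetical Celery task


def savedata(request):
    key = request.POST.get('record_id')    # hypothetical duplicate-detection key
    value = cache.get(key)                 # this is the call that raises under Gunicorn
    if value is None:
        # New record: hand it to Celery, which inserts it into PostgreSQL asynchronously.
        insert_reading.delay(request.POST.dict())
        # Remember the record so repeated posts are filtered out as duplicates.
        # Note that caching a model instance means it gets pickled into Redis.
        cache.set(key, Reading(**request.POST.dict()), timeout=300)
    return HttpResponse('ok')

The model instance stored by cache.set is what goes through pickle.loads / model_unpickle when a worker later reads it back.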
If I check the redis-server activity with redis-cli monitor, there are no records in it after I start using Gunicorn.
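One thing that can be checked is whether the socket is reachable outside Django at all; a minimal probe with the redis-py client (assuming it is installed, which django-redis-cache requires) would be:

# probe the same unix socket the cache points at; the key/value are arbitrary test data
import redis

r = redis.StrictRedis(unix_socket_path='/tmp/redis.sock')
r.set('probe-key', 'probe-value')
print(r.get('probe-key'))   # expect 'probe-value' back if the socket works
print(r.dbsize())           # how many keys the server currently holds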
Is there a smart workaround to keep Redis working with the Gunicorn workers as well?