09:04:40 GMT Hi! I am trying to index the whole of Wikipedia using Redis. Since redis is in-memory, this won't be possible with only one node. Can I put up a redis cluster on one Windows laptop and one Linux laptop? Will one 16 GB machine and two 4 GB machines suffice?
10:23:43 GMT Is Redis ACID compliant?
10:35:45 GMT no
10:37:38 GMT it has no transactions that you can roll back, writes to disk only happen periodically, etc
10:47:26 GMT Is my manager also mistaken in that he today proclaimed Redis to be a database?
10:54:36 GMT it's an in-memory key-value store. kind of qualifies as a database in my book. see the supported commands to get a feeling for what it does: http://redis.io/commands/
10:54:52 GMT it's not SQL if you mean that
10:55:20 GMT it's like memcached on steroids
13:00:07 GMT Hi, I run a Django website on AWS. The redis server keeps shutting down and I start getting errors: "connecting to localhost:6379. connection refused". I restart the server using sudo service redis-server restart, but after 24-30 hrs it shuts down again.
13:00:29 GMT error in the error log: Unable to set the max number of files limit to 10032 (Operation not permitted), setting the max clients configuration to 3984.
13:00:38 GMT any help!
13:02:43 GMT check the system log for crash reasons, check for coredumps
13:02:53 GMT my first guess would be it ran out of memory
13:03:24 GMT the message you got from the log doesn't really pose a problem if you don't have >3984 concurrent clients
13:04:44 GMT set a maxmemory limit and an eviction policy, also set up monitoring of the used RAM
13:21:32 GMT minus: thanks! I think the maxmemory configuration should help.
13:21:58 GMT the default value of maxmemory is 0, I think.
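[Editor's note on the "no transactions you can roll back" remark above: Redis does have MULTI/EXEC, which queues commands and runs them atomically, but once EXEC starts there is no ROLLBACK. A minimal redis-cli session illustrating this, assuming the example key `counter` does not yet exist:]

```
127.0.0.1:6379> MULTI
OK
127.0.0.1:6379> INCR counter
QUEUED
127.0.0.1:6379> EXEC
1) (integer) 1
```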
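[Editor's note: the maxmemory-plus-eviction-policy advice above amounts to two lines in redis.conf. The 6gb figure is only an example; pick a value well below the machine's RAM so the fork used for background saves has headroom. The same settings can be applied to a running server with CONFIG SET.]

```
# redis.conf -- example values, tune to your machine
maxmemory 6gb
# evict least-recently-used keys among those that have a TTL set
maxmemory-policy volatile-lru
```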
13:22:01 GMT config get maxmemory
13:22:01 GMT your system logs should tell you if it really was OOM kills
13:22:09 GMT yeah, no limit by default
13:22:37 GMT what does maxmemory 0 mean for a 64 bit linux machine with 8 gigs of ram and no swap?
13:23:15 GMT minus: it'll use whatever the system has to offer?
13:24:02 GMT it'll use all ram
13:24:14 GMT i think
13:25:06 GMT the system log (/var/log/syslog) shows OOM, I think: Out of memory: Kill process 2641 (redis-server) score 522 or sacrifice child
13:25:18 GMT yep
13:26:04 GMT before I upgrade ram, can I handle the issue for some time... any solution better than restarting redis-server using cron?
13:28:43 GMT if you use redis as a cache, setting maxmemory and an eviction policy (by default there's none, it will just reject writes) should be fine; you'll probably notice if load goes up because you cache too little
13:34:14 GMT minus: # The default is:
13:34:15 GMT #
13:34:15 GMT # maxmemory-policy volatile-lru
13:34:31 GMT ah
13:34:44 GMT i think newer versions don't have a default anymore
13:34:55 GMT or rather no eviction as the default
13:35:22 GMT volatile-lru should be fine (read the docs otherwise)
13:35:47 GMT minus: Thanks!
15:26:35 GMT Do redis settings set through the cli persist by default?
15:45:20 GMT Jaxkr: AFAIK, using config set does not cause redis.conf to be re-written. You need to take care of updating the config file yourself.
15:45:34 GMT netcarver, great.
15:45:40 GMT I wanted to make sure I was on optimized defaults.
15:49:38 GMT Running redis 2.8.4, have a server currently in production, it can't persist to disk. The log is flooded with http://pastebin.com/raw/fVKY4iQA. I am not sure if there is a memory or disk problem preventing the fork, but I have to get the items out of redis. No new items are coming into redis at this time. Can I disable saving live? Or should I try to save manually? To a different file than the currently configured file? I'm not sure what my o
15:49:38 GMT am busy reading.
But I can't get this wrong, so I am hesitant to guess at the best course of action.
15:52:06 GMT I'm trying to read redis with logstash, but it's blocking because the persist-to-disk is failing. I looked into migrating or copying items from the failing redis to another, but migrate tries to remove the local copy, which will fail, and copy isn't supported until version 3.
16:12:01 GMT GiantEvolving: it says it can't allocate memory; do you have enough memory free? if you don't have overcommit enabled you'll need at least as much free memory as redis already uses
16:13:21 GMT I didn't know about that second part, that it uses as much memory to fork as it's currently using. That's expensive, and I don't have enough. It's using almost half the memory on the system.
16:14:43 GMT http://redis.io/topics/admin see the second bullet point
16:15:06 GMT sysctl vm.overcommit_memory=1 then it should work
16:16:22 GMT background being that a fork requires the same amount of memory, and without overcommit linux would reserve that much; with overcommit it just reserves as much as necessary, since everything that isn't changed is only kept once in memory
16:19:05 GMT I'll have to add some swap to the host too to make sure that I don't OOM anything. Are there other options for reading the messages? I'm looking at the python library but can't find good docs on the methods.
16:21:03 GMT if you have about half your RAM free and aren't extremely write-heavy you'll be fine without swap, i guess (though a swapon doesn't hurt)
16:21:22 GMT reading messages?
16:22:32 GMT reading the log messages, that is, the contents of the list stored at key 'logstash'. I think 'lrange' is what I want and I can just loop through all the messages and send them to a different redis.
16:23:50 GMT lrange if you want to read them but not modify, yeah
16:26:05 GMT Could you expand on the "but not modify"?
16:31:24 GMT lrange just gives you a copy but leaves them in redis
16:32:28 GMT OK cool, I follow. So I have at least two options now. Thank you so much for your help!
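[Editor's note on the 15:45 exchange about CONFIG SET not persisting: since Redis 2.8 there is also CONFIG REWRITE, which writes the currently running configuration back into the redis.conf the server was started with, for example:]

```
redis-cli config set maxmemory-policy volatile-lru   # change the live setting
redis-cli config rewrite                             # persist it back to redis.conf (2.8+)
```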
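[Editor's note on the 16:15 overcommit fix: setting it with sysctl only lasts until reboot; making it permanent also needs a line in /etc/sysctl.conf:]

```
# apply immediately
sysctl vm.overcommit_memory=1
# persist across reboots
echo 'vm.overcommit_memory = 1' >> /etc/sysctl.conf
```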
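[Editor's note: the lrange-and-loop approach discussed at 16:22 can be sketched in Python. This is only a sketch under assumptions: redis-py-style clients exposing `lrange`/`rpush`, and the source list living at key 'logstash' as in the conversation. Reading in chunks avoids pulling the whole (large) list in a single reply, and LRANGE never deletes anything from the failing source instance.]

```python
def copy_list(src, dst, key, chunk=1000):
    """Copy the list stored at `key` from src to dst in chunks.

    Uses LRANGE, so the source list is read but never modified --
    nothing is removed from the (failing) source instance.
    """
    start = 0
    while True:
        # LRANGE's end index is inclusive, hence chunk - 1
        batch = src.lrange(key, start, start + chunk - 1)
        if not batch:
            break
        dst.rpush(key, *batch)  # append to the destination in original order
        start += chunk
```

[With redis-py this would be called as e.g. `copy_list(redis.Redis(), redis.Redis(host='other-host'), 'logstash')`; client construction is left out since the connection details vary.]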