00:23:52 GMT How can the original TTL value be retrieved for a key? That is, if a key was stored with 'setex foo 1000 "blah"', how can the original TTL of 1000s be retrieved from Redis?
06:33:14 GMT help
06:33:32 GMT hello
06:33:34 GMT hello
06:34:05 GMT I am not able to get high operations per second on high-end SMP servers
06:34:24 GMT can anybody please help
07:00:28 GMT hi. https://gist.github.com/etehtsea/db6715036e3f0a30d93721bc9f8775da What am I doing wrong? HSCAN ignores the COUNT option
07:09:11 GMT same behaviour for ZSCAN
07:11:36 GMT never mind, found the issue: https://github.com/antirez/redis/issues/1723
08:28:34 GMT <_Wise_> hi *
08:32:29 GMT <_Wise_> we are using Redis 3.0.5 and recently deleted *zillions* of keys
08:32:56 GMT that's a very old Redis
08:33:01 GMT just an observation
08:33:07 GMT <_Wise_> RAM usage remained unchanged because of fragmentation (expected behaviour)
08:33:27 GMT <_Wise_> is there a way to defragment *other* than to restart?
08:33:37 GMT <_Wise_> Habbie: yes, that's old, I know
08:34:49 GMT <_Wise_> used_memory_human:36.89G
08:34:49 GMT <_Wise_> used_memory_peak_human:49.12G
08:34:51 GMT <_Wise_> mem_fragmentation_ratio:1.32
08:35:27 GMT <_Wise_> fragmentation was 1.00 before the deletion
08:35:59 GMT I see; restarting that thing would take a while
08:36:20 GMT <_Wise_> yes, like 20 minutes or so
08:36:41 GMT <_Wise_> to play the AOF file back
08:38:22 GMT if you have enough RAM you could replicate to a new instance and switch to that.
dunno if that's a great idea, though
08:45:21 GMT <_Wise_> minus: hmm, that's unfortunately not an option for us right now
08:45:49 GMT <_Wise_> I think I'll restart the slave, fail over, and restart the former master
08:46:04 GMT sounds reasonable
08:46:29 GMT <_Wise_> that will be a 30-second interruption
08:46:56 GMT <_Wise_> I was dreaming of a 'defrag' option
08:47:23 GMT you could maybe set the dump dir to a tmpfs location and speed it up
08:47:30 GMT otoh the bottleneck is probably CPU
08:47:45 GMT not that it matters for the failover
08:54:29 GMT <_Wise_> minus: about the tmpfs, well, I'll need RAM for that :)
08:54:43 GMT <_Wise_> which I don't have
08:55:11 GMT <_Wise_> thanks for the support, I'll do the failover trick
10:28:57 GMT there will be an active-defragmentation option in upcoming Redis
15:16:25 GMT <_Wise_> badboy_: thanks for the info!
15:16:35 GMT <_Wise_> I'll check it out when it is released
18:35:47 GMT I'm using the hiredis-0.13.3 C lib and trying to use redisGetReply, but it is freezing the code. Code: https://gist.github.com/mfilipe/ff1d45e09119971e351fd37fb96340ca
18:35:56 GMT what is wrong there?
18:36:45 GMT why is it freezing?
18:38:16 GMT I'm spinning up a Redis server when I start my automated tests, then shutting it down afterwards. I still need to deal with it already being started. But is this config file sufficient? It seems like everything I need: https://gist.github.com/mustmodify/1f62b0c06449a1759a15e65aef63cbde
19:19:26 GMT mustmodify: why not skip daemonizing and the pid file, track the process yourself, and kill it when you're done?
22:07:17 GMT question: I want to start out on AWS with a small instance where Redis has about a gig, but if the website gets traffic I'll eventually want a bigger machine. If I use Redis Cluster, can I add a machine and then shut down the smaller one without losing data? Will the data from the smaller machine move to the bigger one?
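On the opening question (00:23:52): Redis does not remember the TTL a key was created with; TTL/PTTL only report the time remaining. The usual workaround is to record the original TTL yourself, e.g. in a companion key or alongside the value. A minimal sketch of the bookkeeping pattern, using a plain dict to stand in for Redis (all names here are illustrative, not redis-py API):

```python
import time

# In-memory stand-in for the two Redis operations used below.
store = {}  # key -> (value, absolute_expiry, original_ttl)

def setex(key, ttl_seconds, value):
    # Alongside the expiry, remember the TTL the key was created with;
    # Redis itself only tracks the remaining time.
    store[key] = (value, time.time() + ttl_seconds, ttl_seconds)

def ttl(key):
    # Remaining lifetime, as Redis's TTL command would report it.
    return int(store[key][1] - time.time())

def original_ttl(key):
    # The extra bookkeeping Redis lacks: the TTL passed to SETEX.
    return store[key][2]

setex("foo", 1000, "blah")
print(original_ttl("foo"))  # 1000
```

Against a real server, the same idea is SETEX of a companion key (e.g. `foo:ttl`) with the same expiry, or embedding the TTL in the stored value.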
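On the HSCAN/COUNT thread (07:00:28): the resolution in the linked issue is that COUNT is only a hint, and keys small enough to use the compact ziplist encoding are returned in a single reply, so COUNT appears to be ignored. The encoding thresholds are configurable; an illustrative redis.conf fragment with the defaults of that era:

```
# Hashes at or below these limits are stored ziplist-encoded; HSCAN on
# such keys returns every field in one batch and the COUNT hint has no
# effect. The same applies to ZSCAN and small sorted sets.
hash-max-ziplist-entries 128
hash-max-ziplist-value 64
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
```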
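The active-defragmentation option mentioned at 10:28:57 shipped in Redis 4.0 as `activedefrag`; it requires a jemalloc-based build. A sketch of the relevant settings (defaults shown, apart from turning the feature on):

```
# Enable active defragmentation (off by default)
activedefrag yes
# Do nothing below this amount of fragmented memory
active-defrag-ignore-bytes 100mb
# Start defragmenting above this fragmentation percentage
active-defrag-threshold-lower 10
```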
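On the redisGetReply freeze (18:35:47): in a blocking hiredis context, redisGetReply waits until a reply arrives on the socket, so it must be paired one-to-one with commands previously queued via redisAppendCommand; a call with nothing outstanding blocks forever, which looks like a hang. A minimal sketch of the intended pipelined usage (server address and commands are illustrative; error handling trimmed):

```c
#include <stdio.h>
#include <hiredis/hiredis.h>

int main(void) {
    redisContext *c = redisConnect("127.0.0.1", 6379);
    if (c == NULL || c->err) return 1;

    /* Queue commands locally -- nothing is sent or read yet. */
    redisAppendCommand(c, "SET k v");
    redisAppendCommand(c, "GET k");

    /* Call redisGetReply exactly once per appended command; one extra
       call would block indefinitely waiting for a reply that never comes. */
    redisReply *reply;
    for (int i = 0; i < 2; i++) {
        if (redisGetReply(c, (void **)&reply) != REDIS_OK) break;
        freeReplyObject(reply);
    }
    redisFree(c);
    return 0;
}
```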
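For the automated-test question (18:38:16), the settings a disposable test instance usually needs, in line with the 19:19:26 suggestion: run in the foreground so the test harness owns the process, use a non-default port, and disable persistence. An illustrative redis.conf fragment (values are assumptions, not taken from the linked gist):

```
# Stay in the foreground so the test runner can track and kill the process
daemonize no
# Non-default port to avoid clashing with an already-running instance
port 6380
# Tests only need loopback
bind 127.0.0.1
# Disable persistence entirely; test data is disposable
save ""
appendonly no
```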
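For the scaling question at the end (22:07:17): yes — Redis Cluster lets you add a node and migrate hash slots to it, and the old node can be removed once it owns no slots, so the data moves rather than being lost. With the modern redis-cli (Redis 5+; the 3.x-era tool was redis-trib.rb) the steps look roughly like this, with hosts and the node ID as placeholders:

```shell
# Add the new, larger node to the existing cluster
redis-cli --cluster add-node new-host:6379 old-host:6379

# Move the hash slots from the old node to the new one
# (interactive prompt asks for source/target node IDs and slot count)
redis-cli --cluster reshard old-host:6379

# Once the old node holds no slots, remove it
redis-cli --cluster del-node old-host:6379 <old-node-id>
```

For a single small-to-big move without Cluster, the replicate-then-failover approach discussed earlier in the log is simpler.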