00:55:44 GMT hey, is it possible to know which client(s) wrote/updated which keys?
00:55:59 GMT (I would assume it is not possible)
05:59:17 GMT I am trying to build a Redis Cluster by hand. I have had the nodes MEET, and I added a single slot to each node. All seems to look ok, but when I use redis-cli to set a key I get
05:59:18 GMT (error) CLUSTERDOWN The cluster is down
05:59:42 GMT I can't seem to figure out how to debug this. I have no slaves, I do have three masters. I list the nodes on each and all see each other.
06:14:34 GMT I think I figured it out. cluster-require-full-coverage defaults to yes. I did not use all the slots, only 1 on each node (trying to construct a simple example). I set it to no and now I can calculate each hash per key and place them. Thanks for listening :)
13:19:19 GMT Hello. Has anybody here any experience running a single redis instance with 1 TB of memory? In this case an x1.16xlarge instance
13:21:41 GMT I've had instances up to ~100GB effective mem
13:22:10 GMT it's not a good idea ;)
13:36:53 GMT xiaomiao: why not?
13:37:13 GMT Inge-: just purging data takes a long time at that size
13:37:30 GMT and if you don't persist it (performance reasons?) then re-filling takes a very long time
13:38:02 GMT also, it's easy to be CPU-limited ... it might be easier to run multiple smaller redis instances
13:44:38 GMT yep, seen that also. especially I think that master-slave replication is quite a futile idea at that size
13:46:42 GMT what would you store in that 1TB?
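[Editor's note] The "calculate each hash per key" step above refers to Redis Cluster's key-to-slot mapping: CRC16 of the key, modulo 16384, with `{hash tag}` substrings hashed instead of the whole key when present. A from-scratch sketch for illustration (the real implementation lives in the Redis source; the function names here are my own):

```python
# Sketch of Redis Cluster's key -> slot mapping (CRC16 mod 16384).

def crc16(data: bytes) -> int:
    """CRC-16/XMODEM (poly 0x1021, init 0), the CRC16 variant the cluster spec names."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def keyslot(key: str) -> int:
    """Map a key to one of the 16384 cluster slots, honoring {hash tags}."""
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end > start + 1:  # non-empty tag: hash only the tag contents
            key = key[start + 1:end]
    return crc16(key.encode()) % 16384
```

Keys sharing a hash tag (e.g. `{user1000}.following` and `{user1000}.followers`) land in the same slot, which is what makes multi-key operations possible in a cluster.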
13:47:20 GMT we have one very special dataset which would store user activity in a rolling two-week window
13:48:32 GMT we will most likely end up using a cluster with a lot smaller instances, but the amount of data required will still push the individual redis instance size quite high
13:49:07 GMT I've seen issues with master-slave replication at ~30 GiB redis instance sizes, but some of those were in earlier 3.x releases, so all the bugs related to that might have been fixed by now
14:26:19 GMT does SCAN work on a cluster? i.e. will it return all keys of the cluster or just the node i'm asking?
14:31:45 GMT aep, http://paluch.biz/blog/162-iterate-over-all-keys-in-a-redis-cluster.html suggests no
14:33:57 GMT hmm right. probably makes sense
14:52:46 GMT Garo_: at that data size it might be more cost-effective to use other technology
14:52:57 GMT memory is very expensive to scale up :)
14:53:48 GMT xiaomiao: I know. we already use redis and that's the only pure in-memory database that we currently offer to our devs, so that's why we are looking at it first. feel free to suggest alternatives
14:54:54 GMT Garo_: depends on the structure of your data (e.g. size of single values)
14:55:12 GMT postgres works surprisingly well :)
15:27:27 GMT Is there a built-in method to append only if the value being appended doesn't already exist in the value of the key you are appending to?
15:28:12 GMT ex: 'tpp' => '1,2,3,4,5,6' | APPEND tpp ",4"
15:28:25 GMT That would result in a value of "1,2,3,4,5,6,4"
15:28:37 GMT but I don't want duplicates in that value
15:49:47 GMT nevermind - bad design
16:27:40 GMT Use a set
18:09:02 GMT Hello everybody. Can anyone tell me why I don't get the same text when I run DUMP directly or when I store its output in a variable? http://picpaste.com/pics/snaggy-UYnCfvWN.1479751675.png
18:09:40 GMT marcv: try with --raw
18:10:09 GMT --raw is implied if stdout is not a tty iirc
18:12:21 GMT Hey minus. What command is --raw an argument of?
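[Editor's note] "Use a set" answers the APPEND question above: SADD silently ignores members that are already present, so duplicates never appear. A hypothetical session, reusing the `tpp` key from the question:

```
127.0.0.1:6379> SADD tpp 1 2 3 4 5 6
(integer) 6
127.0.0.1:6379> SADD tpp 4
(integer) 0
```

SADD returns the number of members actually added, so the second call reporting 0 confirms the duplicate was skipped. Note that a set is unordered; if insertion order matters, a sorted set (ZADD) is the usual alternative.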
For instance I don't get any change with dump=$(redis-cli --raw DUMP absent-tokens:5779) && echo "$dump"
18:12:52 GMT when you use redis-cli without a subshell
18:13:52 GMT well, then it doesn't work :-/
18:14:56 GMT do you want the escaped or unescaped content?
18:15:10 GMT OK, the option is --no-raw, not --raw. This way it works
18:15:12 GMT :-)
18:15:33 GMT I wanted the human readable form
18:15:33 GMT normally you wouldn't want the escaped output
18:15:43 GMT for debugging?
18:16:02 GMT for storing in a file to perform a batch restore
18:16:05 GMT you can always pipe it through hexdump -C too :)
18:16:12 GMT ah
18:16:50 GMT i'd rather write a python (or so) script instead of a bash one for that
18:17:17 GMT ok, why?
18:18:00 GMT because it deals with binary data without problems and provides a nicer interface to redis
18:18:41 GMT it's also likely to be faster because it doesn't have to spawn a redis-cli process for every key
18:19:45 GMT well I was intending to work around the "one redis-cli process for every key" problem by using a batch file, precisely, with redis-cli --pipe
18:20:11 GMT But it's true that I could have it done much quicker in a PHP script...
18:20:13 GMT oh yeah, that'd work
18:20:56 GMT it's just that more complex things are harder to get right in bash than in a real scripting language
18:21:08 GMT I think you just convinced me :-)
18:21:47 GMT that's 100% true in my case. I'm fighting my way out with bash, which I don't know well at all, instead of doing it quickly in php.
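[Editor's note] The script being discussed (DUMP every key, then batch-RESTORE) is easier in a scripting language precisely because DUMP payloads are binary. A sketch assuming redis-py-style clients, i.e. objects exposing `scan_iter`, `dump`, and `restore`; the function name `migrate_keys` and its parameters are illustrative, not from the conversation:

```python
def migrate_keys(src, dst, pattern="*", ttl_ms=0, replace=True):
    """Copy keys from src to dst via DUMP/RESTORE.

    src and dst are assumed to be redis-py-style clients exposing
    scan_iter(match=...), dump(key) -> bytes | None, and
    restore(key, ttl, payload, replace=...).
    """
    moved = 0
    for key in src.scan_iter(match=pattern):
        payload = src.dump(key)
        if payload is None:  # key expired/vanished between SCAN and DUMP
            continue
        dst.restore(key, ttl_ms, payload, replace=replace)
        moved += 1
    return moved
```

With the real `redis` library this might be called as `migrate_keys(redis.Redis(port=6379), redis.Redis(port=6380), pattern="absent-tokens:*")`; the binary payloads pass through untouched, which is exactly where the bash version needed the `--no-raw` escaping workaround.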
18:22:06 GMT Thanks for the enlightenment :-)
18:22:13 GMT ;D
19:46:36 GMT Hi, I'm trying to connect snort to Barnyard2 to syslog-ng to redis
19:47:01 GMT I'm struggling getting syslog-ng to connect to redis
20:16:54 GMT Hello
21:15:34 GMT as a followup to earlier questions about storing v4/v6 prefix data in redis
21:15:39 GMT here's some working code https://github.com/PowerDNS/redis-ipprefix
21:23:13 GMT that doesn't exactly look nice :D
21:32:21 GMT minus, suggestions welcome :)
21:33:06 GMT interesting
21:34:54 GMT Habbie: redis modules seem to be rather active with redis labs already btw
21:35:00 GMT minus, cool
21:36:43 GMT would be nice to see 4.0 coming with support for that, can't wait to code some C++ for that (yes, not C)
21:36:59 GMT my dayjob is C++ and Lua
21:37:07 GMT you can imagine my excitement for what you are saying ):
21:37:09 GMT eh
21:37:11 GMT :)
21:38:19 GMT I need to start using Redis 3.2 at my company
21:38:29 GMT I notice AWS uses Redis 3.2 for ElastiCache Redis already
21:38:53 GMT I was watching the releases, waiting for major fixes from refactoring to stop appearing in the changes :)
21:38:59 GMT hehe
21:39:05 GMT i'm on 2.8 here i think
21:39:09 GMT but we can upgrade if i need it
21:39:15 GMT oh man we switched from 2.8 to 3.0 a few months back
21:39:20 GMT it was an easy switch fortunately :)
21:39:27 GMT any interesting new features?
21:39:31 GMT i haven't looked
21:39:37 GMT 3.2 has geo features
21:39:44 GMT ah yes
21:39:51 GMT i've pondered subverting those for my needs
21:39:53 GMT I think we saw some performance improvements in some areas and some memory usage improvements
21:39:58 GMT heh
21:40:04 GMT it was a while ago, let me see if I made any graphs, surely I had to write about it
21:40:10 GMT they probably suffer from the same precision limitations though
21:40:28 GMT Note: there is no GEODEL command because you can use ZREM in order to remove elements. The Geo index structure is just a sorted set.
21:40:31 GMT yup!
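[Editor's note] The quoted note (it is from the GEOADD documentation) is easy to verify in a session; the key name `points` and member "Palermo" here are made up for illustration:

```
127.0.0.1:6379> GEOADD points 13.361389 38.115556 "Palermo"
(integer) 1
127.0.0.1:6379> TYPE points
zset
127.0.0.1:6379> ZREM points "Palermo"
(integer) 1
```

TYPE reporting `zset` is the whole point of the note: a geo key is a plain sorted set, so every sorted-set command (ZREM, ZSCAN, ZRANGE, ...) works on it.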
21:40:39 GMT ah, right
21:40:53 GMT man, need more time to do cool things
21:41:22 GMT 127.0.0.1:6379> geoadd foo 100 80 bar
21:41:26 GMT zscan
21:41:28 GMT 2) "4317375937111403"
21:42:00 GMT that's about 52 bits
21:42:13 GMT yeah, 51.9
21:42:43 GMT so that's some clever encoding that flattens lat+lon into a single dimension
21:42:57 GMT http://redis.io/commands/geohash
21:44:06 GMT ah, https://en.wikipedia.org/wiki/Z-order_curve
21:46:40 GMT main thing I notice looking back is better memory utilization, at least just watching the usage from the system's perspective. performance-wise, I think RDB snapshot time improved a little bit for us. most of the gains were moving from spinning drives to SSDs though. also we tuned the ZSETs to be efficient for up to 1024 elements instead of the default 128, which probably made a nice impact. I think Redis
21:46:46 GMT 3.0 had some Sentinel improvements if I recall correctly, it just made sense to move off of 2.8 to latest 3.0 at that point
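[Editor's note] The ~52-bit score seen in the zscan output above comes from interleaving the bits of latitude and longitude, i.e. the Z-order curve linked above, with 26 bits per dimension. A simplified Python sketch of the idea; it is not guaranteed to be bit-for-bit identical to Redis's geohash encoding (bit ordering and rounding may differ), but it shows why two coordinates fit in one sorted-set score:

```python
# Z-order (Morton) encoding of lon/lat into a single 52-bit integer,
# 26 bits per dimension, mirroring the ranges Redis's geo commands use.
GEO_LAT_MIN, GEO_LAT_MAX = -85.05112878, 85.05112878
GEO_LON_MIN, GEO_LON_MAX = -180.0, 180.0
STEP = 26  # bits per dimension -> 52-bit combined score

def _interleave(x: int, y: int) -> int:
    """Spread bits of x into even positions and y into odd positions."""
    out = 0
    for i in range(STEP):
        out |= ((x >> i) & 1) << (2 * i)
        out |= ((y >> i) & 1) << (2 * i + 1)
    return out

def encode(lon: float, lat: float) -> int:
    """Quantize each coordinate to 26 bits, then interleave.
    Inputs are assumed strictly inside the ranges above."""
    lon_off = int((lon - GEO_LON_MIN) / (GEO_LON_MAX - GEO_LON_MIN) * (1 << STEP))
    lat_off = int((lat - GEO_LAT_MIN) / (GEO_LAT_MAX - GEO_LAT_MIN) * (1 << STEP))
    return _interleave(lat_off, lon_off)

def decode(score: int) -> tuple:
    """Invert encode(), returning the midpoint of the quantization cell."""
    lat_off = lon_off = 0
    for i in range(STEP):
        lat_off |= ((score >> (2 * i)) & 1) << i
        lon_off |= ((score >> (2 * i + 1)) & 1) << i
    lon = GEO_LON_MIN + (lon_off + 0.5) / (1 << STEP) * (GEO_LON_MAX - GEO_LON_MIN)
    lat = GEO_LAT_MIN + (lat_off + 0.5) / (1 << STEP) * (GEO_LAT_MAX - GEO_LAT_MIN)
    return lon, lat
```

Because nearby points share high-order bits, a range scan over scores roughly corresponds to a spatial neighborhood, which is what makes GEORADIUS-style queries possible on top of a sorted set.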