00:00:43 GMT ddoom_, how about just taking the performance hit during the import, relying on swap?
00:02:45 GMT Habbie: that would work
11:37:27 GMT is there some command i can send redis to make my connection block until a specific key changes value?
11:37:51 GMT something more specific than monitor :)
11:46:31 GMT Habbie: no. you can only block on lists
11:46:38 GMT thought so
11:46:40 GMT while sleep it is
11:46:46 GMT until i bother to make it work with monitor
11:46:48 GMT thanks
11:46:55 GMT monitor sounds like a bad solution
11:47:04 GMT _maybe_ blocking on a list could work?
11:47:10 GMT no, it's for a test suite
11:47:19 GMT oO
11:47:23 GMT don't want to turn everything into lists for that
11:47:29 GMT also don't want to pop things from the test suite
11:47:50 GMT what i want to do is talk to an API that will eventually cause changes in redis
11:47:57 GMT and then test another daemon that needs to respond to those changes
11:48:02 GMT but i don't want to test the second one too soon
11:48:53 GMT suggestions welcome of course
11:50:16 GMT Habbie: how about keyspace notifications?
11:50:22 GMT go on
11:50:40 GMT http://redis.io/topics/notifications
11:50:42 GMT ah i see
11:51:37 GMT minus, that looks sensible, thanks!
11:54:28 GMT oh right, we have those :D
14:12:43 GMT Anyone know how redis handles null bytes in key names?
14:17:05 GMT try it
14:17:12 GMT as far as i know it should just work
14:18:05 GMT yes, just works
14:21:10 GMT redis doesn't care
14:22:43 GMT as long as it's less than 512MiB
14:27:11 GMT does anybody have a clever idea for storing lots of ipv6 prefixes in redis
14:27:18 GMT and being able to find them again based on any IP inside a prefix?
14:32:01 GMT Habbie: sorted sets, if you don't require very accurate precision
14:32:08 GMT i looked at those
14:32:12 GMT for non-overlapping v4 they are perfect
14:32:18 GMT but they're not going to cut it for v6
14:32:25 GMT unless i go three sets deep
14:32:31 GMT huh
14:32:40 GMT i require perfect precision
14:32:57 GMT precision was the problem with it?
14:33:00 GMT a zset gives me precision for up to 53 bits
14:33:05 GMT aye
14:33:09 GMT so if i cut the v6 address into 3 parts
14:33:38 GMT chains of hashes then?
14:34:03 GMT i guess i'd build something myself for the specialized use case
14:34:17 GMT hmm maybe hash the first 75 bits and point to a zset for the remaining 53
14:34:33 GMT minus, build in what way? a redis extension?
14:36:32 GMT well, those are only coming in redis 4
14:36:36 GMT ah yes
14:36:43 GMT well we build our own packages already
14:36:46 GMT but i guess 4 isn't released yet
14:36:51 GMT i meant something completely independendent
14:37:01 GMT too much den in there
14:37:10 GMT den den den
14:37:12 GMT right
14:37:40 GMT would be cool as a redis module tho
14:37:44 GMT very
14:37:51 GMT i'll entertain the thought for a bit
14:38:40 GMT if it's not needed for something super productive redis-git may just do the job ;D
14:39:00 GMT assuming i still write that extension :)
14:39:07 GMT but it will be very much in production
14:39:20 GMT all data can be regenerated from outside redis though
14:40:31 GMT you could preprocess that data into a blob and use a redis lua script to do what you'd do in a module
14:40:50 GMT oh i would involve lua definitely, even if i do the 3/4 deep zsets or whatever
14:40:53 GMT gotta shave that latency
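The keyspace-notification suggestion earlier in the log (11:50) is enough to build the "block until a key changes" test helper. A minimal sketch, assuming the redis-py client; the key name job:state is made up for illustration, and notifications are off by default so they must be enabled first:

    import redis

    r = redis.Redis()
    # Notifications are disabled by default; K = keyspace channel, A = all event types.
    r.config_set('notify-keyspace-events', 'KA')

    p = r.pubsub()
    p.subscribe('__keyspace@0__:job:state')  # fires on any change to the (hypothetical) key job:state

    # ... call the API that will eventually change job:state ...

    for message in p.listen():
        if message['type'] == 'message':
            # message['data'] is the event name, e.g. b'set' or b'del'
            break  # the key changed; now it's safe to exercise the second daemon

Since pub/sub messages are fire-and-forget, a disconnected subscriber misses events; that's acceptable for a test suite, less so for production.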
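Returning to the prefix lookup: for the non-overlapping IPv4 case the log calls "perfect", the usual sorted-set trick is to score each prefix by its last address, then ZRANGEBYSCORE upwards from the query IP and verify the hit. 32-bit addresses fit well inside the 53 bits of integer precision a zset score offers; IPv6 would need the 128-bit address split across several structures, as discussed above. A sketch, assuming redis-py 3.x and an invented key name v4prefixes:

    import ipaddress
    import redis

    r = redis.Redis()

    def add_prefix(cidr):
        net = ipaddress.ip_network(cidr)
        # Score by the *last* address of the prefix; store the prefix itself
        # as the member so a lookup can verify the match.
        r.zadd('v4prefixes', {cidr: int(net.broadcast_address)})

    def find_prefix(ip):
        addr = int(ipaddress.ip_address(ip))
        # First prefix whose last address is >= ip ...
        hits = r.zrangebyscore('v4prefixes', addr, '+inf', start=0, num=1)
        if hits:
            net = ipaddress.ip_network(hits[0].decode())
            # ... is a real match only if the ip actually falls inside it.
            if int(net.network_address) <= addr:
                return net
        return None

    add_prefix('192.0.2.0/24')
    print(find_prefix('192.0.2.57'))    # 192.0.2.0/24
    print(find_prefix('198.51.100.1'))  # None

This only works because the prefixes don't overlap; overlapping prefixes would need the longest-match logic the chat goes on to sketch.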
14:41:12 GMT need high throughput?
14:41:26 GMT 250k-500k qps of DNS, yes
14:41:37 GMT that's nice
14:41:52 GMT if i achieve it :)
14:41:56 GMT managed it with cdb before
14:42:05 GMT but cdb lookups are almost free
14:42:07 GMT i'd probably go towards having that blob in-memory in the program instead of having to roundtrip to redis
14:42:21 GMT and maybe use redis to distribute updates if necessary
14:42:30 GMT well, it's several million entries with 10-1000 updates per second
14:42:43 GMT could certainly use pubsub or blpop to distribute, this idea has come up before
14:42:44 GMT oh
14:43:02 GMT do you need to update it instantly?
14:43:05 GMT yes
14:43:24 GMT well, a few seconds delay is fine
14:43:27 GMT can i ask what you're doing with that stuff on a higher level?
14:44:06 GMT minus, https://www.powerdns.com/resources/2016%20UKNOF%20filtering%20bert%20hubert.pdf
14:44:12 GMT (it sounds interesting, hire me)
14:44:26 GMT minus, https://www.powerdns.com/careers.html
15:08:24 GMT that looks very exciting, Habbie. you're in the process of evaluating redis for it instead of cdb, right?
15:08:47 GMT it already runs on redis
15:09:01 GMT now i need to do subnets :)
15:09:23 GMT redis is already used for the domain part?
15:09:31 GMT and user prefs?
15:09:35 GMT user prefs are in redis
15:09:48 GMT domain part is in cdb or a third-party lib currently, but we're considering redis for that
15:11:16 GMT the whole thing runs as a cluster of DNS servers, right? and each of them can update the subnet blacklist?
15:11:36 GMT the subnet list (it's not a blacklist) is updated based on RADIUS or DHCP data generally
15:11:38 GMT not by the DNS servers
15:11:43 GMT the DNS servers only read
15:12:03 GMT oh, the subnet list maps customers then?
15:12:13 GMT yes
15:12:32 GMT and comes from a central place?
15:12:37 GMT yes
15:12:41 GMT replicated by redis
15:13:23 GMT the cdb? or are you already using triple-zsets
15:13:33 GMT i only have single IPs now
15:13:39 GMT those are easy
15:13:40 GMT ah
15:13:50 GMT the rsync solution didn't seem so bad
15:14:20 GMT it doesn't scale to actual ISP sizes
15:15:10 GMT how about just publishing the changes to the cdb via redis?
15:15:41 GMT should handle the 10-1000 updates/sec fine, and keep a local cdb on each DNS server
15:15:51 GMT you can't update cdb
15:15:54 GMT only rewrite it in full
15:15:55 GMT oh
15:15:58 GMT that sucks
15:16:02 GMT but yes, i've pondered doing redis+lmdb the way you describe
15:16:33 GMT mh, never heard of lmdb
15:17:16 GMT pretend i said bdb then :)
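A sketch of the "local store updated via redis" idea from this exchange: a central process publishes changes, and every DNS server subscribes and applies them to a local copy (a plain dict here, standing in for LMDB). The channel name and message format are invented, and, as noted later in this log, pub/sub has no replay, so a real deployment would need a resync path for subscribers that were disconnected. Assumes redis-py:

    import json
    import redis

    r = redis.Redis()
    local_store = {}  # stand-in for the in-process LMDB copy; reads are local and cheap

    def publish_update(key, value):
        # The central updater pushes each change to all DNS servers.
        r.publish('subnet-updates', json.dumps({'key': key, 'value': value}))

    def apply_updates_forever():
        p = r.pubsub()
        p.subscribe('subnet-updates')
        for message in p.listen():
            if message['type'] != 'message':
                continue
            update = json.loads(message['data'])
            local_store[update['key']] = update['value']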
15:17:54 GMT (actually i do work on a similarish setup, but with almost zero updates/second and fitting well enough into redis)
15:18:17 GMT though the 53bit resolution on sorted sets is a problem here too, never thought of that :/
15:18:29 GMT oops
15:19:05 GMT we're not anywhere near the 53bit limit, so precision is still fine
15:19:22 GMT i.e. numbers are still rendered without scientific notation
15:19:26 GMT ack
15:20:04 GMT but packing 64 bits' worth of stuff into a single key doesn't seem scalable anyway, so that needs a redo
15:21:12 GMT yup, ZRANGEBYSCORE is O(log(n)), not optimal
15:21:32 GMT well, anything else i can do for subnets will be log-ish as well
15:22:11 GMT probably
15:29:24 GMT something like in-memory LMDB updated via Redis/RabbitMQ seems like an option
15:30:10 GMT the big problem with redis pubsub is that if you miss a message due to bad/no connectivity you're shit outta luck
15:30:18 GMT uhuh
15:30:41 GMT it's complicated™
15:30:49 GMT :)
15:31:06 GMT (connectivity over the public internet to china is a bitch)
16:30:49 GMT sorted sets don't have a way to pop elements off (i.e. a ZPOP), right? looking for the ideal structure: a set that discards duplicates while allowing multiple clients to pop elements off in order rather than randomly.
16:32:27 GMT add 1, 2, 3, 2, 2, 4. pop 1, then 2, then 3, then 4.
16:52:08 GMT you can run a list and a set in parallel; that's one option i see
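For the ZPOP question above: Redis 5.0 later added ZPOPMIN, but on servers of this era a short Lua script run through EVAL gives an atomic pop that is safe with multiple consumers, while the sorted set itself discards duplicates. A sketch, assuming redis-py; the key name workqueue is made up:

    import redis

    r = redis.Redis()

    # Atomically fetch and remove the lowest-scored member, or return nil.
    ZPOP_LUA = """
    local item = redis.call('ZRANGE', KEYS[1], 0, 0)
    if item[1] then
        redis.call('ZREM', KEYS[1], item[1])
        return item[1]
    end
    return false
    """
    zpop = r.register_script(ZPOP_LUA)

    # Duplicate adds collapse into a single member; scores give the pop order.
    r.zadd('workqueue', {'1': 1, '2': 2, '3': 3})
    r.zadd('workqueue', {'2': 2})  # duplicate, discarded

    print(zpop(keys=['workqueue']))  # b'1'
    print(zpop(keys=['workqueue']))  # b'2'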
19:26:15 GMT I am evaluating Redis Cluster or Redis Sentinel for a high-traffic system.
19:26:39 GMT What are the key pros/cons of each tool?
19:27:58 GMT the con is, redis doesn't easily do persistence
19:28:12 GMT pro, well, if it runs it just runs, few moving parts => not many surprises
19:29:58 GMT stack exchange serves 1.3 billion page views a month and uses redis.
19:32:44 GMT I don’t care about persistence, I’m actually gonna just turn it off or whatever (no HDD sync)
19:32:56 GMT I have backup data in MySQL if redis falls over
19:33:13 GMT I mean the pros/cons of choosing the Cluster or Sentinel package
19:34:34 GMT Inge-: that's not much
19:35:35 GMT per month is such a nice inflationary unit that hides "a few hundred per second"
19:38:23 GMT dragoonis: cluster partitions data, sentinel doesn't
19:38:43 GMT so if you need horizontal scaling, sentinel is out of the game
19:39:07 GMT unless running separate "clusters" of sentinels is an option
19:39:20 GMT sentinel is pure HA/failover
19:39:55 GMT i got redis master-slave running; is the slave just a read database?
19:40:02 GMT should i be using that for writing data too?
19:40:09 GMT no
19:40:16 GMT only the master can write stuff
19:40:20 GMT gotcha
19:40:26 GMT thank you
19:46:08 GMT minus: I just have a shitton of data, and I'm going to be grouping it in hashsets, with lots of reading/writing to the sets as real-time data gets processed
19:46:22 GMT It is a huge buffer between 2 systems, syncing changes in real time from one system to the other
19:47:01 GMT So going with event keys like “transactions.add" => {1: [update1, update2, update3], 2: [up1, up2, up3] }
19:47:16 GMT 1 and 2 keys being the transaction ID - are you folks following?
19:47:45 GMT not quite, but cluster is more likely what you want if a single redis instance can't handle your volume
19:48:00 GMT how many queries/second do you expect?
19:49:13 GMT <*> Sembiance LOVES redis.
19:49:55 GMT instantaneous_ops_per_sec:3195
19:49:57 GMT Sembiance: and the redis loves you
19:50:09 GMT not bad, but it can handle a lot more
19:50:13 GMT used_memory_human:43.35G
19:50:17 GMT wow
19:50:39 GMT used_memory_human:1.00G
19:50:43 GMT I keep the number of commands I have to issue at a bare minimum. it's serving as my main and only data store, not a cache.
19:51:01 GMT tiny in comparison. tho still a lot more than expected, i should take a look at that
19:51:07 GMT db0:keys=318516717
19:51:18 GMT db0:keys=1107646
19:51:25 GMT some folks out there have MASSIVE installations of redis
19:51:45 GMT like tons more data hehe. but I love redis all the same even on my smaller install :)
19:52:34 GMT well, it's significant enough :)
19:52:45 GMT gonna need to add more ram to my server soon though heh
19:54:44 GMT in the past 10 years, very few technologies have come out that I super duper love and that have changed how I do things drastically
19:54:51 GMT Redis is easily in the Top 3 for me. Along with NodeJS
19:55:03 GMT we're just using it as a (pre-warmed) cache though
19:55:35 GMT minus: yah, most folks use it as a cache because they can't risk losing X amount of data if the system goes kaput
19:55:39 GMT but I'm just running a web game on it ;)
19:57:55 GMT which game?
19:59:15 GMT minus: hey! we have about 4 million updates per day that need to go into this “change buffer” that the other systems will subscribe to.
20:01:08 GMT System A gets activity, saves to its own DB, and pushes the same updates to Redis ..
20:01:17 GMT dragoonis: so 50 updates/sec on average; that's no problem for a single redis instance. sentinel should do the job then
20:01:45 GMT the only problem is that even sentinel can't guarantee that no data is lost if a redis instance dies
20:02:13 GMT That’s okay, we are going to store the “changes” in persistent storage elsewhere. Originally MySQL but now looking at Kafka.
20:02:37 GMT We built a “replay service” so we know what was sent to the change buffer (i.e. redis) but got lost
20:02:40 GMT Thoughts?
20:03:28 GMT Out of all the changes that we tried to put into the change buffer, we have “ACK” flags on them, so we can run a query to get all messages that haven't got an ACK and re-put them into redis.
20:03:55 GMT why not just use that then?
20:04:16 GMT use what?
20:05:34 GMT mysql/kafka
20:06:03 GMT You mean have workers on System B query Kafka for the unsynced data?
20:06:29 GMT i guess
20:07:02 GMT We need to pull msgs out grouped by EntityID (i.e. the TransactionID) and sort them by “created date”, i.e. the order that they happened in
20:07:12 GMT Do you know if Kafka has some kind of query lang to facilitate this?
20:07:37 GMT Was gonna use redis to create Sets for grouping, and use redis SORT to sort items by date
20:07:43 GMT okay, no idea, i'm totally lost
20:08:24 GMT :D
20:09:01 GMT minus: I’m gonna give sentinel a go.
20:09:11 GMT experienced with redis, never touched sentinel
20:09:18 GMT i see
20:09:19 GMT Need some form of horizontal scaling
20:09:33 GMT well, sentinel doesn't give that to you
20:10:22 GMT not auto-scaling - just the ability to keep data highly available across many nodes
20:42:09 GMT Hey guys can i have miss states in a day ?
20:42:32 GMT redis-write miss ?
20:43:38 GMT try again, this time with a proper sentence
21:23:29 GMT i am getting an error in redis: "Redis server went away". if i increase the number of connections in maxclients will it work?
21:23:38 GMT or do i need redis-cluster?
21:26:20 GMT any fix for that? php is having a hell of an issue
21:27:47 GMT if it is the client limit you're hitting, increasing it might help. but unless you really need that many connections, you might just be forgetting to close them?
23:17:46 GMT yeah
23:17:51 GMT do you think i should close them?
23:18:02 GMT i mean, i have heard of tcp connection errors
23:18:06 GMT if i close too fast
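On the Cluster-versus-Sentinel thread above: Sentinel is pure HA/failover, so the client asks the sentinels where the current master is and sends writes there, while replicas stay read-only. A minimal sketch, assuming redis-py; the host/port and the master name "mymaster" are illustrative (mymaster is merely the conventional default):

    from redis.sentinel import Sentinel

    sentinel = Sentinel([('localhost', 26379)], socket_timeout=0.5)

    master = sentinel.master_for('mymaster', socket_timeout=0.5)
    replica = sentinel.slave_for('mymaster', socket_timeout=0.5)

    master.set('transactions.last_id', 42)      # writes must go to the master
    print(replica.get('transactions.last_id'))  # replicas only serve reads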
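And for the change buffer that needs updates grouped by TransactionID and ordered by created date: instead of a plain set plus SORT as described above, one sorted set per transaction, scored by a creation timestamp, gives grouping and ordering in a single structure; this is a variation, not what was actually built, and the key naming is invented. Assumes redis-py 3.x:

    import time
    import redis

    r = redis.Redis()

    def push_update(transaction_id, payload):
        # Score by creation time so ZRANGE replays updates in order.
        r.zadd('transactions.add:%d' % transaction_id, {payload: time.time()})

    def drain_updates(transaction_id):
        key = 'transactions.add:%d' % transaction_id
        updates = r.zrange(key, 0, -1)  # oldest first
        r.delete(key)  # note: read+delete is not atomic; use MULTI or Lua with concurrent consumers
        return updates

    push_update(1, 'update1')
    push_update(1, 'update2')
    print(drain_updates(1))  # [b'update1', b'update2']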
23:20:23 GMT What are some of the slowest redis commands?
23:25:31 GMT wilornel__: KEYS
23:29:48 GMT thank you, badboy_ !
23:30:14 GMT don't ever use it in production
23:32:45 GMT it seems faster than BGREWRITEAOF and BGSAVE
23:32:58 GMT That makes sense because those are IO operations?
23:33:06 GMT Will KEYS make network calls to all slaves?
23:33:08 GMT all nodes*
23:33:55 GMT BG* commands are actually fast from the client's perspective, as they run the heavy work in a forked process
23:34:03 GMT keys will not make network calls
23:34:18 GMT but it will read each and every key in your database and return them all to the client
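The production-safe alternative to KEYS is SCAN, which walks the keyspace incrementally in small batches instead of blocking the server for one O(N) pass. A sketch, assuming redis-py, whose scan_iter() hides the cursor loop; the match pattern is illustrative:

    import redis

    r = redis.Redis()

    # Instead of: r.keys('user:*')  -- O(N), blocks redis while it scans everything
    for key in r.scan_iter(match='user:*', count=500):
        print(key)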