02:55:24 GMT So I've been using redis for a new system and started using a redlock implementation with a single master. The performance has been terrible since I had set the retry delay to 50ms and sometimes have 200 processes fighting over a lock at the same time. It hammers the crud out of the redis server
02:55:42 GMT What would be the proposed solution? Different locking software? Multi-master system? Longer retry delay?
07:04:07 GMT MasterGberry: what problem are you trying to solve?
08:15:06 GMT the obvious solution is to not use locks if possible
11:41:35 GMT Hi
12:28:43 GMT how are writes handled by read-only slaves?
12:29:09 GMT you get back an error
12:30:57 GMT minus: and if it's a non-read-only slave?
12:31:19 GMT it'll just write stuff, but a sync from the master may override stuff
12:34:25 GMT minus: cool; am basically re-designing the cache layer; want to have multiple cache servers that autoscale. Will be using sentinel since all the servers are ephemeral and new masters will need to be elected. All of the servers will be behind a LB with round-robin LBing; just want to confirm whether that'll work
12:34:39 GMT I can imagine writes being made to the slaves
12:34:48 GMT so making them read-only probably won't work
12:36:03 GMT usually writes should be made against the master, but in certain scenarios clients may accidentally write to a slave?
12:37:20 GMT minus: Well, not accidentally, but by design. If I have 1 master + 20 slaves w/ round robin, it's a 1/20 chance that writes will be made to the master
12:37:50 GMT why do you have a master at all then?
12:38:58 GMT minus: interesting question; never considered a masterless design, is that even possible? sounds a lot more complicated than a slave+master relationship
12:39:44 GMT unless all of the machines should hold the same data it seems strange to use master/slave
12:40:59 GMT minus: well, they should hold the same cache data, yes
12:41:09 GMT i guess they have nearly the same data, but not necessarily identical
12:41:24 GMT as in, a simple LRU cache
12:41:38 GMT and you have the master/slave to warm up a server after it went down?
12:44:47 GMT minus: does sound unnecessarily complicated. I guess the alternative is to write to the master and read from the slaves
12:46:51 GMT minus: although if the slaves are syncing with the master, I don't necessarily see why that wouldn't work? the chatter the master would have to deal with is significant, that's a given
12:48:28 GMT i was asking what your setup looks like, it wasn't a suggestion
12:49:26 GMT minus: do you think my current design ambitions are viable?
12:50:55 GMT you're using redis as an LRU cache?
12:53:34 GMT minus, yes
12:54:59 GMT and the intention behind master/slave is that in case of failure of one instance you have your cache warm again quickly?
12:56:15 GMT minus: yes; that, and consistency across all slaves
12:58:15 GMT then a master/slave setup with writes only happening against the master is the right solution
12:59:48 GMT minus: cool, I guess I shouldn't put the master in the LB group then; have the slaves serving read-only, and writes going to the master.
13:00:51 GMT if you're using HAproxy as LB you can configure it to have a port that always goes to the master
13:02:16 GMT minus: ah, that's a very good idea; would allow me to put the master in the LB group and use sentinel for re-election in the event of failure. I was initially going to use an L4 LB
13:02:38 GMT L4 LBs are obviously a lot more limited in what they can do
13:02:53 GMT HAproxy is L4, isn't it?
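
A minimal sketch of one way to ease the lock contention described at 02:55:24: instead of a fixed 50 ms retry, each waiter backs off exponentially with jitter so 200 contending processes don't retry in lockstep against the single master. This is not from the chat; it assumes redis-py and a plain SET NX PX lock, and the key name, TTL, and timing values are illustrative.

import random
import time
import uuid

import redis

r = redis.Redis(host="localhost", port=6379)

def acquire_lock(name, ttl_ms=10_000, base_delay=0.05, max_delay=1.0, timeout=5.0):
    """Try to take a lock with SET NX PX, backing off with jitter between attempts."""
    token = uuid.uuid4().hex
    deadline = time.monotonic() + timeout
    delay = base_delay
    while time.monotonic() < deadline:
        # SET key token NX PX ttl -- succeeds only if the key does not exist yet
        if r.set(name, token, nx=True, px=ttl_ms):
            return token
        # sleep a random slice of the current delay so waiters spread out,
        # then double the delay up to a cap
        time.sleep(random.uniform(0, delay))
        delay = min(delay * 2, max_delay)
    return None

def release_lock(name, token):
    """Release only if we still own the lock (compare-and-delete via a Lua script)."""
    script = """
    if redis.call('get', KEYS[1]) == ARGV[1] then
        return redis.call('del', KEYS[1])
    else
        return 0
    end
    """
    return r.eval(script, 1, name, token)

Usage would look like: token = acquire_lock("locks:some-job"); do the work if token is not None; then release_lock("locks:some-job", token). The jittered backoff trades a little extra latency per waiter for far fewer commands hitting the Redis server under contention.
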
13:03:00 GMT L7
13:03:26 GMT it operates on ports though
13:03:34 GMT HAproxy is good; we use it in production
13:03:50 GMT it's simple and works
13:04:00 GMT minus, indeed; it does a lot more than just ports though
13:04:29 GMT too bad rackspace don't use it on their HW LBs
13:04:51 GMT https://github.com/falsecz/haredis/blob/master/haproxy.cfg something like that
13:05:26 GMT we use that too (or rather our hoster does for us; they suggested it too. initially the idea was to use twemproxy)
13:05:50 GMT nice; am just wondering whether I should spin up another HAproxy instance or use the existing one; hmm
13:09:19 GMT minus, nice talking to you; thanks for your suggestions, very useful.
13:09:30 GMT the HAproxy is obviously a SPOF
13:10:38 GMT the better option is sentinel-aware clients
13:10:40 GMT minus: yep; LBs are always tricky when it comes to SPOFs; in networking, you would do some sort of high availability with virtual IPs, haven't really looked into HA for cloud LBs
13:11:05 GMT yeah, you could run 2+ HAproxy hosts and do IP LB
13:12:23 GMT minus: yeah, in the last 3 years we have hardly had any downtime with our single haproxy setup though; it's a pretty solid piece of software
13:12:39 GMT a few glitches here and there but solid otherwise
13:13:01 GMT it's usually app and db servers that play up the most
18:57:21 GMT A little module to kick off the week - countminsketch: https://github.com/RedisLabsModules/countminsketch
18:59:13 GMT is that similar to hyperloglog, itamarhaber?
19:00:15 GMT a big meh at your choice of license
19:40:49 GMT Am I right in thinking that if I want to use several different passwords for redis queues, I should run different processes?
19:41:49 GMT yes, redis only supports one password per instance.
19:42:01 GMT with that password you have full control and can also hijack the system
19:42:17 GMT that lines up with what I read, thanks minus
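
As a sketch of the sentinel-aware client option minus raises at 13:10:38, here is roughly what it looks like with redis-py's Sentinel support. The sentinel hostnames and the "mymaster" service name are assumptions, not from the chat. The client asks Sentinel for the current master for writes and a replica for reads, so there is no HAproxy hop to act as a single point of failure.

from redis.sentinel import Sentinel

# each entry is one Sentinel process; in practice you list several for redundancy
sentinel = Sentinel(
    [("sentinel-1", 26379), ("sentinel-2", 26379), ("sentinel-3", 26379)],
    socket_timeout=0.5,
)

# writes go to whichever node Sentinel currently reports as master
master = sentinel.master_for("mymaster", socket_timeout=0.5)
master.set("cache:some-key", "value", ex=300)

# reads can be spread over the replicas
replica = sentinel.slave_for("mymaster", socket_timeout=0.5)
print(replica.get("cache:some-key"))

After a failover, master_for simply starts returning connections to the newly promoted master, so failover handling moves into the client library instead of the load balancer.
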