07:09:03 GMT do I get that right - if a sentinel-aware client is connected to a master that goes down/loses its master state, it'll have to be intelligent enough to try an alternative master by itself (after e.g. a timeout on an in-progress operation)?
07:26:08 GMT OK, https://redis.io/topics/sentinel-clients answers this question sufficiently, I think
08:53:05 GMT is redis eager to mentor in google summer of code
10:30:06 GMT I'd like to measure how much socket I/O/traffic a redis instance serving via a UNIX domain socket generates (on GNU/Linux); any recommendations for doing that non-intrusively?
10:55:19 GMT colo-work: maybe some tracing tool like systemtap or bpf can hook into the relevant kernel tracepoints
10:55:56 GMT I don't think unix sockets have easy attach points similar to regular sockets
11:00:46 GMT badboy_, yeah, I was figuring the same thing and am looking for pre-made solutions to my problem ;)
11:00:54 GMT that should even be possible with ftrace
11:01:18 GMT problem is, I've never interfaced with it directly, but have always used Brendan Gregg's excellent tools abstracting the low-level stuff away
11:01:46 GMT :D that's what I would have pointed you to
11:05:14 GMT so let's crash a few machines experimenting, I guess 8)
11:05:21 GMT (maybe.)
11:07:42 GMT 28036 available_filter_functions
11:07:43 GMT yikes
11:49:41 GMT I'm beginning to wonder if ftrace is the right tool for this job...
11:55:41 GMT this is going nowhere for now, I'm afraid. *performs mental context switch*
11:57:06 GMT I was trying to establish whether our current setup, where we have host-local, read-only redis slaves on each app server, is a necessity (or at least a boon) for if/when we migrate to a sentinel-based setup
11:58:18 GMT speaking of which, how does one properly "bootstrap" a sentinel-managed fleet of redisen? do I set up a number of redis-servers that are slaves of a common master, and then let sentinel take over?
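On the bootstrap question: the usual flow is roughly what was described - start the master, point the slaves at it, then start the sentinels with only the master's address configured. A minimal sketch, assuming placeholder addresses and the service name "mymaster" (three sentinels and a quorum of 2 are just a common choice, not anything this channel prescribed):

    # redis.conf on each slave (Redis 3.x-era syntax; address is a placeholder):
    slaveof 10.0.0.1 6379

    # sentinel.conf on each of (ideally at least three) sentinel nodes;
    # the trailing 2 is the quorum: how many sentinels must agree the
    # master is down before a failover is started.
    sentinel monitor mymaster 10.0.0.1 6379 2
    sentinel down-after-milliseconds mymaster 5000
    sentinel failover-timeout mymaster 60000

Sentinels discover the slaves (and each other) on their own, by querying the master with INFO and announcing themselves over pub/sub, so only the master's address needs to be configured up front.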
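And on the 07:09 question: yes - the client has to notice the error/timeout and then re-ask the sentinels who the current master is, which is the flow the sentinel-clients document describes. A minimal sketch of that with redis-py, where the sentinel addresses and the "mymaster" name are placeholders:

    # pip install redis  (redis-py)
    from redis.sentinel import Sentinel

    # Placeholder sentinel addresses; a real deployment lists its own.
    sentinel = Sentinel([("10.0.0.10", 26379), ("10.0.0.11", 26379),
                         ("10.0.0.12", 26379)], socket_timeout=0.5)

    # master_for() hands back a client that re-queries the sentinels for
    # the current master address whenever it has to reconnect, so after a
    # failover the retried command lands on the newly promoted master.
    master = sentinel.master_for("mymaster", socket_timeout=0.5)
    master.set("greeting", "hello")

    # Reads can go to a replica the same way.
    replica = sentinel.slave_for("mymaster", socket_timeout=0.5)
    print(replica.get("greeting"))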
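For the unix-socket measurement question, one eBPF-flavoured sketch via the BCC Python bindings is to put a kprobe on the kernel's AF_UNIX stream send path and sum the byte counts per second. Caveats: it needs root and the BCC toolchain, the PID is a placeholder, and unix_stream_sendmsg is an internal kernel symbol, not a stable API - an assumption that holds on common kernels, not a guarantee:

    # Needs root and BCC (https://github.com/iovisor/bcc) installed.
    import time
    from bcc import BPF

    REDIS_PID = 1234  # placeholder: PID of the redis-server to account for

    prog = """
    #include <uapi/linux/ptrace.h>

    BPF_ARRAY(tx_bytes, u64, 1);

    // unix_stream_sendmsg(sock, msg, len) is the AF_UNIX stream send path;
    // it runs in the sending process's context, so a PID filter works.
    int on_unix_send(struct pt_regs *ctx, void *sock, void *msg, size_t len) {
        if ((bpf_get_current_pid_tgid() >> 32) != PID_FILTER)
            return 0;  // not the redis-server we care about
        int zero = 0;
        u64 *val = tx_bytes.lookup(&zero);
        if (val)
            __sync_fetch_and_add(val, len);
        return 0;
    }
    """

    b = BPF(text=prog.replace("PID_FILTER", str(REDIS_PID)))
    b.attach_kprobe(event="unix_stream_sendmsg", fn_name="on_unix_send")

    while True:
        time.sleep(1)
        total = sum(v.value for v in b["tx_bytes"].values())
        print("bytes sent by redis over unix stream sockets:", total)

Counting bytes received as well would need a kretprobe on unix_stream_recvmsg, since the actual byte count is the function's return value rather than an argument.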
18:09:57 GMT hi all. I'm moving from memcached. There I used to cache all the GET responses my API returned.
18:10:55 GMT But then I had a new requirement: when there is a POST to a given URL, I should flush the cache. Memcached didn't give me any way to do that. So I started to store both the cache AND an array of keys. So, when I need to flush, I fetch all the keys and delete every one of them from memcached.
18:11:18 GMT This is working perfectly on redis. But is it the "redis" way?
18:12:13 GMT do you not have all the information needed to delete the caches in the POST?
18:12:21 GMT One thing that redis has that memcached didn't is SCAN. I think it can handle my issue better.
18:13:11 GMT depending on how large your keyspace is with regard to the amount you want to delete, SCAN might be inefficient
18:15:16 GMT minus what about my current "solution"? SET my_new_key "..." ; SET cached_keys (GET cached_keys + my_new_key); (I'm doing the cached_keys + my_new_key part in PHP, FWIW)
18:18:26 GMT minus as for "how large" my keyspace is: I have no idea. :( But we cache based on URL and query string, so it can be somewhere around fifty million keys, worst case scenario
18:19:54 GMT minus how much I want to delete: around 1 million or so. I don't have any way to predict when I'll have to delete.
18:20:11 GMT how do you know which ones you have to delete
18:20:44 GMT a SCAN on 50M keys sounds like it could take a while; depending on how often you do that it might be fine though
18:54:54 GMT minus the ones I have to delete start with a given pattern. Say: /posts/POST_ID or /categories/CAT_ID
21:17:04 GMT I have a potentially dumb q. On [old server] I have a Redis 3.x DB that is db 0, and I want to move it to [new server] that already has Redis dbs 0 and 1, so I effectively want to move it to db 2. What is the easiest way to do this? Use a tool to export to CSV and import? Or is there something baked into Redis that will let me do this?
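Returning to the cache-flush discussion: one more idiomatic alternative to the GET/SET string concatenation for cached_keys is to keep the index in a Redis SET, one per deletable prefix, so a flush only touches the keys it has to instead of scanning 50M. A sketch with redis-py; the "idx:" naming and the explicit prefix argument are made up for illustration:

    import redis

    r = redis.Redis()  # placeholder connection; adjust host/port as needed

    def cache_response(url, prefix, body, ttl=3600):
        # prefix is whatever a later POST invalidates, e.g. "/posts/123".
        pipe = r.pipeline()
        pipe.setex(url, ttl, body)        # the cached response itself
        pipe.sadd("idx:" + prefix, url)   # index kept in a SET, not a string
        pipe.execute()

    def flush_prefix(prefix):
        # Delete every cached key indexed under one prefix.
        index = "idx:" + prefix
        pipe = r.pipeline()
        for key in r.sscan_iter(index):   # iterate the SET incrementally
            pipe.delete(key)
        pipe.delete(index)
        # (for ~1M keys you would execute the pipeline in batches)
        pipe.execute()

The SCAN route would be r.scan_iter(match="/posts/123*"), but as noted above, walking the whole keyspace on every flush is where that gets expensive; per-prefix sets trade a little write-time bookkeeping for cheap deletes.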
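On the 21:17 question: no CSV detour needed - Redis can serialize any value natively with DUMP/RESTORE, and the server-side MIGRATE command combines DUMP, RESTORE and DEL per key and takes a destination-db argument. A minimal key-by-key copy sketch with redis-py; the hostnames are placeholders, and RESTORE's replace flag needs Redis >= 3.0 on the receiving side:

    import redis

    # Placeholder hostnames; db 0 on the old box -> db 2 on the new one.
    src = redis.Redis(host="old-server", port=6379, db=0)
    dst = redis.Redis(host="new-server", port=6379, db=2)

    for key in src.scan_iter(count=1000):
        data = src.dump(key)      # serialized value, works for any type
        if data is None:
            continue              # key expired between SCAN and DUMP
        ttl = src.pttl(key)       # remaining TTL in ms; -1 means no expiry
        dst.restore(key, ttl if ttl > 0 else 0, data, replace=True)

This is a sketch, not a consistency tool: keys written on the old server while the copy runs can be missed, so for a clean cutover you'd stop writes (or compare DBSIZE before and after) first.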