06:29:51 GMT everybody'
06:36:00 GMT Hi, I came to know that Redis doesn't support master-master replication. In the Redis tutorial I can see that there is a configuration with 6 nodes: 3 masters, 3 slaves.
06:36:24 GMT Can anyone tell me what the aim of this configuration is?
06:38:55 GMT My requirement is to reduce the number of connections made from the app server to Redis, so I was looking for a way to point at multiple Redis nodes, so that if I create a key on Redis node 1, I can delete that key from Redis node 2.
06:38:59 GMT Is that possible?
06:41:35 GMT edge_: It is not possible. Redis Cluster uses sharding to distribute keys across particular Redis hosts. The client has a mechanism to determine which shard to connect to based on the key.
06:42:50 GMT edge_: If you have a read-heavy load, you could use read-only replicas (master-slave). If you need a better distribution of writes and reads, cluster might still work for you, but you can't make arbitrary writes to arbitrary Redis hosts.
06:43:10 GMT edge_: take a look at https://redis.io/topics/cluster-tutorial
06:46:22 GMT edge_: To be more particular about the master-master replication in the example: if I'm not mistaken, the masters are connected to each other so that if you expand the cluster, keys can be rehomed to their new shards.
06:47:12 GMT (The shard a key belongs to is decided via a hash function, and certain shards belong to certain hosts. When you add more hosts, some of the existing data might need to be moved to the new machine.)
06:58:23 GMT cbgbt_: I didn't understand the purpose of sharding; all I know is that each node owns a part of the hash space. I could copy the data to all the nodes at the beginning, but when the cluster is active, will I be able to write from one node and read from some other node?
07:02:18 GMT edge_: You cannot write data to one node and read it from another, no. Only if you are doing master-slave replication can you do that.
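[Editor's note] The key-to-shard mechanism edge_ describes can be sketched in Python: Redis Cluster hashes each key with CRC16/XMODEM (which the standard library's `binascii.crc_hqx` computes when seeded with 0) modulo 16384 hash slots. This sketch ignores `{hash tags}`, which a real cluster client honors.

```python
import binascii

HASH_SLOTS = 16384  # total hash slots in Redis Cluster


def key_slot(key: bytes) -> int:
    """Cluster hash slot for a key: CRC16/XMODEM modulo 16384."""
    # crc_hqx with an initial value of 0 is the CRC16 variant Redis uses.
    return binascii.crc_hqx(key, 0) % HASH_SLOTS


# The cluster tutorial linked above shows "foo" redirected to slot 12182.
print(key_slot(b"foo"))  # 12182
```

A cluster-aware client keeps a map from slot ranges to masters and sends each command to the owner of the key's slot, which is why a key written through node 1 is not readable from node 2 unless node 2 happens to own that slot.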
07:02:56 GMT no, in the case of master-slave it isn't possible
07:03:05 GMT so it isn't possible at all?
07:04:08 GMT If you performed all writes on the master, that would work. Alternatively, I think Redis Cluster can solve your problem; it just doesn't work the way you want it to.
07:05:14 GMT I'll try to explain. Redis Cluster divides all the keys it sees into 16383 "hash slots". Some number of these hash slots become associated with a particular master.
07:06:04 GMT sorry, 16384
07:06:29 GMT If you have two masters, A and B, the first 8192 hash slots are assigned to A and the other half are assigned to B.
07:06:59 GMT When you use the Redis Cluster client to write some key "foo", the client figures out which hash slot "foo" belongs to, then figures out which master owns that hash slot, and writes to it.
07:07:29 GMT Later, when you do a read of "foo", it does the same thing to determine which hash slot and host to read from.
07:07:50 GMT In this way, your keys are spread across multiple hosts, distributing the read and write load that any one host has to accept.
07:42:39 GMT cbgbt_: OK, I will try it out. So you are saying the hashing is done on the key itself.
07:42:46 GMT thanks!
16:44:37 GMT Anyone in here who has managed to run and pass the full redis-py suite?
16:44:51 GMT I booted Ubuntu in Docker to rule out OS X issues, but I'm in so much pain
16:44:54 GMT I don't even know where to begin
16:51:20 GMT Habbie: 350 passed in 7.33 seconds (redis-py git d49aaacb, redis git r6019.790310d-1)
16:51:32 GMT minus, great, what OS, environment, etc.?
16:51:38 GMT archlinux
16:51:55 GMT What did you run to get that result?
16:52:05 GMT python setup.py test
16:52:14 GMT No deps, no pip, just that?
16:52:19 GMT (that is Python 3)
16:52:45 GMT oh, I probably have the deps installed
16:54:06 GMT trying with a venv, I'm having problems
16:54:08 GMT I had trouble figuring out the deps
16:54:08 GMT but I'll give Arch a shot
16:54:08 GMT always nice and fresh
16:54:10 GMT ah
16:54:20 GMT their Travis tests don't use a venv either
16:54:31 GMT but I have trouble understanding what 'pip install -e' means so far
16:55:03 GMT installs a package from a URL, IIRC
16:55:12 GMT well, there are no arguments
16:55:51 GMT but thanks for the git numbers and the Arch pointer, I'll give it a shot later
16:55:52 GMT it gives me an error if I try
16:56:00 GMT fun
16:56:06 GMT well, I'm glad it's not all me
16:56:40 GMT with a venv I had to manually install `mock`, and then I get this when running the tests: E ModuleNotFoundError: No module named 'testscenarios'
16:57:02 GMT yes
16:57:05 GMT you can install that one too
16:57:15 GMT oh, is that a dep?
16:57:17 GMT this is my path of pain
16:57:17 GMT that ends in confusion
16:57:24 GMT well, it's not listed as such anywhere
16:57:29 GMT but installing it does change the outcome
16:57:47 GMT ah
16:57:54 GMT now the tests run better
18:02:02 GMT Is it possible to make Redis only store commonly used keys in memory, and the rest in the dump.rdb?
18:03:47 GMT hue: it's not possible
18:04:02 GMT hue: everything is stored in memory. You can only save/load to disk.
18:04:49 GMT shit
18:05:39 GMT You could always enable swap and it would kind of act how you want, but it'd probably be much slower than you're thinking.
18:06:02 GMT I'd just buy more RAM, it's cheap
18:06:34 GMT or use Redis as an LRU cache and put your complete data somewhere else
18:07:01 GMT Are there any other databases that use a key-value model like Redis?
18:07:05 GMT hue: how much data do you have?
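[Editor's note] The "use Redis as an LRU cache" suggestion above boils down to two redis.conf directives; a minimal sketch, with an illustrative memory cap:

```
# redis.conf sketch: cap memory and evict the least-recently-used keys
# once the cap is hit (the 2gb value is illustrative).
maxmemory 2gb
maxmemory-policy allkeys-lru
```

With `allkeys-lru` any key may be evicted; `volatile-lru` instead evicts only keys that carry a TTL.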
18:08:05 GMT I plan to store chat logs, which will be frequently SET (about 100 per second?) and *may* be frequently GET
18:10:31 GMT hue: Kyoto Tycoon, LevelDB, or even Postgres
19:50:56 GMT minus, went with Ubuntu 16.10 instead of 12.04, now everything just works
19:51:08 GMT minus, on 12.04, when I got the tests to run, it came up with 'evalsha: command not found', and then I gave up on backporting ;)
19:51:31 GMT too old a Redis?
19:51:37 GMT too old everything
19:51:47 GMT pip too old to install the (old!) mock that redis-py needed
19:51:49 GMT and then too old a Redis
19:51:51 GMT etc.
19:51:55 GMT 4 years are like 2 centuries in computer time
19:52:00 GMT yes
19:52:04 GMT it's 5 years now
19:52:23 GMT not quite, but almost
19:52:39 GMT well, they didn't freeze in April :)
20:32:31 GMT Hello, does anyone have a second to help with HGET latency?
20:32:47 GMT just ask your question, pneville
20:33:19 GMT We are using Redis 3.0.1 in cluster mode. We turned off BGSAVE and AOF, but we are seeing high latency on HGET requests. When we run 'slowlog get 10', we see 10 ms or higher.
20:33:22 GMT seeing lots of timeouts
20:33:37 GMT that is weird for HGET
20:33:39 GMT It is weird; we followed the latency page recommendations and everything matches them.
20:33:41 GMT yeah
20:33:56 GMT are you swapping memory?
20:33:59 GMT Memory is free on the server, over 300 GB available. No I/O usage.
20:34:23 GMT that's me out of ideas then
20:34:25 GMT Swap is not being used and is completely free. We can hard-set swapoff, but it is completely free and not used.
20:34:37 GMT yeah, it is really weird. It feels like chasing a ghost.
20:35:43 GMT activerehashing is enabled, though
20:35:49 GMT I can post my cfg
20:36:06 GMT everything is pretty default in those cases, though
20:36:38 GMT activerehashing shouldn't cause this at a quick read, but it's worth trying to disable it?
20:36:47 GMT It is weird, though, because with Redis running we see sub-ms ping responses gradually jump to 9 ms, then back to 0. When we turn off Redis, there is no ping latency.
20:36:58 GMT ok
20:37:39 GMT let me try
20:42:06 GMT is huge pages turned on?
20:42:25 GMT transparent huge pages*
20:42:29 GMT set to never
20:42:32 GMT on all 4 nodes
20:42:43 GMT Just disabled activerehashing; I thought it was working, but it came back.
20:43:12 GMT maybe get a newer version of Redis, or at least scan the changelogs
20:43:20 GMT 3.0.1 isn't very new
20:43:31 GMT 3.0.7 is the latest 3.0, IIRC
20:50:59 GMT ok
21:14:07 GMT OK, we are going to try to upgrade. Picking a low-latency PoP to test and make sure handling is the same.
21:14:16 GMT it is weird, though, that this would happen
21:20:51 GMT ok, going to move to Redis 3.1
21:20:54 GMT oops, 3.2
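[Editor's note] A closing note on the 'pip install -e' confusion from the redis-py thread earlier: `-e`/`--editable` installs a project in development mode from a local path (or a VCS URL), linking the installed package to the source tree so edits take effect without reinstalling. pip's own help text says as much; a quick check, assuming pip is available in the running interpreter:

```python
# Ask pip itself what -e means by grepping its help output.
import subprocess
import sys

out = subprocess.run(
    [sys.executable, "-m", "pip", "install", "--help"],
    capture_output=True, text=True, check=True,
).stdout

# Print the line(s) documenting -e/--editable.
for line in out.splitlines():
    if "--editable" in line:
        print(line.strip())
```

So in the redis-py checkout, `pip install -e .` (note the `.` argument, which was missing in the session above) would install the working tree itself; running it with no argument is the error that was seen.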