08:26:40 GMT I am using SolrCloud (Solr 5.2.1). I have 3 shards of one collection, but some of my Solr instances keep going down and entering recovery mode
08:26:53 GMT how can I check why this is happening?
08:29:49 GMT sorry, wrong window
14:45:32 GMT I need a key-value database that can behave like a sorted queue, accessible from PHP and Perl. I'm trying Redis, but it seems silly: I can't get the top-ranked element without knowing the score range.
15:06:56 GMT Question for people who have hosted Redis servers in multiple locations...
15:08:20 GMT We need to set up a server in North America away from our European servers, but we still need to talk to the Redis server in Europe. Is it best to set up a Redis slave in North America that points to the master in Europe, or should we just connect straight to the Redis in Europe? 90ms ping
21:36:34 GMT PSUser2: sure, but in theory you should be able to make that workable, depending on what you're trying to do
21:36:49 GMT I've managed to do it
21:36:55 GMT PSUser2: also, Redis 3.2+ has a C module API you could surely write that function in
21:37:02 GMT it's like 10x faster than doing it in Lua
21:37:14 GMT but I just upgraded to the Redis 3.0 branch myself, so I still dream about doing that :)
21:38:49 GMT profall: that's not great ping... another question would be, what sort of throughput can you achieve?
21:39:52 GMT syncs, especially full syncs, could potentially be bad if your data set is sufficiently large
21:40:49 GMT profall: so, are you basically asking whether it's better to have the master+slave in Europe and have North American users hit those over the pond, versus having a slave in North America so users don't have to take the latency hit?
21:41:50 GMT it's hard to answer without knowing more details about your application and use cases. speaking from my own experience and the kinds of things I work on, I would want something that people could hit locally. 90ms is a long time! it's a duration that, although fairly small, is perceivable by people.
21:41:59 GMT that's just short of 1/10th of a second
21:43:22 GMT the 99th percentile of my complex queries typically takes under 10ms, but can spike up to 40ms, which I'm OK with. At the 90th percentile, the most expensive one takes just over 1ms. Mean latencies are on the scale of hundreds of microseconds.
21:47:20 GMT an aside -- after tuning my zset-max-ziplist-entries to a higher value to account for my expected max zset size (1000), and having to bounce all the redis-server instances when rebuilding the machines on better hardware, my memory usage went down by about 50% (30GB)
22:06:14 GMT hmm
22:06:43 GMT Well, it's just one nodejs application using the Redis server, no other users.
22:08:54 GMT The problem is we HAVE to communicate with the master in Europe no matter what.
22:13:11 GMT sure. so you have to take that hit somewhere. if you can deal with higher latency between the application and redis-server without impacting user experience or other things I may not be considering (you know your app best), then I'd probably go with having the slave in Europe
22:13:35 GMT note that you can daisy-chain slaves, though, at the cost of more replication delay
22:14:08 GMT ok
22:14:11 GMT I haven't played around with that yet, so I can't really quantify the impact
22:14:23 GMT Do you recommend always connecting to a slave, and not the master, from an application?
22:14:37 GMT for reads, presumably?
22:14:49 GMT The nodejs app writes as well.
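(A sketch of the read/write split being discussed here, under stated assumptions: the hostnames and the `hits` key are hypothetical, and it assumes the ioredis client for Node. Writes go to the master in Europe; reads that can tolerate a little replication lag stay on the local replica.)

```typescript
import Redis from "ioredis";

// Hypothetical hosts: the master lives in Europe, the replica in North America.
const master = new Redis({ host: "redis-eu.example.com", port: 6379 });
const replica = new Redis({ host: "redis-na.example.com", port: 6379 });

// Writes must cross the pond to the master.
async function recordHit(page: string): Promise<void> {
  await master.zincrby("hits", 1, page);
}

// Reads that can tolerate replication lag hit the local replica.
async function topPages(n: number): Promise<string[]> {
  return replica.zrevrange("hits", 0, n - 1);
}
```

(The tradeoff matches the conversation: every write still pays the ~90ms round trip to Europe, but read latency drops to local-network speeds.)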
22:15:04 GMT you can't write to slaves. well, you can, but that data won't replicate back to the master afaik
22:15:08 GMT yea
22:15:17 GMT So I have to just suck up the 90ms ping time
22:15:19 GMT we actually write temp data to slaves as part of complex read queries
22:15:54 GMT so in our case, we offload the complex read queries onto the slaves
22:16:26 GMT ok
22:16:30 GMT but, yeah, if you're talking about master+slave and dropping a slave in NA, it's not going to help your write case.
22:17:20 GMT gotta run. cheers
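(Returning to the sorted-queue question from 14:45:32: Redis can return the top-ranked member of a sorted set by rank, without knowing the score range. A minimal sketch, again assuming ioredis and a hypothetical `jobs` key: ZRANGE reads rank 0 and ZREMRANGEBYRANK trims it, wrapped in a MULTI so two clients can't pop the same member.)

```typescript
import Redis from "ioredis";

const redis = new Redis({ host: "127.0.0.1", port: 6379 }); // hypothetical instance

// Pop the top-ranked (lowest-score) member without knowing any scores:
// read rank 0 and trim rank 0 inside one MULTI transaction.
async function popLowest(key: string): Promise<string | null> {
  const results = await redis
    .multi()
    .zrange(key, 0, 0)
    .zremrangebyrank(key, 0, 0)
    .exec();
  const members = (results?.[0]?.[1] as string[]) ?? [];
  return members.length > 0 ? members[0] : null;
}

// e.g. const next = await popLowest("jobs");
```

(The same read works from redis-cli as `ZRANGE jobs 0 0 WITHSCORES`, and Redis 5.0 later added ZPOPMIN, which does the whole pop in a single command.)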