00:40:15 GMT OverCoder: your question is... unclear
00:40:58 GMT I haven't asked yet :v
00:41:04 GMT I have a question
00:41:11 GMT ><
00:41:17 GMT Does Redis have any sort of data separation or something?
00:41:23 GMT biax: your question is also unclear
00:41:23 GMT Like, in SQL, you have a DB, and that DB contains tables
00:41:40 GMT but in Redis I can't see anything other than just a global namespace containing everything
00:41:41 GMT <*> OverCoder is new
00:42:10 GMT redis has databases technically but they aren't generally used
00:42:13 GMT to my knowledge that is correct, but I'm not a Redis expert :-)
00:42:20 GMT o
00:42:29 GMT redis just stores keys and values, it's a big single namespace
00:42:43 GMT so do I just use keys like "access_token:1234" where 1234 is the user ID and access_token is something that identifies the type of the key?
00:42:45 GMT you do namespacing on your own, colons or something
00:42:50 GMT hmm
00:42:51 GMT yup
00:43:13 GMT I wonder how much more efficient it would be to just have separate tables or something
00:43:19 GMT why?
00:44:27 GMT there's no such thing as relations or any of that fanciness, making tables a first-class concept doesn't get you anything :P
00:44:27 GMT Because say I have a Redis of 1 million rows, grouped with "access_tokens:.+", and only a few of those rows match "user_data:.+"; now, eh, if I want to SCAN some user_data, I'd have to step through all those access_tokens
00:44:31 GMT that doesn't sound efficient
00:44:52 GMT s/grouped with/identified with/
00:44:53 GMT you shouldn't be using SCAN like that :D
00:44:59 GMT then heck
00:45:01 GMT how does that work
00:45:11 GMT what's your use case? collect all user's keys?
00:45:13 GMT tokens**
00:45:55 GMT erm, honestly I have two uses for it now on my website:
00:45:56 GMT i'd think normal access patterns would be "this user is making a request, get their token"
00:46:02 GMT in which case you already have their ID, and only need a GET
00:46:24 GMT 1) I need it to track an article-views thingy, so refreshing the page many times won't change the view counter of an article; that filter is per IP
00:46:48 GMT 2) Store login tokens, i.e. I use a RESTful API and I want users, after they log in, to get an access token that is used for any other API query
00:46:53 GMT (within the website)
00:46:59 GMT no need for SCAN with either of those
00:47:05 GMT o
00:47:18 GMT right, what was I even thinking of
00:47:22 GMT ^_^
00:47:25 GMT uh SQL is SELECT or die
00:47:44 GMT So I quickly looked at SCAN, heh, anyways thanks :P
00:48:00 GMT you already have all of the needed info there, so no need to search for it in your keyspace
00:48:22 GMT gotcha :v
00:48:35 GMT (And as a side-question, my approach is correct, right?)
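For concreteness, a minimal sketch of the namespaced-key token pattern described above, assuming the Python redis-py client; the key scheme, token format, and TTL are made up for illustration, not anything Redis prescribes:

    import secrets
    import redis

    r = redis.Redis()

    def issue_token(user_id: int, ttl_seconds: int = 86400) -> str:
        # Namespacing is just a key-naming convention, e.g. "access_token:<user_id>".
        token = secrets.token_urlsafe(32)
        r.set(f"access_token:{user_id}", token, ex=ttl_seconds)
        return token

    def check_token(user_id: int, token: str) -> bool:
        # The request already carries the user ID, so a single GET is enough;
        # no SCAN over the keyspace is needed.
        stored = r.get(f"access_token:{user_id}")
        return stored is not None and secrets.compare_digest(stored.decode(), token)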
00:48:57 GMT <*> OverCoder targets scalability and performance
00:49:08 GMT I mean I have SQL but I think redis can perform better
00:53:35 GMT storing IPs of recent views is definitely a good use-case, access tokens probably too, but dunno without more deets i think
00:55:05 GMT hmm, okay thanks :v
00:55:31 GMT if you wanna track forever you could just add IPs to a set for recent views
01:01:11 GMT nah not forever
01:01:46 GMT if the user requests the snippet again after, say, an hour, then they really are looking at it, and it's worth a view count increment :v
01:03:07 GMT could just, like, "SET articles:<id>:recent_viewers:<ip> date", then EXPIRE that key after an hour
01:03:31 GMT then when recording, if the key exists, don't record the view
01:03:35 GMT ezpz
01:06:50 GMT yeah
01:06:53 GMT that's what I'm planning to do
13:41:46 GMT <_Wise__> hi
14:31:50 GMT hey
14:32:10 GMT could somebody help me with what would be the best cluster/replication setup in my case
14:33:35 GMT from what i gather, when i'm running in cluster mode, when my master fails, a random slave will become the new master
14:33:53 GMT if i have multiple masters, they just share the data, so failure of one master will result in partial loss
14:34:24 GMT but i have multiple datacenters, and i want to minimize data loss in case the connection between datacenters is lost
14:34:59 GMT so practically, i want each datacenter to have a predefined master
14:35:19 GMT and the masters are replicas of each other
14:35:36 GMT is it possible to achieve something like that? some replication+cluster combo
14:37:28 GMT slave -> cluster master -> replication master hierarchy
14:40:12 GMT or maybe some other way to achieve higher consistency between datacenters
15:20:12 GMT i'm a super nub, just installed redis for the first time, and i'm experiencing an issue with an ajax query on the https version of my site
15:20:53 GMT i'm reading that redis doesn't work with SSL out of the box, and i'm seeing that i should use stunnel or spiped
15:22:19 GMT what does ajax on a website have to do with redis not supporting ssl?
15:23:01 GMT the ajax returns nothing on https, i'm wondering if that issue is related to redis not supporting SSL
15:23:07 GMT your website's backend connects to redis on localhost, no ssl involved there, nor required, nor supposed to break anything
15:23:15 GMT ah
15:23:32 GMT so http/https should make no difference if it's hosted on the same server
15:23:42 GMT PHP?
15:23:44 GMT yes
15:23:53 GMT yeah, blank pages totally sound like PHP
15:23:56 GMT haha
15:23:59 GMT check your error logs
15:24:09 GMT ok thank you
15:24:19 GMT or turn on display_errors while developing
18:58:50 GMT i want a tshirt
19:37:53 GMT ok, data structure question.
19:39:00 GMT i'm keeping a history of chat messages in a list, trimmed to a certain length via RPUSH/LTRIM
19:39:11 GMT each entry is a json object describing the message
19:39:49 GMT the problem though is that i sometimes need to remove random elements from this list, and the only way to do that is to loop over every one and LREM that specific message, which is O(n)
19:40:21 GMT is there a cleaner way to do that? e.g. store messages in an ordered manner which makes deleting random entries better than O(n)?
19:42:41 GMT primary access pattern is store/retrieve, but i'd really like clearing to be fast as well
21:25:09 GMT a sorted set maybe?
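A sketch of the per-IP view filter settled on above, again assuming redis-py and hypothetical key names; SET with ex and nx does the quoted SET-then-EXPIRE pair atomically in a single command:

    import redis

    r = redis.Redis()

    def record_view(article_id: int, ip: str) -> None:
        # If the marker key already exists, this IP viewed the article within
        # the last hour, so the counter is left alone. nx=True sets the key
        # only when absent; ex=3600 expires it after an hour.
        marker = f"articles:{article_id}:recent_viewers:{ip}"
        if r.set(marker, 1, ex=3600, nx=True):
            r.incr(f"articles:{article_id}:views")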
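And the list-based history being asked about, sketched with an assumed key name so the O(n) removal is visible; LREM compares against elements one by one until it finds a match:

    import json
    import redis

    r = redis.Redis()
    HISTORY_KEY = "chat:history"  # hypothetical key name
    MAX_LEN = 1000

    def store_message(msg: dict) -> None:
        r.rpush(HISTORY_KEY, json.dumps(msg))
        r.ltrim(HISTORY_KEY, -MAX_LEN, -1)  # keep only the newest MAX_LEN entries

    def delete_message(msg: dict) -> None:
        # O(n): Redis walks the list comparing values; this also relies on
        # json.dumps reproducing the stored serialization exactly.
        r.lrem(HISTORY_KEY, 1, json.dumps(msg))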
21:58:30 GMT minus: i don't think a sorted set gets me anything though, the score would have to be like the timestamp of the message, and i still have the same problem of needing to loop through all the messages one-by-one to find the one(s) i wanna delete
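One way the sorted-set suggestion can still pay off: key messages by an ID instead of matching on the whole JSON blob. A sorted set orders IDs by timestamp and a hash holds the bodies, so deleting a known ID is a direct ZREM plus HDEL (O(log n) for the ZREM) rather than a list walk. A sketch under those assumptions, with all key names made up:

    import json
    import time
    import redis

    r = redis.Redis()
    IDS_KEY = "chat:history:ids"      # sorted set: message ID scored by timestamp
    BODIES_KEY = "chat:history:msgs"  # hash: message ID -> JSON body
    MAX_LEN = 1000

    def store_message(msg_id: str, msg: dict) -> None:
        r.zadd(IDS_KEY, {msg_id: time.time()})
        r.hset(BODIES_KEY, msg_id, json.dumps(msg))
        # Trim the oldest entries beyond MAX_LEN from both structures.
        excess = r.zcard(IDS_KEY) - MAX_LEN
        if excess > 0:
            old_ids = r.zrange(IDS_KEY, 0, excess - 1)
            r.zremrangebyrank(IDS_KEY, 0, excess - 1)
            r.hdel(BODIES_KEY, *old_ids)

    def delete_message(msg_id: str) -> None:
        # Direct removal by ID: no scan over the history.
        r.zrem(IDS_KEY, msg_id)
        r.hdel(BODIES_KEY, msg_id)

    def recent_messages(n: int = 50) -> list:
        ids = r.zrange(IDS_KEY, -n, -1)
        if not ids:
            return []
        return [json.loads(body) for body in r.hmget(BODIES_KEY, ids) if body]

The trade-off is that callers must track message IDs; if deletion requests only carry the raw message text, some lookup is still unavoidable.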