A number of developments to the Twapper Keeper service have been announced on this blog, and we are aware of growing interest in it. If you are thinking of using the service and have questions, feel free to raise them here.
This entry was posted on June 11, 2010 at 2:50 pm and is filed under JISC. You can follow any responses to this entry through the RSS 2.0 feed.
Would it be possible to lift the 10,000-tweet limit on the “get tweets” API call? It would be very useful to be able to export a whole archive at once, and I suspect that is exactly what most users want when they make this call.
As it stands, the best way to get a large archive is to “divide and conquer”: repeatedly split the time range between the archive’s creation and the current time in half until every API call succeeds. I think this puts more strain on your servers than a single, large JSON response would.
This would also save the hassle of having to deal with multiple JSON files.
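The divide-and-conquer approach described above can be sketched roughly as follows. This is only an illustration of the strategy, not the real Twapper Keeper API: the `fetch_range` callable, its parameters, and its failure behavior (returning `None` when a window exceeds the limit) are all assumptions for the sake of the example.

```python
def fetch_all(fetch_range, start, end):
    """Divide and conquer: halve the time range until every call succeeds.

    `fetch_range(start, end)` is a hypothetical stand-in for the "get
    tweets" call. It should return the list of tweets between the two
    Unix timestamps, or None when the window holds more tweets than the
    per-call limit (i.e. the call fails).
    """
    tweets = fetch_range(start, end)
    if tweets is not None:
        return tweets  # this window fit under the limit
    # Too many tweets in this window: split it in half and recurse,
    # then stitch the two JSON payloads back together in order.
    mid = (start + end) // 2
    return fetch_all(fetch_range, start, mid) + fetch_all(fetch_range, mid + 1, end)
```

The recursion bottoms out once every sub-window fits under the limit, which is why the commenter notes it can mean many calls (and many JSON files to merge) for a busy hashtag.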
Because some archives are so large, a limit has to be enforced: the JSON payload would simply be too big to query and assemble on the fly (the server can’t build the whole payload in memory and serve it in a timely manner), so many smaller API calls work much better than one big one.
And since I want the API to stay as real-time against the archive as possible, I don’t envision caching results at this time.
However, I think this should be largely addressed by the enhancements I have scheduled for the API, where I will be aligning more closely with Twitter. I will paginate the results so you can simply loop through the pages, rather than having to juggle start/stop times, which I am sure can be a pain.
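The planned pagination could make the client-side loop as simple as the sketch below. Again, this is a hypothetical shape: `fetch_page` and the page-numbering convention are assumptions, since the enhanced API described above had not yet been published.

```python
def fetch_archive_paged(fetch_page):
    """Loop through pages until an empty page signals the end.

    `fetch_page(page)` is a hypothetical stand-in for one paginated
    "get tweets" call: it returns one page of tweets (a list), and an
    empty list once the archive is exhausted.
    """
    tweets = []
    page = 1
    while True:
        batch = fetch_page(page)
        if not batch:
            break  # an empty page means we've read the whole archive
        tweets.extend(batch)
        page += 1
    return tweets
```

Compared with the divide-and-conquer workaround, the client no longer has to guess time windows or retry failed calls; it just advances a page counter until the server says there is nothing left.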