I agree with what has been said here so far.
It looks like someone has forgotten the most important lesson of running your stuff on big AWS clusters: you need to be careful about what you are doing - with great power comes great responsibility™. Now, once more, the good guys have to suffer because someone did something wrong (perhaps not even deliberately).
And I echo what @EVE_Ref said: this data is updated only rarely - it is about as static as data gets. Even the endpoint specification declares that it is invalidated only once a day!
I would not go so far as to say we need streaming. For order data, yes, it would make a lot of sense - and I can imagine plenty of ESI-based applications where event streaming could lower your AWS cluster's load significantly. For the history data, to be honest, I don't see much benefit in streaming.
It would be sad to have the market history data down for long. For my (corp) application, it is an important source of market metrics: several calculations and billings depend on long-term market data, which so far we have derived from the market history endpoint. So you can be sure that downtime of the history endpoint also has "real impact" on gameplay on Tranquility.
In general, we have too few market metrics at our hands anyway. For orders it's rather okay, because we mainly have access to the full order book. For past data, however (what happened in the market yesterday and earlier), and for volume information (how many trades were made yesterday), I could imagine a lot more "basic metrics" being made available. That would allow traders to tailor their market strategies much better to the needs of their customers. If you are interested in that direction, we should have a separate talk on the matter.
@CCP_Zelus & @CCP_Devs: Thanks for your efforts on this topic. I can imagine how nerve-wracking your situation is. Be assured that we a) reject this kind of abusive usage of the API, b) hope that you will find a suitable solution as soon as possible, but c) understand that the latter does not come out of the blue in no time!
Let us know if we can be of help somehow (even if it's just beta-testing).
PS: In case you also intend to redesign the endpoint from a data-provisioning perspective: this API cries out to be used in a replication-only scenario. I can hardly imagine an application that fetches the data selectively on-demand instead of caching it locally. What you typically want to do is something like "data digging" - and for that you want as much data as possible locally. The current design of the endpoint makes replication especially hard to implement, because both parameters, region_id and type_id, are mandatory. Essentially, this forces everyone who wants to do "data digging" to implement an O(regions × types) nested loop and scrape your endpoint pair by pair. The result is thousands of small requests, one per (region_id, type_id) tuple - in fact 4,955,216 of them right now, as we have 44,243 types (as of today) and 112 regions. Especially after downtime, when the statistics are updated, everyone hammers the endpoint at the same time. Even worse, to stay up-to-date you have to "poll" the endpoint for the whole enchilada every day. Yet if yesterday's replication worked out, 98% of the fetched data is thrown away: the data for all earlier days typically hasn't changed.
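To make the pain concrete, here is a minimal sketch of what such a full scrape effectively looks like today (Python with requests, against the current public ESI routes; pagination, rate limiting, and error handling deliberately left out):

```python
import requests

ESI = "https://esi.evetech.net/latest"

def scrape_full_history():
    """Naive full replication: one request per (region_id, type_id) pair."""
    # All public region IDs (currently ~112 of them).
    region_ids = requests.get(f"{ESI}/universe/regions/").json()
    history = {}
    for region_id in region_ids:
        # Type IDs traded in this region (paginated in reality; first page only here).
        type_ids = requests.get(f"{ESI}/markets/{region_id}/types/").json()
        for type_id in type_ids:
            # One tiny request per (region_id, type_id) tuple -> millions in total.
            resp = requests.get(
                f"{ESI}/markets/{region_id}/history/",
                params={"type_id": type_id},
            )
            if resp.ok:
                history[(region_id, type_id)] = resp.json()
    return history
```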
What I want to say is: consider inverting the mandatory parameters[1], e.g. by making a selection by market day (single value) the only mandatory field (ideally as a URL path attribute). The endpoint would then return all history data for all types and all regions for the requested day. All those "notorious scrapers" like us would then send you only a single request for the previous day - and that only once daily. On your side, you could satisfy all these requests from one single cache object: have your gateway dispatch the request to a (static) file in an S3 bucket - then let that approach scale! Moreover, since only the newest of roughly 365 daily files ever changes, this would save you roughly 364/365 (i.e. roughly 99.7%) of the payload bandwidth on these replication requests every day (compression in transit not considered). Additionally, if historic data (of previous days/months) is stored on plain disk space (which costs virtually nothing), you could even consider prolonging the retention period of your market history data: only "full reload" cases would request the old "files" - "delta scrapers" would only touch the most recent one.
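From the consumer side, daily delta replication would then collapse into something like this sketch (the /markets/history/{date}/ route is purely hypothetical - it just illustrates the proposed shape):

```python
import datetime as dt
import requests

ESI = "https://esi.evetech.net/latest"

def fetch_yesterdays_history():
    """Delta replication under the proposed design: one request per day.

    NOTE: /markets/history/{date}/ is a hypothetical route. It would return
    the history rows of ALL regions and ALL types for one market day, so the
    gateway can answer every scraper from a single cached (static) object.
    """
    day = (dt.date.today() - dt.timedelta(days=1)).isoformat()
    resp = requests.get(f"{ESI}/markets/history/{day}/")
    resp.raise_for_status()
    # e.g. a list of rows like:
    # {"region_id": ..., "type_id": ..., "date": ..., "average": ..., "volume": ...}
    return resp.json()
```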
[1] That is to be considered risk-free from a data-authorization perspective, as both the set of type_ids and the set of region_ids are public knowledge.