Best practice for making multiple ESI requests


I’m developing tools that rely on requesting ESI multiple times depending on user behaviour.

I was wondering what the best practice is about it.
Is it better to avoid requests whenever possible?

For instance, to extract the prices of certain items, I only request once, and if the file storing the results is older than 10 minutes, I request again.
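That freshness check can be sketched in a few lines; a minimal sketch in Python, where the cache path, the 10-minute TTL, and the `fetch` callable are all illustrative:

```python
import json
import os
import time

CACHE_TTL = 10 * 60  # the 10-minute window from the example above

def is_fresh(path, ttl=CACHE_TTL):
    """True if the cache file exists and is younger than ttl seconds."""
    try:
        return (time.time() - os.path.getmtime(path)) < ttl
    except OSError:
        return False  # missing file counts as stale

def load_or_fetch(path, fetch, ttl=CACHE_TTL):
    """Return cached JSON if fresh; otherwise call fetch() and cache the result."""
    if is_fresh(path, ttl):
        with open(path) as f:
            return json.load(f)
    data = fetch()
    with open(path, "w") as f:
        json.dump(data, f)
    return data
```

Any request function can be passed as `fetch`, so the caching logic stays independent of how the data is retrieved.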

But now I’m creating a tool that needs Solar System information: planets and adjacent Solar Systems.
There are 7k+ Solar Systems, and the way ESI works demands multiple requests for one single piece of information:
For instance, searching for a system by name requires requesting url/search/, getting the IDs, filtering the IDs, then requesting again with the filtered ID to get detailed information.
Same for adjacent Solar Systems: on top of my previous example, I need to get the stargate IDs, then extract each system ID, then extract each system name.
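The fan-out described above (system → stargates → destination systems) looks roughly like this; the `/universe/...` routes are real ESI endpoints, but the injectable `get` parameter is my own scaffolding, and the IDs used in testing are made up:

```python
import json
from urllib.request import urlopen

ESI = "https://esi.evetech.net/latest"

def esi_get(path):
    """One ESI GET, parsed as JSON (no caching or error handling here)."""
    with urlopen(ESI + path) as resp:
        return json.load(resp)

def adjacent_systems(system_id, get=esi_get):
    """One piece of information costs 1 + N requests: the system itself,
    then one request per stargate to read its destination system."""
    system = get(f"/universe/systems/{system_id}/")
    return [
        get(f"/universe/stargates/{sg}/")["destination"]["system_id"]
        for sg in system.get("stargates", [])
    ]
```

Injecting `get` keeps the request-chaining logic testable without touching the network.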

Would it be better for me to build my own JSON with all the information I really need, and work with that?
Or is requesting ESI multiple times not that big of a deal?


For stuff like Systems and Planets that are pretty much static, you should cache the results and only update once in a while (once a week/month, or whenever the EVE server version changes, something like that).
An alternative to getting the data from ESI is the SDE, also available in a host of formats made by Steve.

For non-static data, you can update as soon as the expires header allows.
There is an error limit too.
Most of the details are covered in the docs, which also have links to all the other important pages regarding ESI.
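Since the `Expires` header is a standard HTTP date, working out how long to hold off before the next request is straightforward; a small sketch:

```python
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime

def seconds_until_stale(headers):
    """Seconds to wait before re-requesting, per the Expires response header."""
    expires = parsedate_to_datetime(headers["Expires"])
    return max(0.0, (expires - datetime.now(timezone.utc)).total_seconds())
```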

The #esi channel on Tweetfleet Slack (also explained in the docs) is a good place to ask questions too.


Thank you very much for all this information.

So I guess that requesting ESI for things like prices, which change very often, is not that big of a deal.
On the contrary, for static data like Solar Systems, it’s best that I build my own resource from ESI and the SDE,
extracting only what I really need and rebuilding with it, like so:

    {
        "system_id": 3000000,
        "name": "",
        "planets": [{"id": 0, "name": "", "type": "", "radius": 0}],
        "security": 0,
        "adjacent_systems_id": [0, 0, 0]
    }

For that I’ll need to create a tool of my own in C# or another language.
I’ve just used Zifrian’s program to turn the static files into SQLite, but it did not extract planet data.
[EDIT] Just found out that planets have been extracted into the “Denormalized” table.

The question now is whether it would be better as a JSON file or a DB file like SQLite.
I bet SQLite would be best, as there are over 7k Solar Systems. :smiley:
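For scale, 7k rows is tiny for SQLite; with an index on the name column, lookups are effectively instant. A sketch of a schema matching the JSON shape above (table and column names are my own invention):

```python
import sqlite3

SCHEMA = """
CREATE TABLE systems (
    system_id INTEGER PRIMARY KEY,
    name      TEXT,
    security  REAL
);
CREATE INDEX idx_systems_name ON systems (name);
CREATE TABLE adjacency (
    from_id INTEGER REFERENCES systems (system_id),
    to_id   INTEGER REFERENCES systems (system_id)
);
CREATE INDEX idx_adjacency_from ON adjacency (from_id);
"""

def open_db(path=":memory:"):
    """Create a fresh database with the schema above."""
    conn = sqlite3.connect(path)
    conn.executescript(SCHEMA)
    return conn

def neighbours(conn, name):
    """Adjacent-system names for a given system name, using both indexes."""
    return [row[0] for row in conn.execute(
        """SELECT s2.name
           FROM systems s1
           JOIN adjacency a ON a.from_id = s1.system_id
           JOIN systems s2 ON s2.system_id = a.to_id
           WHERE s1.name = ?""", (name,))]
```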

I’m not using Slack, otherwise I’d have joined.
Thank you again!

So I guess that requesting ESI for things like prices…

Yeah, hit the price endpoints as often as the expires header allows. It’s perfectly fine. You only need to stop/throttle if you hit the error limit. Implementing back-off mechanics in response to the error limit is a must (you will get temporarily, or as a repeat offender permanently, banned from ESI if you do not).
A small note on the prices endpoint: you may want to add a small delay on top of the expires time, or you may end up getting old results for some of the pages.
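For the back-off part, ESI reports its error budget in the `X-Esi-Error-Limit-Remain` and `X-Esi-Error-Limit-Reset` response headers; a sketch of a throttle check (the `floor` threshold is an arbitrary safety margin of mine):

```python
def throttle_seconds(headers, floor=10):
    """How long to pause: if few errors remain in the current window,
    wait out the rest of the window (the reset value is in seconds)."""
    remain = int(headers.get("X-Esi-Error-Limit-Remain", 100))
    reset = int(headers.get("X-Esi-Error-Limit-Reset", 60))
    return reset if remain <= floor else 0
```

Calling this after every response and sleeping for the returned number of seconds keeps a client clear of the 420 responses.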

Also, you’re missing out by not being on the ESI Slack. We have the CCP devs for ESI there, and a great third-party community of devs using ESI.

If you are getting market data for any real number of items, just get everything for the region. That cache tends to be prepopulated (because of the rest of us), so it’s minimal extra DB load for CCP.

I can pull everything in a couple of minutes.
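Pulling a whole region is mostly a matter of walking the `X-Pages` response header; a sketch with an injectable `get` callable (mine, for testability) returning `(body, headers)` tuples. 10000002 is The Forge, Jita’s region:

```python
def fetch_region_orders(region_id, get):
    """Fetch every page of market orders for a region. ESI reports the
    total page count in the X-Pages header of any page's response."""
    body, headers = get(f"/markets/{region_id}/orders/?page=1")
    orders = list(body)
    for page in range(2, int(headers.get("X-Pages", "1")) + 1):
        page_body, _ = get(f"/markets/{region_id}/orders/?page={page}")
        orders.extend(page_body)
    return orders
```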

I’m only extracting some item types, and only from Jita.
The purpose is to give a Min and Med value for my users on these items.

I will check those expires headers and errors, but I’ve never gotten one as far as I know.
Is 10 minutes enough of a limit?

Right now, I’ve converted the SDE files into JSON and am trying to recreate a DB of my own, but it takes way too much time, as my code loops through the HUGE “Denormalized” JSON file for every single system.
I’m trying to optimize this, as, if my calculations are correct, it would take 15 hours to create my own DB. :smiley:
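A likely fix for the 15-hour estimate: group the big file by `solarSystemID` once, so each system becomes a dict lookup instead of a rescan of all 240k rows. A sketch:

```python
from collections import defaultdict

def index_by_system(rows):
    """One pass over all 'Denormalized' rows; afterwards, fetching any
    system's rows is O(1) instead of a scan of the whole file."""
    index = defaultdict(list)
    for row in rows:
        index[row["solarSystemID"]].append(row)
    return dict(index)
```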

There’s always my converted DB in SQLite :smiley: which Golden Gnu linked above.

The conversion process does take a while (around 40 minutes per version, all in), but I have it pretty automated these days.

The alternative method would be to process the original YAML, file by file. It’ll take a while, mostly due to the number of files, but you could build a simpler data structure and dump it as JSON pretty quickly.

The limits are 100 errors in 1 minute. Anything more than that and you’ll get 420 errors for everything (until it resets; back off for a minute and you should be fine).

My mistake is probably manipulating JSON instead of a DB.
First I created DBs from the SDE files with your tool,
THEN converted those DBs to JSON files.

SDE files used:

  • mapSolarSystems (to get names/security)
  • mapSolarSystemJumps (to get adjacent systems)
  • mapDenormalized (to get planet details)

I don’t know why I’m using JSON files with so many entries…
I’m so obsessed with it.
(Probably coming from the fact that I started manipulating data in code through JSON. :D)

I know what I’m going to do:

  • First, filter Moons and Asteroid Belts out of these entries (240k!)
  • Then convert this back to JSON.
  • Then use my code :smiley:
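The filtering step can key off `groupID` (in the SDE’s `invGroups`, 7 is Planet, with Moons and Asteroid Belts under their own group IDs; worth verifying against your dump). A sketch:

```python
PLANET_GROUP_ID = 7  # invGroups: Planet; verify against your SDE copy

def planets_only(rows):
    """Drop Moons, Asteroid Belts, and everything else that isn't a planet."""
    return [row for row in rows if row.get("groupID") == PLANET_GROUP_ID]
```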

If it’s still not fast enough, I PROMISE I’ll use a DB directly. :smiley:

Keep it simple. Take the static data from Steve.
Fetch only a few tables, the ones you need. And you already found the “Denormalized” table, which is a gift. I have some queries with 3 or 4 joins into it. :slight_smile:
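A join along those lines might look like the sketch below. The column names are modelled on the common SDE conversions and the tables listed earlier in the thread; they should be checked against the actual database, since conversions differ:

```python
# groupID 7 is the Planet group in the SDE's invGroups (verify in your copy).
PLANETS_AND_JUMPS = """
SELECT s.solarSystemName,
       d.itemName        AS planet,
       j.toSolarSystemID AS adjacent_id
FROM mapSolarSystems s
LEFT JOIN mapDenormalized d
       ON d.solarSystemID = s.solarSystemID AND d.groupID = 7
LEFT JOIN mapSolarSystemJumps j
       ON j.fromSolarSystemID = s.solarSystemID
WHERE s.solarSystemName = ?
"""
```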

Then fetch market data from Steve’s API, which I discovered is more suitable and robust than ESI.

If dealing with ESI, be aware of the error codes. Sometimes you get an error 500 two times in a row, but success on the third try with a delay of 100 ms between the queries. :slight_smile:
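That retry pattern (a few attempts with a ~100 ms pause) can be wrapped once and reused; a small sketch:

```python
import time

def get_with_retry(fetch, attempts=3, delay=0.1):
    """Call fetch(); on failure, pause briefly and retry, re-raising the
    last error once all attempts are used up."""
    last_error = None
    for _ in range(attempts):
        try:
            return fetch()
        except Exception as exc:
            last_error = exc
            time.sleep(delay)
    raise last_error
```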

I’d rather make my own, even if it takes time.
Once it’s done, I’ll have one single table with everything I need in it. :slight_smile:

Same for market data: I want to use my own resource, to avoid being stuck if Fuzzwork ever goes down or anything alike.

I really need to check my error logs; I haven’t encountered an error AFAIK.

Absolutely nothing wrong with writing your own.
If I hadn’t, people wouldn’t be able to use mine :wink: Monocultures are fragile.


Quick note for anyone who comes by this thread and wants some information:

If you can, avoid JSON. :smiley:
Obviously, sometimes you just can’t avoid GETting data as JSON in the first place.
But once you’ve got it, store that data in a local DB.

Wayyyyyy faster and cleaner.

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.