Why only 10 to 12 waypoints for optimization?

It’d represent a major breakthrough in maths, one with potentially significant impacts in cryptography (specifically, breaking other people’s encryption).

It’s an NP-Hard problem.

(I don’t do this level of math/stats. I only vaguely understand what I’m talking about :D)

5 Likes

Problems that can be solved in polynomial time are classified as P. The traveling salesman problem (in its decision form) is NP-complete: it belongs to NP (non-deterministic polynomial), and every other problem in NP can be reduced to it in polynomial time. So, if you solve any NP-complete problem in polynomial time, you have solved them all (polynomial reduction + polynomial solve = still polynomial).

Further, if you were to accomplish such, you would have proven that P = NP. At this time, whether or not P = NP is one of the biggest open questions in math. Most with knowledge of the topic believe that P ≠ NP, but a proof does not currently exist (despite a lot of effort from brilliant mathematical minds). The belief that they are probably not equal is based on some of the implications if P does equal NP: some hard-to-believe things would have to be true. For example, if P = NP then any problem whose solution can be verified in polynomial time can also be solved in polynomial time. Applied to encryption, this would mean that finding the key would be roughly as easy as verifying a key.

So, if you solve the general traveling salesman problem in polynomial time, you are the only person on earth who knows how to easily break many current encryption schemes. Hence the potential interest from an organization like the NSA =)

5 Likes

Thank you for writing this post, but all of it is literally hot air for me, because there’s no tangible substance to actually wrap my head around. Not trying to be insulting, it’s just … lots of words that don’t really mean anything in the real world, or I’m missing the connection.

According to you, an NP problem is literally just “maths that takes long to calculate”.

Maths that takes long to calculate takes long to calculate. Why should every bit of maths that takes long to calculate be translatable into any other bit of maths that takes long to calculate, and why would that mean everything can be calculated in linear time?

How does any of this make sense?

It doesn’t.

1 Like

I’ll admit it’s probably the “strangest” math I have ever done. It is hard to understand in as brief a statement as I made (even if I was the best at explaining said topic in the world, and I’m not), so don’t feel bad. I’ll gladly take the blame for being confusing :stuck_out_tongue:

I’ll try again, though…

NP-complete problems are not just “maths that takes long to calculate”; they are problems whose candidate solutions can be verified in polynomial time, but for which no polynomial-time solving algorithm is known, and to which every other problem in NP can be reduced. You can look up the formal definition if you wish.

But another way to prove a problem is NP-complete is through reduction. I will now attempt to make this clear and tie it to the consequences I stated before.

First, we need to discuss time-complexity (how long it takes an algorithm to complete, usually estimated in terms of the problem size). If our problem size is n, and our time-complexity is (n^n + 100*n), then as n grows very large the result is governed by the n^n term and the 100n term stops mattering. Thus we say the time-complexity of this problem is O(n^n) (this is “Big-O” notation). This is a simplification, but a justified one: we only care about the 100n term when comparing against another O(n^n) algorithm/problem.
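To make the dominance concrete, here’s a quick Python sketch (the function name `t` is just made up for illustration) showing how tiny the 100n term’s share of the total becomes as n grows:

```python
# Sketch: the lower-order term becomes negligible as n grows.
# t(n) = n**n + 100*n, which is O(n**n).

def t(n):
    return n**n + 100 * n

for n in (2, 5, 10):
    share = (100 * n) / t(n)  # fraction of the total contributed by the 100n term
    print(n, share)
```

At n = 2 the 100n term is almost everything; by n = 10 it contributes less than a millionth of the total.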

In my opinion, the concept of “reduction” is the hardest point to grasp because it’s very abstract and situational since we are potentially talking about quite different problems at first glance. The idea of reduction is nicely illustrated here:

Let’s say we have 2 problems/algorithms:
A: NP-Complete
B: ???

We could apply the definition of NP-completeness directly to classify B, but it’s usually easier to try reduction. If we can reduce A to B using a procedure that takes only polynomial time (and B is itself in NP), then B is NP-complete too: any fast solver for B would solve A as well. Just as in my time-complexity example, the reduction cost ends up insignificant for big n, since both problems require super-polynomial time to solve. Both become (polynomial + super-polynomial = super-polynomial).

We know that P is a subset of NP. We can reduce a P problem to an NP-complete problem (pointless as that would be, since the time-complexity gets worse), but no one knows how to solve an NP-complete problem in polynomial time. Such an algorithm might exist (it has not been proven impossible), but we don’t know one. Thus we just don’t know whether P = NP.

If P does equal NP, then (by definition) every NP problem can be solved in polynomial time. Solving an NP problem would just be: reduce it, in polynomial time, to a problem with a known polynomial-time algorithm, then solve. The total time-complexity would be (polynomial + polynomial = polynomial). So if you devise an algorithm that solves the traveling salesman problem in polynomial time, you have solved all NP-complete problems in polynomial time and proven that P = NP.

TL;DR
Don’t bother trying to solve the general traveling salesman problem in polynomial time… it’s either impossible or among the hardest problems anyone has ever considered.

4 Likes

I’m trying to research this. I’m out of the house currently.

Right now I fail to understand what’s non-deterministic about the TSP. Makes no sense. Equal input = equal output. Roads and cities don’t change “randomly” while trying to find the solution.

A non-deterministic solving algorithm changing its results for some reason … why should that make sense? It’s like adding a random number generator, which is deterministic, or letting an AI come up with one, which is also deterministic but might change over time assuming it also learns. Still deterministic, though.

In the end there simply isn’t any problem which can’t be solved by piling enough specialized hardware on it, brute forcing a result, even if it requires the size of the moon… no?

This all sounds more and more like someone came up with an irrelevant problem fooling everyone into believing it matters. Purely theoretical with no base in reality… no?

Doctor V, @Valerie_Valate, said P is not NP!

Oh, my latest source, reading this now:
https://www.ibm.com/developerworks/community/blogs/jfp/entry/no_the_tsp_isn_t_np_complete?lang=en

The “non-deterministic” comes from the theoretical machine model used in the definition: a non-deterministic Turing machine, as opposed to the ordinary deterministic one. An NP problem is one that can be solved in polynomial time by a non-deterministic machine; equivalently, one whose proposed solutions can be verified in polynomial time by a deterministic machine. It says nothing about the problem itself being random. I see no benefit in explaining the terms beyond this, as it’s a deeper dive into computer science theory than needed here.

If the problem has non-linear time-complexity, then each successive unit increase in the problem size results in a greater increase to the average required time to complete.

Let’s say we use the obvious solution to the TSP problem, with time complexity O(n!). Further, let’s say we can solve TSP problems of size n=100 in an average time of a day of computation.

Then, let’s say we are granted a new computer that is twice as fast. We can now solve the size n=100 TSP problem in a half day!

So, now we decide to try size n=101 TSP problem…
this will take 101 times longer than the n=100 size problem. So even with our doubling in computational power, it still takes 50.5 days to solve. How many times do we need to double computer power if our goal is to get to n=200 (and have it finish in our lifetimes)?
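We can actually estimate the answer with a few lines of Python (standard library only): the work ratio between the n=200 and n=100 instances of an O(n!) algorithm is 200!/100!, so the number of doublings needed is its base-2 logarithm.

```python
import math

# How many doublings of compute keep the runtime constant when going
# from n=100 to n=200 with an O(n!) algorithm?
# Work ratio = 200! / 100! = the product of 101 through 200.
log2_ratio = sum(math.log2(k) for k in range(101, 201))
print(round(log2_ratio))  # about 721 doublings of computer speed
```

Over seven hundred doublings. Moore’s law is not going to rescue us here.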

edit: misread last part, deleted comment that doesn’t apply

2 Likes

And why does this really matter in any practical sense?

I’m still missing how one can conclude that all the NPs are interchangeable. How can a specific solution to one problem be applied to a different problem with its own specific solution?

We can solve TSP for low numbers. We can crack weak encryption by brute force. How does solving the TSP at any number of cities, within reasonable time, allow us to crack passwords faster? It seems absurd to assume that one solution (aka optimization?) works for everything.

Sorry for not really diving into your comments, but I will properly digest them once I have gathered a minimum amount of information required. There are too many holes, so I’m asking first. I know I could google, but you’re LIVE and for once that’s better than googling. Also I’m googling anyway, so … : D

Another: Does it mean that, if I can come up with faster solutions to problems that take a lot of time to process, i win?

Does it count if problems are solved geometrically and analytically, instead of being based on pure algorithmic approaches?

Am i asking for too much? D:

There’s a difference between finding a solution to a problem using a deterministic algorithm and proving a possible solution is actually one.

P-class problems are those for which a solution can be found in polynomial time.
NP-class problems are those for which checking whether a valuation of the problem variables (a candidate solution) is valid can be done in polynomial time.

In each case, polynomial time means that, given a size metric over the problem instances, the time/work required to find a solution (P) or verify a valuation (NP) is bounded by some fixed polynomial function of the size of the problem.

P = NP is a possibility. It would mean that problems whose direct search space is exponential, but whose candidate solutions can be verified in polynomial time, can always be solved in polynomial time via some problem transformation. Many NP problems have been proven to transform into one another in polynomial time; if P = NP, then all of them would become solvable in polynomial time, assuming of course we can transform the NP problem into one whose solution space can be explored in P time.
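To make the verification side tangible: for the decision version of TSP (“is there a tour of total length ≤ k?”), checking a proposed tour against the budget is cheap. This is only an illustrative sketch; `verify_tour` and the toy distance matrix are made up:

```python
def verify_tour(dist, tour, budget):
    """Check a proposed tour (the certificate) in polynomial time.

    dist[i][j] is the distance between cities i and j.
    Returns True iff the tour visits every city exactly once and its
    total length (including the return leg) is <= budget.
    """
    n = len(dist)
    if sorted(tour) != list(range(n)):
        return False  # not a permutation of the cities
    length = sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))
    return length <= budget

dist = [[0, 2, 9], [2, 0, 6], [9, 6, 0]]
print(verify_tour(dist, [0, 1, 2], 17))  # 2 + 6 + 9 = 17 -> True
```

The check is linear in the number of cities; it’s *finding* a tour under budget that blows up.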

2 Likes

How can it be hard to verify a solution for TSP? Just travel the resulting route?
Or is it about verifying that the other solutions actually are indeed worse?

Best example I could find of a reduction involving TSP:
https://opendsa-server.cs.vt.edu/ODSA/Books/Everything/html/hamiltonianCycle_to_TSP.html
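The gist of that reduction, sketched in (made-up) Python: give edges of the original graph weight 1 and non-edges weight 2; the graph has a Hamiltonian cycle exactly when the resulting TSP instance admits a tour of total weight n.

```python
def hamiltonian_to_tsp(n, edges):
    """Reduce Hamiltonian-cycle on an n-vertex graph to TSP.

    Edges of the graph get weight 1, non-edges weight 2.
    The graph has a Hamiltonian cycle iff the resulting TSP
    instance has a tour of total weight exactly n.
    """
    edge_set = {frozenset(e) for e in edges}
    dist = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            w = 1 if frozenset((i, j)) in edge_set else 2
            dist[i][j] = dist[j][i] = w
    return dist, n  # the TSP instance and the target tour length
```

Building the matrix is polynomial (O(n^2)), so a fast TSP solver would give a fast Hamiltonian-cycle solver for free.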

Lunch time!

1 Like

I think I saw that one mentioned when I was googling my idea of a Voronoi-based solution, which actually exists!

1 Like

Polynomial time is not the same as reasonable time. Many exponential problems are easier to solve than polynomial problems for small instances. The thing is, some mechanisms (encryption among them) can only be trusted on the assumption that there is no way to find a solution in sub-exponential time.
If solutions can be found in polynomial time, then the mechanism can no longer be trusted. However, proving something does not exist is a very hard task. That’s why it matters in a practical sense.

Most exponential problems can be attacked using a mix of analytical (see linear programming), random (genetic algorithms), and brute-force-with-guessing (partial exploration) methods. Those are still algorithms, and deterministic: e.g. a genetic algorithm is based on a seed that will always produce the same “search” for the same problem.
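Here’s a toy illustration (function name and numbers invented) of why a seeded randomized search is still deterministic: feed it the same seed and it performs the exact same “random” search and returns the exact same answer.

```python
import random

def random_search_tsp(dist, seed, iters=1000):
    """Seeded random search for a short tour; deterministic for a fixed seed."""
    rng = random.Random(seed)  # private generator: reproducible, isolated
    n = len(dist)
    best_tour, best_len = None, float("inf")
    for _ in range(iters):
        tour = list(range(n))
        rng.shuffle(tour)
        length = sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))
        if length < best_len:
            best_tour, best_len = tour, length
    return best_tour, best_len
```

Same seed, same sequence of shuffles, same result, every run. No polynomial-time *guarantee*, just a heuristic that often does well.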

2 Likes

The problem of TSP is not to find a solution, it is to find the best one. Any ordering of the towns is a solution.
So you need to prove that a given tour is actually the best one, in polynomial time, while the space of solutions has size n! (n being the number of towns), which grows even faster than exponentially (it’s the number of orderings of n towns). If you just compare one solution against all the others, you spend n! time, so that’s no help.
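You can see the n! blow-up directly in a naive brute-force solver (a sketch with a made-up distance matrix as input; fixing the start city trims the search to (n-1)! orderings, which changes nothing asymptotically):

```python
import itertools

def brute_force_tsp(dist):
    """Try every ordering: O(n!) tours, O(n) work per tour."""
    n = len(dist)

    def tour_len(p):
        tour = (0,) + p  # fix city 0 as the start to skip rotations
        return sum(dist[a][b] for a, b in zip(tour, tour[1:] + (0,)))

    best = min(itertools.permutations(range(1, n)), key=tour_len)
    return (0,) + best, tour_len(best)
```

Fine for a handful of towns, hopeless past a couple dozen.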

So the real problem isn’t actually finding the solution to a problem like TSP, but finding a solution for the problem of finding the best solution…

… wait, aren’t these related?

1 Like

Bonus points for mentioning genetic/evolutionary algorithms! Fun stuff…

In general, the problem of AI can be imagined as searching the (near) infinite space of possible solutions for the best one you can find in the time given. I hadn’t thought of mentioning this approach as a “solution”… well done =)

1 Like

The real question is: can we prove that some problems admit only exponential-time search algorithms, and no polynomial-time ones?

That’s called a meta-algorithm (an algorithm that works on a space of algorithms).
The solving methods I mentioned earlier are actually meta-algorithms: they produce an algorithm that may be able to find solutions for a given class of problems.

Fun fact: finding the best algorithm for a class of problems can itself be done by an algorithm. So if we could explore the space of algorithms in polynomial time and verify that an algorithm is the best one in polynomial time, we would have P = NP.

2 Likes

Love the discussion, this reminds me of university; I had two semesters of complexity science, and the P and NP classes are just starters :stuck_out_tongue: … however, let’s just wait for a working quantum computer and algorithm.

3 Likes

Traveling salesman problem simplified:

The number of possible solutions to the traveling salesperson problem is the factorial of the number of stops.

That is, the computer has to determine the length of the trip for every possible solution.

So if there are 5 stops, you have to calculate the length of the route for 5! different possible trips.

5! = 120 possibile routes.

10! = 3,628,800 possible routes (10 stops for the salesman)

15! = 1.3 trillion possible routes (15 stops for the salesman).

You can see how fast the number of possible routes is growing… that’s the problem.
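Those numbers are easy to check with Python’s standard library:

```python
import math

# The search space for a traveling salesman with k stops is k! routes.
for stops in (5, 10, 15, 20):
    print(stops, math.factorial(stops))
# 5!  = 120
# 10! = 3,628,800
# 15! = 1,307,674,368,000  (about 1.3 trillion)
# 20! = 2,432,902,008,176,640,000
```

Five more stops after that and you’re past the number of stars in the observable universe.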

It’s an easy problem to state, but so computationally expensive as to be functionally unsolvable at scale.

Beat it, get famous…

4 Likes

If the calculation was done in a separate thread which doesn’t affect the rest of the client, it would solve the problem. The client could just show a progress bar which says something like
“Optimizing Route [||||||||__]” while the calculation runs in the background.

I have heard that EvE uses an old version of python with no multi-threading support. I don’t know how true that is.

Someone could probably come up with a third party tool to do this.

You could let the tool optimize your ridiculously complicated route in the background while you continue to play the game.

We don’t have access to the route in the ESI yet. All we can do is add waypoints and delete the route.
Sure, we can fetch all the contracts we accepted and the present location, and submit a new optimized route, but we can’t retrieve the route yet.