Not being able to figure things out is one thing, but I keep finding broken reference samples. The calibration set is seriously flawed if something like 5% of its results are wrong.
I've been a light dabbler on and off with Eve for many years, and was delighted to see this "Project Discovery" add-in recently. I've been doing the Planet Transit analyses for about a week now and I think it's fantastic. However, it seems the Evaluation Set still has some problems. I have noticed many questionable analyses (some I am convinced are flat wrong), but never had a good way to demonstrate this. I have just come across a perfect example that is easily observable. It looks as though the initial reference analysis may have been correct, but something got corrupted and shifted it. Anyway, it looks like I am not the first one to come across these problems; I just wanted to provide another example to help improve the experience.
I was also thinking about a way to better track inconsistencies: either a button you can click to flag an analysis for review, or a serial number that can be used to reference any particular data set.
Actually, I never took notice before, but it appears there already is a serial number for each data set (I assume, anyway) in the bottom right corner. It only becomes visible after an analysis is completed, presumably to mitigate nefarious activity.
The serial number for the data set I described above is 200218214.