In my first year at Télécom Paris, I took part with a fellow student in a very short hackathon (1 week), organized by the French company Total Energies. The hackathon was open to 1st- to 3rd-year students.
The goal was to optimize the charging schedule of electric cars to minimize cost, the challenge being that you don't know exactly in advance how much electricity will cost in 2, 6, or 24 hours.
Our solution was a very simple time-series model (an LSTM), and it earned us second place (out of 23), behind a team of 3rd-year students with a much more elaborate solution.
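To give an idea of the core of the problem, here is a minimal sketch of the scheduling side: once you have a price forecast, you charge during the cheapest predicted hours. The function name and the numbers are illustrative only, not the actual hackathon setup or our real model.

```python
# Hypothetical sketch: charge the car during the cheapest forecast hours.
# The price values below are made up for illustration.

def cheapest_charging_schedule(prices, hours_needed):
    """Return a 0/1 schedule charging in the `hours_needed` cheapest slots."""
    # Rank hours by forecast price, cheapest first.
    ranked = sorted(range(len(prices)), key=lambda h: prices[h])
    chosen = set(ranked[:hours_needed])
    return [1 if h in chosen else 0 for h in range(len(prices))]

forecast = [30, 12, 45, 10, 28, 50]  # EUR/MWh, invented forecast values
print(cheapest_charging_schedule(forecast, hours_needed=2))  # → [0, 1, 0, 1, 0, 0]
```

The real difficulty, of course, is that the forecast itself is uncertain: the schedule is only as good as the price prediction feeding it.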
First, I learned a lot, considering the hackathon lasted only a week.
But I mostly realized how AI results can be distorted by overfitting and pure luck. I don't often talk about this hackathon (compared to the 4-month Swarm Rescue challenge, for example), because every team, including ours, completely overfitted to the problem in order to win.
If you look at the detailed results, the winning team tested 176 algorithm variations in one week; our team tested 86 (and the truth is that 80 of those 86 were tested on the last afternoon of the hackathon).
In fact, in the last hour, the top few teams each improved their scores by around 0.5 points, which is the difference between 1st and 6th place.
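This "best of many tries" effect is easy to reproduce. In the toy simulation below, every variant has exactly the same true score, and only the evaluation is noisy; the more variants you test, the better your best result looks, by luck alone. All numbers are illustrative, not the hackathon's actual scores.

```python
# Toy simulation of the "luckiest model" effect: identical true skill,
# noisy evaluation, and the maximum over many tries looks inflated.
import random

random.seed(0)
TRUE_SCORE = 70.0  # every variant's true score (invented value)
NOISE = 0.5        # std-dev of evaluation noise, in score points

def best_of(n_variants):
    """Best observed score among n_variants equally-good variants."""
    return max(random.gauss(TRUE_SCORE, NOISE) for _ in range(n_variants))

print(best_of(5))    # a few tries: usually close to the true score
print(best_of(176))  # many tries: almost surely above the true score
```

With 176 draws, the best observed score sits well above the true score in expectation, even though no variant is actually better than any other.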
I learned first-hand what others have said many times: "AI competitions don't produce useful models" (Lauren Oakden-Rayner) and "Is Your Model the Best One or the Luckiest One?" (Samuele Mazzanti).