Whether it is a massive online hackathon or an internal sprint to build a machine learning model, the prime motive is the same: build the model that predicts best. This drives the need for cross-validation, followed by rigorous hyperparameter tuning, ensemble methods, boosting, and more.
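The cross-validation step above can be sketched as follows. This is a minimal example assuming scikit-learn and a synthetic dataset standing in for real competition data; the article itself names no specific tools.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic data as a stand-in for a competition dataset.
X, y = make_classification(n_samples=500, n_features=20, random_state=42)

model = RandomForestClassifier(n_estimators=100, random_state=42)

# 5-fold cross-validation gives a more honest estimate of
# generalization than a single train/test split.
scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
print("fold accuracies:", scores.round(3))
print(f"mean accuracy: {scores.mean():.3f}")
```

Scoring on several held-out folds, rather than one split, is what reveals whether a model is genuinely predictive or merely fitted to one lucky partition.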
The major catch that contestants fail to understand is that the model should generalize as much as possible. Blindly searching and tuning hyperparameters is either computationally too expensive or a recipe for an overfitted model. Make sure to understand what the basic hyperparameters mean, and narrow the search using that understanding plus what you know about the data. Also make sure to handle class imbalance.
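The two points above, a deliberately small search space and imbalance handling, can be combined in one sketch. This assumes scikit-learn; the specific model, grid values, and `scoring` choice are illustrative assumptions, not a prescription from the article.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, StratifiedKFold

# Imbalanced synthetic data: roughly 90% negative, 10% positive.
X, y = make_classification(
    n_samples=600, n_features=10, weights=[0.9, 0.1], random_state=0
)

# A small grid chosen by reasoning about the model rather than brute
# force: only the regularization strength C varies.
param_grid = {"C": [0.01, 0.1, 1.0, 10.0]}

search = GridSearchCV(
    # class_weight="balanced" re-weights the minority class instead
    # of leaving the imbalance unaddressed.
    LogisticRegression(class_weight="balanced", max_iter=1000),
    param_grid,
    # Stratified folds preserve the class ratio in every split.
    cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=0),
    scoring="f1",  # plain accuracy is misleading under imbalance
)
search.fit(X, y)
print("best C:", search.best_params_["C"])
print(f"best CV F1: {search.best_score_:.3f}")
```

Four candidate values over five folds is twenty fits, cheap enough to run repeatedly, yet the stratified CV score still guards against overfitting to one split.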
Last but not least, remember to remove the garbage during data cleaning; it is often the most rewarding part.
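What "removing the garbage" typically means in practice can be sketched with pandas (an assumed library choice; the column names and toy values below are invented for illustration):

```python
import numpy as np
import pandas as pd

# Toy frame with the usual garbage: a duplicate row, missing values,
# and an implausible outlier (age 400).
df = pd.DataFrame({
    "age": [25, 25, np.nan, 40, 400],
    "income": [50_000, 50_000, 60_000, None, 80_000],
})

df = df.drop_duplicates()                         # exact duplicate rows
df["age"] = df["age"].fillna(df["age"].median())  # impute missing values
df["income"] = df["income"].fillna(df["income"].median())
df["age"] = df["age"].clip(upper=120)             # cap impossible ages

print(df)
```

Deduplication, imputation, and outlier capping are only a starting point; the right fixes depend on the dataset, which is exactly why this step rewards careful inspection.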