What if hyperparameter tuning reduces accuracy?

Hyperparameter tuning is now especially easy with sklearn and other open-source packages. But what if the accuracy you get with the hyperparameter-tuned (HPT) values is actually lower than with the values you picked earlier by hand?
When can it happen?
Case 1:
Suppose the original model, with hyperparameters chosen by intuition, was evaluated on a single train/test split, while the tuned values were obtained with cv = 5. You are then comparing metrics computed on two different test sets! It makes much more sense to evaluate both models on the same held-out test set, as in the sketch below.
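Here is a minimal sketch of that fair comparison (the dataset, model, and parameter grid are placeholders I've assumed, not part of the original question): tune with cv = 5 on the training set only, then score both the hand-picked and the tuned model on the same held-out test set.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# Assumed example dataset; swap in your own data.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Baseline: hyperparameters picked by intuition, scored on the held-out test set.
baseline = RandomForestClassifier(n_estimators=50, max_depth=5, random_state=42)
baseline.fit(X_train, y_train)
baseline_acc = accuracy_score(y_test, baseline.predict(X_test))

# Tuned: GridSearchCV with cv=5 runs only on the training portion...
param_grid = {"n_estimators": [50, 100, 200], "max_depth": [3, 5, 10]}
search = GridSearchCV(RandomForestClassifier(random_state=42), param_grid, cv=5)
search.fit(X_train, y_train)

# ...and the best estimator is then scored on the SAME held-out test set,
# so the two accuracies are directly comparable.
tuned_acc = accuracy_score(y_test, search.best_estimator_.predict(X_test))

print(f"baseline test accuracy: {baseline_acc:.3f}")
print(f"tuned    test accuracy: {tuned_acc:.3f}")
print(f"tuned CV score (not comparable to a single-split number): {search.best_score_:.3f}")
```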
Case 2:
Grid search is very limited: it only tries the exact values listed in the grid. If the grid is coarse, the truly optimal setting can lie between or outside the candidate values, so the "best" grid point may still lose to a value you picked by hand. Searching over wider, continuous ranges (see the sketch below) is one way around this.
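A minimal sketch of widening the search, assuming a logistic-regression pipeline and ranges I've chosen purely for illustration: RandomizedSearchCV samples C log-uniformly over several orders of magnitude instead of a handful of fixed grid points.

```python
from scipy.stats import loguniform
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RandomizedSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Assumed example dataset; swap in your own data.
X, y = load_breast_cancer(return_X_y=True)

pipe = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

# A coarse grid like {"C": [0.1, 1, 10]} can miss the best value entirely;
# here C is sampled log-uniformly between 1e-4 and 1e2 instead.
param_distributions = {"logisticregression__C": loguniform(1e-4, 1e2)}

search = RandomizedSearchCV(pipe, param_distributions, n_iter=50, cv=5, random_state=42)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```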
Case 3:
The tuning procedure itself overfits the training data: the hyperparameters that score best in cross-validation on the training set do not necessarily generalize best, so a seemingly non-optimal configuration can end up performing better on the test set. Nested cross-validation (sketched below) is one way to detect this.
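A minimal sketch of nested cross-validation, with an assumed SVC model and grid chosen only for illustration: the inner loop tunes the hyperparameters, and the outer loop scores the whole tuning procedure on data it never saw. A large gap between the inner best CV score and the outer scores suggests the tuning is overfitting.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.svm import SVC

# Assumed example dataset; swap in your own data.
X, y = load_breast_cancer(return_X_y=True)

# Inner loop: ordinary grid search with cv=5 picks the "best" hyperparameters.
inner_search = GridSearchCV(
    SVC(), {"C": [0.1, 1, 10], "gamma": ["scale", 0.01, 0.001]}, cv=5
)

# Outer loop: each fold refits the entire tuning procedure and evaluates it on
# held-out data, giving a less biased estimate of how the tuned model generalizes.
outer_scores = cross_val_score(inner_search, X, y, cv=5)
print("nested CV accuracy: %.3f +/- %.3f" % (outer_scores.mean(), outer_scores.std()))
```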