Selecting and finalizing a deep learning neural network model for a predictive modeling project is just the beginning.
You can then start using the model to make predictions on new data.
One problem you may encounter is that the nature of the prediction problem may change over time.
You may notice this as the effectiveness of predictions begins to decline. This may be because the assumptions made and captured in the model are changing or no longer hold.
Generally, this is referred to as the problem of “concept drift,” where the underlying probability distributions of variables and the relationships between variables change over time, which can negatively impact the model built from the data.
Concept drift may affect your model at different times; when and how it does depends on the specific prediction problem you are solving and the model chosen to address it.
It can be helpful to monitor the performance of a model over time and use a clear drop in model performance as a trigger to make a change to your model, such as re-training it on new data.
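As a concrete illustration, a minimal sketch of such a trigger might look as follows, assuming a Keras-style model with a sigmoid output; the `evaluate_batch()` helper, baseline score, and tolerance are illustrative assumptions, not part of any specific library.

```python
# Sketch: use a clear drop in rolling performance as a retraining trigger.
# The evaluate_batch() helper, baseline score, and tolerance are illustrative
# assumptions, not part of any specific library.
from collections import deque

def evaluate_batch(model, X, y):
    # Score one batch of new data with known outcomes (here: accuracy for a
    # binary classifier with a sigmoid output).
    preds = (model.predict(X, verbose=0) > 0.5).astype(int).flatten()
    return (preds == y).mean()

baseline_score = 0.95             # performance measured at deployment time
recent_scores = deque(maxlen=10)  # rolling window of recent batch scores

def should_update(model, X_new, y_new, tolerance=0.10):
    recent_scores.append(evaluate_batch(model, X_new, y_new))
    rolling = sum(recent_scores) / len(recent_scores)
    # A sustained drop well below the baseline suggests concept drift.
    return rolling < baseline_score - tolerance
```

In practice, a trigger like this would feed into whichever update strategy you choose, such as those described below.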
Alternatively, you may know that data in your domain changes frequently enough that a change to the model is required periodically, such as weekly, monthly, or annually.
Finally, you may operate your model for a while and accumulate additional data with known outcomes that you wish to use to update your model, in the hope of improving predictive performance.
Importantly, you have a lot of flexibility when it comes to responding to a change to the problem or the availability of new data.
For example, you can take the trained neural network model and update the model weights using the new data. Alternatively, you might leave the existing model untouched and combine its predictions with those of a new model fit on the newly available data.
These approaches represent two general themes in updating neural network models in response to new data (a minimal sketch of each follows the list); they are:
- Retrain Update Strategies.
- Ensemble Update Strategies.
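To make the two themes concrete, a minimal sketch using the Keras API might look as follows; the synthetic data, the small network, and the training settings are illustrative assumptions, not recommendations.

```python
# Sketch: the two update themes on newly available data, using the Keras API.
# The synthetic data, architecture, and training settings are illustrative
# assumptions.
from sklearn.datasets import make_classification
from tensorflow.keras.models import Sequential, clone_model
from tensorflow.keras.layers import Input, Dense

# Old data (used to fit the original model) and new data (arriving later).
X_old, y_old = make_classification(n_samples=1000, random_state=1)
X_new, y_new = make_classification(n_samples=200, random_state=2)

# Original model, already trained on the old data.
model = Sequential([Input(shape=(20,)),
                    Dense(20, activation='relu'),
                    Dense(1, activation='sigmoid')])
model.compile(optimizer='adam', loss='binary_crossentropy')
model.fit(X_old, y_old, epochs=10, verbose=0)

# Retrain update strategy: continue training from the existing weights on the
# new data, typically for only a few epochs.
retrained = clone_model(model)
retrained.set_weights(model.get_weights())  # start from the old weights
retrained.compile(optimizer='adam', loss='binary_crossentropy')
retrained.fit(X_new, y_new, epochs=5, verbose=0)

# Ensemble update strategy: leave the old model untouched, fit a fresh model
# on the new data only, and average the predictions of the two.
new_model = clone_model(model)  # same architecture, freshly initialized weights
new_model.compile(optimizer='adam', loss='binary_crossentropy')
new_model.fit(X_new, y_new, epochs=10, verbose=0)

yhat = (model.predict(X_new) + new_model.predict(X_new)) / 2.0
```

Note the design choice in each case: the retrain update modifies the weights of the existing model, whereas the ensemble update preserves the existing model and only changes how predictions are combined.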