# Mean Absolute Error

Mean Absolute Error, or MAE, is a popular metric because, like RMSE, the units of the error score match the units of the target value that is being predicted.

Unlike the RMSE, the changes in MAE are linear and therefore intuitive.

That is, MSE and RMSE punish larger errors more than smaller errors, inflating or magnifying the mean error score. This is due to the squaring of each error value. The MAE does not give more or less weight to errors of different magnitudes; instead, the score increases linearly with increases in error.
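To make the contrast concrete, the sketch below (plain Python, no libraries assumed) compares MAE and RMSE on two contrived sets of predictions with the same total absolute error: five small errors versus one large error. The MAE is identical for both, while the RMSE is inflated by the single large error.

```python
# compare how MAE and RMSE respond to one large error vs. many small errors
from math import sqrt

def mae(expected, predicted):
	# average of the absolute errors
	return sum(abs(e - p) for e, p in zip(expected, predicted)) / len(expected)

def rmse(expected, predicted):
	# square root of the average squared error
	return sqrt(sum((e - p) ** 2 for e, p in zip(expected, predicted)) / len(expected))

expected = [1.0, 1.0, 1.0, 1.0, 1.0]
small = [0.9, 0.9, 0.9, 0.9, 0.9]    # five small errors of 0.1
outlier = [1.0, 1.0, 1.0, 1.0, 0.5]  # one large error of 0.5
# total absolute error is 0.5 in both cases, so the MAE is the same
print('MAE : small=%.3f outlier=%.3f' % (mae(expected, small), mae(expected, outlier)))
# the RMSE is larger for the outlier case because the 0.5 error is squared
print('RMSE: small=%.3f outlier=%.3f' % (rmse(expected, small), rmse(expected, outlier)))
```

Both cases give an MAE of 0.1, but the RMSE jumps from 0.1 to about 0.224 for the outlier case.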

As its name suggests, the MAE score is calculated as the average of the absolute error values. Absolute or abs() is a mathematical function that simply makes a number positive. Therefore, the difference between an expected and predicted value may be positive or negative and is forced to be positive when calculating the MAE.

The MAE can be calculated as follows:

MAE = 1/N * sum from i=1 to N of abs(y_i - yhat_i)

Where y_i is the i'th expected value in the dataset, yhat_i is the i'th predicted value, and abs() is the absolute value function.
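The formula translates directly into a few lines of Python; a minimal sketch, using the `y` and `yhat` names from the equation:

```python
# manual MAE, following the formula directly
def mae(y, yhat):
	n = len(y)
	# average of the absolute differences between expected and predicted values
	return sum(abs(y[i] - yhat[i]) for i in range(n)) / n

# errors are 0.0, 0.1, and 0.3, so the MAE is 0.4 / 3, approximately 0.133
print(mae([1.0, 1.0, 1.0], [1.0, 0.9, 0.7]))
```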

We can create a plot to get a feeling for how the change in prediction error impacts the MAE.

The example below gives a small contrived dataset of all 1.0 values and predictions that range from perfect (1.0) to wrong (0.0) by 0.1 increments. The absolute error between each prediction and expected value is calculated and plotted to show the linear increase in error.

```python
...
# calculate error
err = abs(expected[i] - predicted[i])
```

The complete example is listed below.

```python
# plot of the increase of mean absolute error with prediction error
from matplotlib import pyplot
# real values
expected = [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]
# predicted values
predicted = [1.0, 0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1, 0.0]
# calculate errors
errors = list()
for i in range(len(expected)):
	# calculate error
	err = abs(expected[i] - predicted[i])
	# store error
	errors.append(err)
	# report error
	print('>%.1f, %.1f = %.3f' % (expected[i], predicted[i], err))
# plot errors
pyplot.plot(errors)
pyplot.xticks(ticks=[i for i in range(len(errors))], labels=predicted)
pyplot.xlabel('Predicted Value')
pyplot.ylabel('Mean Absolute Error')
pyplot.show()
```

Running the example first reports the expected value, predicted value, and absolute error for each case.

We can see that the error rises linearly, which is intuitive and easy to understand.

```
>1.0, 1.0 = 0.000
>1.0, 0.9 = 0.100
>1.0, 0.8 = 0.200
>1.0, 0.7 = 0.300
>1.0, 0.6 = 0.400
>1.0, 0.5 = 0.500
>1.0, 0.4 = 0.600
>1.0, 0.3 = 0.700
>1.0, 0.2 = 0.800
>1.0, 0.1 = 0.900
>1.0, 0.0 = 1.000
```

A line plot is created showing the linear increase in the absolute error value as the difference between the expected and predicted values increases.

The mean absolute error between your expected and predicted values can be calculated using the mean_absolute_error() function from the scikit-learn library.

The function takes a one-dimensional array or list of expected values and predicted values and returns the mean absolute error value.

```python
...
# calculate errors
errors = mean_absolute_error(expected, predicted)
```

The example below demonstrates calculating the mean absolute error between a list of contrived expected and predicted values.

```python
# example of calculating the mean absolute error
from sklearn.metrics import mean_absolute_error
# real values
expected = [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]
# predicted values
predicted = [1.0, 0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1, 0.0]
# calculate errors
errors = mean_absolute_error(expected, predicted)
# report error
print(errors)
```

Running the example calculates and prints the mean absolute error.

```
0.5
```

A perfect mean absolute error value is 0.0, which means that all predictions matched the expected values exactly.

This is almost never the case, and if it happens, it suggests your predictive modeling problem is trivial.

A good MAE is relative to your specific dataset.

It is a good idea to first establish a baseline MAE for your dataset using a naive predictive model, such as predicting the mean target value from the training dataset. A model that achieves an MAE better than the MAE of the naive model has skill.
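The baseline comparison can be sketched as follows. The training targets, test targets, and model predictions here are contrived values for illustration only; the naive model simply predicts the mean of the training targets for every test case.

```python
# compare a model's MAE to a naive baseline that predicts the training mean
from sklearn.metrics import mean_absolute_error

# contrived training and test targets (illustrative values, not real data)
y_train = [2.0, 3.0, 5.0, 4.0, 6.0]
y_test = [3.0, 5.0, 4.0]
# hypothetical predictions from some trained model
model_preds = [3.5, 4.5, 4.0]

# naive model: always predict the mean of the training targets
naive_value = sum(y_train) / len(y_train)
naive_preds = [naive_value] * len(y_test)

baseline_mae = mean_absolute_error(y_test, naive_preds)
model_mae = mean_absolute_error(y_test, model_preds)
print('baseline MAE=%.3f, model MAE=%.3f' % (baseline_mae, model_mae))
# the model has skill only if its MAE is lower than the baseline MAE
```

In this contrived case the model's MAE (about 0.333) beats the baseline (about 0.667), so the model has skill relative to the naive mean predictor.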