Evaluate the Model

Although the score method of a regressor returns the R-squared metric, the oraclesai.metrics module provides other metrics that are helpful for evaluating a model. For instance, the Akaike Information Criterion (AIC) measures the amount of information lost by the model; a lower AIC indicates a better fit.

from oraclesai.metrics import aic

# AIC of the fitted spatial error model from the previous step
print(f"AIC: {aic(error_model_fit)}")

# R-squared on the training set
score_train = spatial_error_pipeline.score(X_train, y="MEDIAN_INCOME")
print(f"r2_score (X_train): {score_train}")

# R-squared on the validation set
score_valid = spatial_error_pipeline.score(X_valid, y="MEDIAN_INCOME")
print(f"r2_score (X_valid): {score_valid}")
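To make the AIC number easier to interpret, the following sketch illustrates its standard definition, AIC = 2k - 2 ln(L), where k is the number of estimated parameters and ln(L) is the maximized log-likelihood. The function and the sample values here are illustrative only and are not part of the oraclesai API.

```python
def aic_value(n_params, log_likelihood):
    """Akaike Information Criterion: AIC = 2k - 2*ln(L).

    n_params: number of estimated model parameters (k).
    log_likelihood: maximized log-likelihood of the model, ln(L).
    """
    return 2 * n_params - 2 * log_likelihood

# Illustrative values: 5 parameters, log-likelihood of -28246.7.
# When comparing candidate models on the same data, prefer the
# one with the lower AIC.
print(aic_value(5, -28246.7))
```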

A validation set helps evaluate the model before making predictions on the test set. It can also be used to determine whether the model is overfitting or underfitting. The preceding code prints the following metrics.

AIC: 56503.460498197666
r2_score (X_train): 0.6212791699175433
r2_score (X_valid): 0.6417600931041549
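One simple way to use these two scores is to compare them: a training R-squared that is much higher than the validation R-squared suggests overfitting, while two similarly low scores suggest underfitting. The helper and the tolerance threshold below are illustrative assumptions, not part of oraclesai.

```python
def shows_overfitting(train_score, valid_score, tolerance=0.05):
    """Flag potential overfitting when the training R-squared exceeds
    the validation R-squared by more than the given tolerance."""
    return (train_score - valid_score) > tolerance

# Scores from the run above: validation is slightly higher than
# training, so there is no sign of overfitting here.
print(shows_overfitting(0.6212791699175433, 0.6417600931041549))  # False
```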