Machine-learning (ML) models have become a common tool across a multitude of industries to help people make decisions. As these models have increased in predictive power, many have also grown in complexity. The pursuit of more accurate predictions has diminished the interpretability of many models, leaving users with little understanding of a model's behavior and little trust in its predictions. The field of eXplainable artificial intelligence (XAI) seeks to encourage the development of interpretable models. Google Cloud Platform offers two Explainable AI functions in BigQuery ML that allow users to examine the attribution of model features, which aids in verifying model behavior and recognizing bias. One function provides a global perspective on the features used to train the model; the other examines in more detail the local feature attribution associated with individual predictions.

This note offers an overview of Explainable AI in BigQuery ML, using as an example a (fictional) realtor's linear regression model that predicts a home's latest sale price from predictor variables such as the total tax assessment in the year of the last sale, the house's square footage, the number of bedrooms, the number of bathrooms, and whether the home's condition is below average. After the linear model is trained, its feature attributions can be studied from both a global and a local perspective in BigQuery.
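For readers who want a concrete picture of that workflow, the sketch below follows the note's example under assumed names: the project, dataset, table, and column identifiers (`home_sales`, `sale_price`, and so on) are placeholders, not names taken from the note. Only the BigQuery ML statements themselves (`CREATE MODEL`, `ML.GLOBAL_EXPLAIN`, `ML.EXPLAIN_PREDICT`) are the real API.

```sql
-- Train a linear regression model on past home sales. Note that
-- enable_global_explain must be set at training time; ML.GLOBAL_EXPLAIN
-- will not work on the model without it.
CREATE OR REPLACE MODEL `my_project.realty.home_price_model`
OPTIONS (
  model_type = 'linear_reg',
  input_label_cols = ['sale_price'],
  enable_global_explain = TRUE
) AS
SELECT
  total_assessment,          -- total tax assessment in the year of the last sale
  square_footage,
  bedrooms,
  bathrooms,
  below_average_condition,   -- whether the home's condition is below average
  sale_price                 -- label: the home's latest sale price
FROM `my_project.realty.home_sales`;

-- Global perspective: one aggregate attribution score per feature,
-- summarizing the feature's importance across the training data.
SELECT *
FROM ML.GLOBAL_EXPLAIN(MODEL `my_project.realty.home_price_model`);

-- Local perspective: per-row attributions showing the top features
-- behind each individual predicted sale price.
SELECT *
FROM ML.EXPLAIN_PREDICT(
  MODEL `my_project.realty.home_price_model`,
  (SELECT * FROM `my_project.realty.new_listings`),
  STRUCT(5 AS top_k_features)
);
```

Unlike `ML.GLOBAL_EXPLAIN`, `ML.EXPLAIN_PREDICT` does not require the `enable_global_explain` option, since it computes attributions per prediction rather than from training-time aggregates.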