Welcome to trelawney’s documentation!

trelawney


Trelawney is a general interpretability package that aims to provide a common API for most modern interpretability methods, to shed light on sklearn-compatible models (support for Keras and XGBoost is tested).

Trelawney will try to provide you with two kinds of explanations when possible:

  • a global explanation of the model that highlights the most important features the model uses to make its predictions globally
  • a local explanation of the model that tries to shed light on why a specific model made a specific prediction

The Trelawney package is built around:

  • some model-specific explainers that use the inner workings of certain types of models to explain them:
    • LogRegExplainer, which uses the weights of your logistic regression to produce global and local explanations of your model
    • TreeExplainer, which uses the decision path of your tree (single-tree models only) to produce explanations of the model
  • some model-agnostic explainers that should work with all models:
    • LimeExplainer, which uses the Lime package to create local explanations only (the local nature of Lime prohibits it from generating global explanations of a model)
    • ShapExplainer, which uses the SHAP package to create local and global explanations of your model
    • SurrogateExplainer, which creates a general surrogate of your model (fitted on the output of your model) using an explainable model (DecisionTreeClassifier and LogisticRegression for now). The explainer then uses the internals of the surrogate model to explain your black-box model, and also tells you how well the surrogate approximates the black-box one (see the sketch below).
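
For intuition, here is a minimal, self-contained sketch of the surrogate idea itself, written against plain scikit-learn rather than trelawney's API (the black-box model and data are made up for illustration):

>>> from sklearn.datasets import make_classification
>>> from sklearn.ensemble import RandomForestClassifier
>>> from sklearn.tree import DecisionTreeClassifier
>>> X, y = make_classification(random_state=0)
>>> black_box = RandomForestClassifier(random_state=0).fit(X, y)
>>> # fit an explainable surrogate on the *predictions* of the black box
>>> surrogate = DecisionTreeClassifier(max_depth=3).fit(X, black_box.predict(X))
>>> # fidelity: the share of black-box predictions the surrogate reproduces
>>> fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()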

Quick Tutorial (30s to Trelawney)

Here is an example of how to use a Trelawney explainer:

>>> from sklearn.linear_model import LogisticRegression
>>> from trelawney.shap_explainer import ShapExplainer
>>>
>>> model = LogisticRegression().fit(X, y)
>>> # creating and fitting the explainer
>>> explainer = ShapExplainer()
>>> explainer.fit(model, X, y)
>>> # explaining observations locally
>>> explainer.explain_local(X_explain)
[
    {'var_1': 0.1, 'var_2': -0.07, ...},
    ...
    {'var_1': 0.23, 'var_2': -0.15, ...},
]
>>> explainer.graph_local_explanation(X_explain.iloc[:1, :])
[Local Explanation Graph]
>>> # global feature importance
>>> explainer.feature_importance(X_explain)
{'var_1': 0.5, 'var_2': 0.2, ...}
>>> explainer.graph_feature_importance(X_explain)

[Image: Local Explanation Graph (http://drive.google.com/uc?export=view&id=1R2NFEU0bcZYpeiFsLZDKYfPkjHz-cHJ_)]

FAQ

Why should I use Trelawney rather than Lime and SHAP?

While you can definitely use the Lime and SHAP packages directly (they will give you more control over how each is used), they are very specialized packages with different APIs, graphs and vocabularies. Trelawney offers you a unified API, representation and vocabulary for all state-of-the-art explanation methods, so that you don't lose time adapting to each new method: just change a class and Trelawney will adapt to you.
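
For instance, switching from SHAP to Lime only means changing the class you instantiate: every explainer implements the same BaseExplainer interface, so the surrounding code stays identical. Reusing model, X, y and X_explain from the tutorial above (and assuming the lime_explainer module path mirrors the shap_explainer one):

>>> from trelawney.lime_explainer import LimeExplainer
>>> explainer = LimeExplainer()          # was: ShapExplainer()
>>> explainer.fit(model, X, y)           # same fit signature
>>> explainer.explain_local(X_explain)   # same output format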

How do I implement my own interpretation method in the Trelawney framework?

To implement your own explainer, you will need to inherit from the BaseExplainer class and override its three abstract methods, as shown here:

>>> from typing import Dict, List, Optional, Union
>>> import numpy as np
>>> import pandas as pd
>>> import sklearn.base
>>> from trelawney.base_explainer import BaseExplainer
>>>
>>> class MyOwnInterpreter(BaseExplainer):
...     def fit(self, model: sklearn.base.BaseEstimator, x_train: Union[pd.Series, pd.DataFrame, np.ndarray],
...             y_train: pd.DataFrame):
...         # fit your interpreter on some training data if needed
...         pass
...
...     def explain_local(self, x_explain: Union[pd.Series, pd.DataFrame, np.ndarray],
...                       n_cols: Optional[int] = None) -> List[Dict[str, float]]:
...         # explain each individual prediction the model makes on x_explain
...         pass
...
...     def feature_importance(self, x_explain: Union[pd.Series, pd.DataFrame, np.ndarray],
...                            n_cols: Optional[int] = None) -> Dict[str, float]:
...         # global importance of the (at most n_cols) most important features
...         # for the predictions on x_explain
...         pass
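
Once implemented, your explainer plugs into the same workflow as the built-in ones; and since only the three methods above are abstract, the graphing helpers shown in the tutorial should come for free from the base class:

>>> explainer = MyOwnInterpreter()
>>> explainer.fit(model, X, y)
>>> explainer.explain_local(X_explain)
>>> explainer.graph_local_explanation(X_explain.iloc[:1, :])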

You can find more information by reading the documentation of the BaseExplainer class. Don't hesitate to contribute to trelawney and open a PR.

Coming Soon

  • Regressor support (PR welcome)
  • Image and text support (PR welcome)

Credits

This package was created with Cookiecutter and the audreyr/cookiecutter-pypackage project template.
