Imputation
1. Autoimpute
Autoimpute is designed to be user friendly and flexible.
Usage
- Installation
pip install autoimpute
- Three imputers
from autoimpute.imputations import SingleImputer, MultipleImputer, MiceImputer
si = SingleImputer() # pass through data once
mi = MultipleImputer() # pass through data multiple times
# mice
mice = MiceImputer() # pass through data multiple times and iteratively optimize imputations in each column
- Impute the easy way
# mice
mice = MiceImputer()
mice.fit_transform(data)
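For instance, a minimal end-to-end run (the toy dataframe here is illustrative, not from the package docs):
import numpy as np
import pandas as pd
from autoimpute.imputations import SingleImputer
# toy dataframe with missing values
data = pd.DataFrame({
    "age": [25, np.nan, 47, 31, np.nan],
    "salary": [50000, 62000, np.nan, 58000, 71000],
})
si = SingleImputer()  # a default strategy is chosen per column
complete = si.fit_transform(data)  # returns one imputed dataframe
print(complete)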
- Impute the complex way
# create a complex instance of the MiceImputer
# Here, we specify strategies by column and predictors for each column
# We also specify what additional arguments any `pmm` strategies should take
imp = MiceImputer(
n=10,
strategy={"salary": "pmm", "gender": "bayesian binary logistic", "age": "norm"},
predictors={"salary": "all", "gender": ["salary", "education", "weight"]},
imp_kwgs={"pmm": {"fill_value": "random"}},
visit="left-to-right",
return_list=True
)
# Because we set return_list=True, all imputations are performed up front rather than lazily.
# fit_transform returns a list of M imputed datasets (here M = n = 10), each the same size N as the original dataframe.
imp.fit_transform(data)
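Each element of the returned list should be an (imputation number, imputed dataframe) pair; a minimal sketch of consuming it (assuming `data` has the columns named above):
imputations = imp.fit_transform(data)
for i, imputed_df in imputations:
    # each imputed dataset should have no missing values left
    print(i, imputed_df.isnull().sum().sum())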
- Impute using supervised machine learning methods
- Apply scikit-learn and statsmodels models to multiply imputed datasets (using the MiceImputer under the hood).
- Currently supports linear regression and binary logistic regression.
from autoimpute.analysis import MiLinearRegression
# By default, use statsmodels OLS and MiceImputer()
simple_lm = MiLinearRegression()
# fit the model on each multiply imputed dataset and pool parameters
simple_lm.fit(X_train, y_train)
# get summary of fit, which includes pooled parameters under Rubin's rules
# also provides diagnostics related to analysis after multiple imputation
simple_lm.summary()
# make predictions on a new dataset using pooled parameters
predictions = simple_lm.predict(X_test)
# Control both the regression used and the MiceImputer itself
mice_imputer_arguments = dict(
n=3,
strategy={"salary": "pmm", "gender": "bayesian binary logistic", "age": "norm"},
predictors={"salary": "all", "gender": ["salary", "education", "weight"]},
imp_kwgs={"pmm": {"fill_value": "random"}},
visit="left-to-right"
)
complex_lm = MiLinearRegression(
model_lib="sklearn", # use sklearn linear regression
mi_kwgs=mice_imputer_arguments # control the multiple imputer
)
# fit the model on each multiply imputed dataset
complex_lm.fit(X_train, y_train)
# get summary of fit, which includes pooled parameters under Rubin's rules
# also provides diagnostics related to analysis after multiple imputation
complex_lm.summary()
# make predictions on new dataset using pooled parameters
predictions = complex_lm.predict(X_test)
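The binary logistic analogue follows the same fit / summary / predict pattern; a minimal sketch using MiLogisticRegression from the same analysis module (the binary target `y_binary_train` is illustrative):
from autoimpute.analysis import MiLogisticRegression
simple_logit = MiLogisticRegression()
simple_logit.fit(X_train, y_binary_train)  # y must be binary
simple_logit.summary()  # pooled parameters under Rubin's rules
predictions = simple_logit.predict(X_test)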
Imputation Methods
| Univariate | Multivariate | Time Series / Interpolation |
|---|---|---|
| Mean | Linear Regression | Linear |
| Median | Binomial Logistic Regression | Quadratic |
| Mode | Multinomial Logistic Regression | Cubic |
| Random | Stochastic Regression | Polynomial |
| Norm | Bayesian Linear Regression | Spline |
| Categorical | Bayesian Binary Logistic Regression | Time-weighted |
| | Predictive Mean Matching | Next Obs Carried Backward |
| | Local Residual Draws | Last Obs Carried Forward |
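The names in this table map onto the `strategy` argument; a minimal sketch mixing families (column names are illustrative, and the strategy strings are limited to ones shown earlier plus "mean"):
from autoimpute.imputations import SingleImputer
si = SingleImputer(strategy={
    "age": "mean",     # univariate
    "weight": "norm",  # univariate, draws from a fitted normal
    "salary": "pmm",   # multivariate, predictive mean matching
})
si.fit_transform(data)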
Main features
- Utility functions to examine patterns in missing data
- Missingness classifier and automatic missing data test set generator
- Numerous imputation methods for continuous, categorical, and time-series data
- Single and multiple imputation frameworks to apply imputation methods
- Custom visualization support for utility functions and imputation methods
- Analysis methods and pooled parameter inference using multiply imputed datasets
- Adherence to scikit-learn API design for imputation and analysis classes
- Integration with pandas, scikit-learn, statsmodels, pymc3, and more
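As a hedged sketch of the first bullet, `autoimpute.utils` exposes missing-data diagnostics; the function names below (`md_pattern`, `proportions`) are assumptions about the current utils API:
from autoimpute.utils import md_pattern, proportions
md_pattern(data)    # matrix of observed/missing patterns, as in R's mice
proportions(data)   # proportion of observed vs. missing values per column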
2. fancyimpute
- github
- Installation via `pip`; `conda` is not supported.
pip install fancyimpute
Usage
import numpy as np
from fancyimpute import KNN, NuclearNormMinimization, SoftImpute, BiScaler
# X is the complete data matrix
# X_incomplete has the same values as X except a subset have been replaced with NaN
missing_mask = np.isnan(X_incomplete)  # entries that need imputation
# Use 3 nearest rows which have a feature to fill in each row's missing features
X_filled_knn = KNN(k=3).fit_transform(X_incomplete)
# matrix completion using convex optimization to find low-rank solution
# that still matches observed values. Slow!
X_filled_nnm = NuclearNormMinimization().fit_transform(X_incomplete)
# Instead of solving the nuclear norm objective directly, SoftImpute
# induces sparsity using singular value thresholding
X_incomplete_normalized = BiScaler().fit_transform(X_incomplete)
X_filled_softimpute = SoftImpute().fit_transform(X_incomplete_normalized)
# print mean squared error for the imputation methods above
nnm_mse = ((X_filled_nnm[missing_mask] - X[missing_mask]) ** 2).mean()
print("Nuclear norm minimization MSE: %f" % nnm_mse)
softImpute_mse = ((X_filled_softimpute[missing_mask] - X[missing_mask]) ** 2).mean()
print("SoftImpute MSE: %f" % softImpute_mse)
knn_mse = ((X_filled_knn[missing_mask] - X[missing_mask]) ** 2).mean()
print("knnImpute MSE: %f" % knn_mse)
Algorithms
- `SimpleFill`: Replaces missing entries with the mean or median of each column.
- `KNN`: Nearest neighbor imputation that weights samples using the mean squared difference on features for which two rows both have observed data.
- `SoftImpute`: Matrix completion by iterative soft thresholding of SVD decompositions. Inspired by the softImpute package for R, which is based on Spectral Regularization Algorithms for Learning Large Incomplete Matrices by Mazumder et al.
- `IterativeImputer`: A strategy for imputing missing values by modeling each feature with missing values as a function of other features in a round-robin fashion. A stub that links to scikit-learn's IterativeImputer.
- `IterativeSVD`: Matrix completion by iterative low-rank SVD decomposition. Should be similar to SVDimpute from Missing value estimation methods for DNA microarrays by Troyanskaya et al.
- `MatrixFactorization`: Direct factorization of the incomplete matrix into low-rank `U` and `V`, with an L1 sparsity penalty on the elements of `U` and an L2 penalty on the elements of `V`. Solved by gradient descent.
- `NuclearNormMinimization`: Simple implementation of Exact Matrix Completion via Convex Optimization by Emmanuel Candès and Benjamin Recht using cvxpy. Too slow for large matrices.
- `BiScaler`: Iterative estimation of row/column means and standard deviations to get a doubly normalized matrix. Not guaranteed to converge but works well in practice. Taken from Matrix Completion and Low-Rank SVD via Fast Alternating Least Squares.
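All of these classes expose the same scikit-learn style `fit_transform` interface; a minimal sketch comparing two of them on the earlier `X_incomplete` (parameter values are illustrative):
from fancyimpute import SimpleFill, IterativeSVD
# column-mean fill as a cheap baseline
X_filled_mean = SimpleFill(fill_method="mean").fit_transform(X_incomplete)
# iterative low-rank SVD completion
X_filled_svd = IterativeSVD(rank=5).fit_transform(X_incomplete)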