Quick Reference: Scikit-learn for Python

Scikit-learn is an open-source Python library that implements a range of machine learning, preprocessing, cross-validation, and visualization algorithms through a unified interface.

A Basic Example

from sklearn import neighbors, datasets, preprocessing
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
# Load the iris dataset, keeping only the first two features
iris = datasets.load_iris()
X, y = iris.data[:, :2], iris.target
# Split the data into training and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=33)
# Standardize the features using statistics computed on the training set
scaler = preprocessing.StandardScaler().fit(X_train)
X_train = scaler.transform(X_train)
X_test = scaler.transform(X_test)
# Fit a k-nearest neighbors classifier and evaluate its accuracy
knn = neighbors.KNeighborsClassifier(n_neighbors=5)
knn.fit(X_train, y_train)
y_pred = knn.predict(X_test)
accuracy_score(y_test, y_pred)

Loading the Data

Your data needs to be numeric and stored as NumPy arrays or SciPy sparse matrices. Other types that can be converted to numeric arrays, such as pandas DataFrames, are also accepted (a minimal sketch follows the NumPy example below).

import numpy as np
X = np.random.random((10,5))
y = np.array(['M','M','F','F','M','F','M','M','F','F'])
X[X < 0.7] = 0
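
For instance, a pandas DataFrame with numeric columns can be passed straight to a transformer or estimator. A minimal sketch, assuming a made-up DataFrame for illustration:

import pandas as pd
from sklearn.preprocessing import StandardScaler
# Hypothetical numeric DataFrame; scikit-learn converts it to a NumPy array internally
df = pd.DataFrame({'height': [1.65, 1.80, 1.72], 'weight': [68.0, 82.5, 75.3]})
scaled = StandardScaler().fit_transform(df)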

Preprocessing the Data

Standardization
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler().fit(X_train)
standardized_X = scaler.transform(X_train)
standardized_X_test = scaler.transform(X_test)
Normalization
from sklearn.preprocessing import Normalizer
scaler = Normalizer().fit(X_train)
normalized_X = scaler.transform(X_train)
normalized_X_test = scaler.transform(X_test)
Binarization
from sklearn.preprocessing import Binarizer
binarizer = Binarizer(threshold=0.0).fit(X)
binary_X = binarizer.transform(X)
Encoding Categorical Features
from sklearn.preprocessing import LabelEncoder
enc = LabelEncoder()
y = enc.fit_transform(y)
Imputing Missing Values
from sklearn.impute import SimpleImputer
imp = SimpleImputer(missing_values=0, strategy='mean')
imp.fit_transform(X_train)
Generating Polynomial Features
from sklearn.preprocessing import PolynomialFeatures
poly = PolynomialFeatures(5)
poly.fit_transform(X)

Training and Test Data
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X,y,random_state=0)

Create Your Model

Supervised Learning Estimators

Linear Regression
from sklearn.linear_model import LinearRegression
lr = LinearRegression()
Support Vector Machines (SVM)
from sklearn.svm import SVC
svc = SVC(kernel='linear')
Naive Bayes
from sklearn.naive_bayes import GaussianNB
gnb = GaussianNB()
KNN
from sklearn import neighbors
knn = neighbors.KNeighborsClassifier(n_neighbors=5)

Unsupervised Learning Estimators

Principal Component Analysis (PCA)
from sklearn.decomposition import PCA
pca = PCA(n_components=0.95)
K Means
from sklearn.cluster import KMeans
k_means = KMeans(n_clusters=3, random_state=0)

Model Fitting

Supervised Learning
lr.fit(X, y)
knn.fit(X_train, y_train)
svc.fit(X_train, y_train)
Unsupervised Learning
k_means.fit(X_train)
pca_model = pca.fit_transform(X_train)

Prediction

Supervised Estimators
y_pred = svc.predict(np.random.random((2,5)))
y_pred = lr.predict(X_test)
y_pred = knn.predict_proba(X_test)
Unsupervised Estimators
y_pred = k_means.predict(X_test)

Evaluate Your Model's Performance

Classification Metrics

Accuracy Score
knn.score(X_test, y_test)
from sklearn.metrics import accuracy_score
accuracy_score(y_test, y_pred)
Classification Report
from sklearn.metrics import classification_report
print(classification_report(y_test, y_pred))
Confusion Matrix
from sklearn.metrics import confusion_matrix
print(confusion_matrix(y_test, y_pred))

Regression Metrics

Mean Absolute Error
from sklearn.metrics import mean_absolute_error
y_true = [3, -0.5, 2]
mean_absolute_error(y_true, y_pred)
Mean Squared Error
from sklearn.metrics import mean_squared_error
mean_squared_error(y_test, y_pred)
R² Score
from sklearn.metrics import r2_score
r2_score(y_true, y_pred)

Clustering Metrics

Adjusted Rand Index
from sklearn.metrics import adjusted_rand_score
adjusted_rand_score(y_true, y_pred)
Homogeneity
from sklearn.metrics import homogeneity_score
homogeneity_score(y_true, y_pred)
V-measure
from sklearn.metrics import v_measure_score
v_measure_score(y_true, y_pred)

Cross-Validation
from sklearn.model_selection import cross_val_score
print(cross_val_score(knn, X_train, y_train, cv=4))
print(cross_val_score(lr, X, y, cv=2))

Tune Your Model

Grid Search
from sklearn.model_selection import GridSearchCV
params = {"n_neighbors": np.arange(1,3), "metric": ["euclidean", "cityblock"]}
grid = GridSearchCV(estimator=knn,param_grid=params)
grid.fit(X_train, y_train)
print(grid.best_score_)
print(grid.best_estimator_.n_neighbors)
Randomized Parameter Optimization
from sklearn.model_selection import RandomizedSearchCV
params = {"n_neighbors": range(1,5), "weights": ["uniform", "distance"]}
rsearch = RandomizedSearchCV(
    estimator=knn,
    param_distributions=params,
    cv=4,
    n_iter=8,
    random_state=5)
rsearch.fit(X_train, y_train)
print(rsearch.best_score_)

Taken from DataCamp, where there is a very handy downloadable version you can print and keep at hand!



