import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as seabornInstance
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
#from sklearn.feature_selection import SelectFromModel
from sklearn import metrics
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
import statsmodels.api as sm

ServiceLevel = pd.read_csv("Service Level.csv")
print(ServiceLevel.head())

y = ServiceLevel['SL'].values  # Create the array for the response variable
X = ServiceLevel.drop(['SL', 'Date'], axis=1).values  # Create the array for the feature variables

# We do the regression (no intercept term, so the summary reports uncentered R-squared)
reg = sm.OLS(y, X).fit()

# And here we can see the results in a very nice looking table
print('SUMMARY -------------------------------------------')
print(reg.summary())

# We can also take a look at just the parameter values
print('PARAMETERS ----------------------------------------')
print(reg.params)

# We can also extract the residuals
# print('RESIDUALS -----------------------------------------')
# print(reg.resid)

# This line is just to prevent the output from vanishing when you
# run the program by double-clicking
# input('Done - Hit any key to finish.')

SUMMARY -------------------------------------------
                                 OLS Regression Results
=======================================================================================
Dep. Variable:                      y   R-squared (uncentered):                   0.979
Model:                            OLS   Adj. R-squared (uncentered):              0.978
Method:                 Least Squares   F-statistic:                              6357.
Date:                Sat, 16 May 2020   Prob (F-statistic):                        0.00
Time:                        19:09:41   Log-Likelihood:                          554.37
No. Observations:                 699   AIC:                                     -1099.
Df Residuals:                     694   BIC:                                     -1076.
Df Model:                           5
Covariance Type:            nonrobust
==============================================================================
                 coef    std err          t      P>|t|      [0.025      0.975]
------------------------------------------------------------------------------
x1            -0.0017   6.67e-05    -24.976      0.000      -0.002      -0.002
x2             0.0016   7.11e-05     23.039      0.000       0.001       0.002
x3             0.0030      0.000     13.994      0.000       0.003       0.003
x4          -6.06e-05   9.08e-05     -0.667      0.505      -0.000       0.000
x5             0.0006   8.65e-05      6.670      0.000       0.000       0.001
==============================================================================
Omnibus:                      102.645   Durbin-Watson:                   0.825
Prob(Omnibus):                  0.000   Jarque-Bera (JB):              190.404
Skew:                          -0.879   Prob(JB):                     4.51e-42
Kurtosis:                       4.857   Cond. No.                         98.6
==============================================================================
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
PARAMETERS ----------------------------------------
[-1.66514205e-03 1.63764578e-03 2.95806398e-03 -6.05955892e-05
5.76872815e-04]