mlmodels : Machine Learning and Deep Learning Model ZOO for Pytorch, Tensorflow, Keras, Gluon models...

mlmodels : Model ZOO

This repository is a Model ZOO for Pytorch, Tensorflow, Keras, Gluon, LightGBM, and Sklearn models, with a lightweight functional interface that wraps access to recent and state-of-the-art deep learning and machine learning models as well as hyper-parameter search. It works across platforms and follows the logic of sklearn: fit, predict, transform, metrics, save, load, etc. More than 60 recent models (published after 2018) are available in these domains:

  • Time Series,
  • Text classification,
  • Vision,
  • Image generation, Text generation,
  • Gradient Boosting, Automatic Machine Learning tuning,
  • Hyper-parameter search.

Main characteristics :

  • Functional interface : reduces boilerplate code, well suited to scientific computing.
  • JSON-based input : reduces boilerplate code, easy experiment management.
  • Focus on moving research/script code to production/batch.
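The sklearn-like calling convention can be sketched with a stand-in module. Everything below (`DemoModel`, the trailing-mean logic) is illustrative only, not part of mlmodels; real usage goes through `mlmodels.models.module_load` as shown in the examples further down:

```python
# Minimal sketch of the uniform functional interface: every wrapped framework
# exposes the same Model / fit / predict entry points taking *_pars dicts.
# DemoModel and its trailing-mean forecast are illustrative stand-ins only.

class DemoModel:
    def __init__(self, model_pars, data_pars=None, compute_pars=None):
        self.window = model_pars.get("window", 2)
        self.state = None

def fit(model, data_pars=None, compute_pars=None, out_pars=None):
    series = data_pars["series"]
    model.state = sum(series[-model.window:]) / model.window  # trailing mean
    return model

def predict(model, data_pars=None, compute_pars=None, out_pars=None):
    return model.state  # one-step-ahead forecast = trailing mean

model = DemoModel(model_pars={"window": 2})
model = fit(model, data_pars={"series": [1.0, 2.0, 3.0, 5.0]})
ypred = predict(model)
```

Because every backend follows this shape, swapping models means changing `model_pars`, not the surrounding training loop.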

We are looking for contributors (!!)


Benefits :

A single, simple framework for both machine learning and deep learning models, without boilerplate code. A collection of models (a model zoo) in Pytorch, Tensorflow, and Keras allows richer possibilities for model re-use, model batching, and benchmarking. A unique and simple interface, zero boilerplate code, and recent state-of-the-art models/frameworks are the main strengths of MLMODELS. Several domains are covered: computer vision, NLP, time series prediction, and tabular data classification.

The usage guide is available here.

If you want to contribute, see the contribution guide.

Model List :

Time Series:

  1. Montreal AI, Nbeats: 2019, Advanced interpretable Time Series Neural Network, [Link]

  2. Amazon Deep AR: 2019, Multi-variate Time Series Neural Network, [Link]

  3. Facebook Prophet: 2017, Time Series prediction, [Link]

  4. ARMDN: 2019, Advanced Multi-variate Time Series prediction; Associative and Recurrent Mixture Density Networks for time series. [Link]

  5. LSTM Neural Network prediction : Stacked Bidirectional and Unidirectional LSTM Recurrent Neural Network for Network-wide Traffic Speed Prediction [Link]

NLP:

  1. Sentence Transformers : 2019, Embedding of full sentences using BERT, [Link]

  2. Transformers Classifier : Using Transformer for Text Classification, [Link]

  3. TextCNN Pytorch : 2016, Text CNN Classifier, [Link]

  4. TextCNN Keras : 2016, Text CNN Classifier, [Link]

  5. Bi-directional LSTM with Conditional Random Field for Named Entity Recognition, [Link]

  6. DRMM: Deep Relevance Matching Model for Ad-hoc Retrieval. [Link]

  7. DRMMTKS: Deep Top-K Relevance Matching Model for Ad-hoc Retrieval. [Link]

  8. ARC-I: Convolutional Neural Network Architectures for Matching Natural Language Sentences [Link]

  9. ARC-II: Convolutional Neural Network Architectures for Matching Natural Language Sentences [Link]

  10. DSSM: Learning Deep Structured Semantic Models for Web Search using Clickthrough Data [Link]

  11. CDSSM: Learning Semantic Representations Using Convolutional Neural Networks for Web Search [Link]

  12. MatchLSTM: Machine Comprehension Using Match-LSTM and Answer Pointer [Link]

  13. DUET: Learning to Match Using Local and Distributed Representations of Text for Web Search [Link]

  14. KNRM: End-to-End Neural Ad-hoc Ranking with Kernel Pooling [Link]

  15. ConvKNRM: Convolutional neural networks for soft-matching n-grams in ad-hoc search [Link]

  16. ESIM: Enhanced LSTM for Natural Language Inference [Link]

  17. BiMPM: Bilateral Multi-Perspective Matching for Natural Language Sentences [Link]

  18. MatchPyramid: Text Matching as Image Recognition [Link]

  19. Match-SRNN: Match-SRNN: Modeling the Recursive Matching Structure with Spatial RNN [Link]

  20. aNMM: aNMM: Ranking Short Answer Texts with Attention-Based Neural Matching Model [Link]

  21. MV-LSTM: [Link]

  22. DIIN: Natural Language Inference Over Interaction Space [Link]

  23. HBMP: Sentence Embeddings in NLI with Iterative Refinement Encoders [Link]

TABULAR:

LightGBM : Light Gradient Boosting

AutoML Gluon : 2020, AutoML in Gluon, MxNet using LightGBM, CatBoost

Auto-Keras : 2020, Automatic Keras model selection

All sklearn models :

linear_model.ElasticNet
linear_model.ElasticNetCV
linear_model.Lars
linear_model.LarsCV
linear_model.Lasso
linear_model.LassoCV
linear_model.LassoLars
linear_model.LassoLarsCV
linear_model.LassoLarsIC
linear_model.OrthogonalMatchingPursuit
linear_model.OrthogonalMatchingPursuitCV

svm.LinearSVC
svm.LinearSVR
svm.NuSVC
svm.NuSVR
svm.OneClassSVM
svm.SVC
svm.SVR
svm.l1_min_c

neighbors.KNeighborsClassifier
neighbors.KNeighborsRegressor
neighbors.KNeighborsTransformer
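These sklearn estimators are selected by name through `model_pars["model_name"]` (see the RandomForest example below). The dispatch pattern can be sketched without mlmodels itself; the registry and the toy `MajorityClassifier` here are illustrative assumptions, not mlmodels internals:

```python
# Sketch of name-based estimator dispatch, as done via model_pars["model_name"].
# MajorityClassifier is a toy stand-in; the real wrapper resolves names against
# sklearn's own estimator classes.
from collections import Counter

class MajorityClassifier:
    """Toy stand-in: always predicts the most frequent training label."""
    def __init__(self, **params):
        self.params = params
        self.label = None
    def fit(self, X, y):
        self.label = Counter(y).most_common(1)[0][0]
        return self
    def predict(self, X):
        return [self.label] * len(X)

REGISTRY = {"MajorityClassifier": MajorityClassifier}

def build_model(model_pars):
    pars = dict(model_pars)
    cls = REGISTRY[pars.pop("model_name")]   # resolve class by name
    return cls(**pars)                       # remaining keys become estimator params

model = build_model({"model_name": "MajorityClassifier", "random_state": 0})
model.fit(X=[[0], [1], [2]], y=["a", "a", "b"])
ypred = model.predict([[3], [4]])
```

This is why the JSON configs only need a string and a flat dict of parameters to pick any of the estimators listed above.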

Binary Neural Prediction from tabular data:

  1. A Convolutional Click Prediction Model [Link]

  2. Deep Learning over Multi-field Categorical Data: A Case Study on User Response Prediction [Link]

  3. Product-based neural networks for user response prediction [Link]

  4. Wide & Deep Learning for Recommender Systems [Link]

  5. DeepFM: A Factorization-Machine based Neural Network for CTR Prediction [Link]

  6. Learning Piece-wise Linear Models from Large Scale Data for Ad Click Prediction [Link]

  7. Deep & Cross Network for Ad Click Predictions [Link]

  8. Attentional Factorization Machines: Learning the Weight of Feature Interactions via Attention Networks [Link]

  9. Neural Factorization Machines for Sparse Predictive Analytics [Link]

  10. xDeepFM: Combining Explicit and Implicit Feature Interactions for Recommender Systems [Link]

  11. AutoInt: Automatic Feature Interaction Learning via Self-Attentive Neural Networks [Link]

  12. Deep Interest Network for Click-Through Rate Prediction [Link]

  13. Deep Interest Evolution Network for Click-Through Rate Prediction [Link]

  14. Operation-aware Neural Networks for User Response Prediction [Link]

  15. Feature Generation by Convolutional Neural Network for Click-Through Rate Prediction [Link]

  16. Deep Session Interest Network for Click-Through Rate Prediction [Link]

  17. FiBiNET: Combining Feature Importance and Bilinear feature Interaction for Click-Through Rate Prediction [Link]

VISION:

Vision Models (pre-trained):

  1. alexnet: SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size [Link]

  2. densenet121: Adversarial Perturbations Prevail in the Y-Channel of the YCbCr Color Space [Link]

  3. densenet169: Classification of TrashNet Dataset Based on Deep Learning Models [Link]

  4. densenet201: Utilization of DenseNet201 for diagnosis of breast abnormality [Link]

  5. densenet161: Automated classification of histopathology images using transfer learning [Link]

  6. inception_v3: Menfish Classification Based on Inception_V3 Convolutional Neural Network [Link]

  7. resnet18: Leveraging the VTA-TVM Hardware-Software Stack for FPGA Acceleration of 8-bit ResNet-18 Inference [Link]

  8. resnet34: Automated Pavement Crack Segmentation Using Fully Convolutional U-Net with a Pretrained ResNet-34 Encoder [Link]

  9. resnet50: Extremely Large Minibatch SGD: Training ResNet-50 on ImageNet in 15 Minutes [Link]

  10. resnet101: Classification of Cervical MR Images using ResNet101 [Link]

  11. resnet152: Deep neural networks show an equivalent and often superior performance to dermatologists in onychomycosis diagnosis: Automatic construction of onychomycosis datasets by region-based convolutional deep neural network [Link]

  12. resnext50_32x4d: Automatic Grading of Individual Knee Osteoarthritis Features in Plain Radiographs using Deep Convolutional Neural Networks [Link]

  13. resnext101_32x8d: DEEP LEARNING BASED PLANT PART DETECTION IN GREENHOUSE SETTINGS [Link]

  14. wide_resnet50_2: Identificação de Espécies de Árvores por Imagens de Tronco Utilizando Aprendizado de Máquina Profundo (in Portuguese: Identification of Tree Species from Trunk Images Using Deep Machine Learning) [Link]

  15. wide_resnet101_2: Identification of Tree Species by Trunk Images Using Deep Machine Learning [Link]

  16. squeezenet1_0: Classification of Ice Crystal Habits Observed From Airborne Cloud Particle Imager by Deep Transfer Learning [Link]

  17. squeezenet1_1: Benchmarking parts based face processing in-the-wild for gender recognition and head pose estimation [Link]

  18. vgg11: TernausNet: U-Net with VGG11 Encoder Pre-Trained on ImageNet for Image Segmentation [Link]

  19. vgg13: Convolutional Neural Network for Raindrop Detection [Link]

  20. vgg16: Automatic detection of lumen and media in the IVUS images using U-Net with VGG16 Encoder [Link]

  21. vgg19: A New Transfer Learning Based on VGG-19 Network for Fault Diagnosis [Link]

  22. vgg11_bn: Shifted Spatial-Spectral Convolution for Deep Neural Networks [Link]

  23. vgg13_bn: DETOX: A Redundancy-based Framework for Faster and More Robust Gradient Aggregation [Link]

  24. vgg16_bn: Partial Convolution based Padding [Link]

  25. vgg19_bn: NeurIPS 2019 Disentanglement Challenge: Improved Disentanglement through Learned Aggregation of Convolutional Feature Maps [Link]

  26. googlenet: On the Performance of GoogLeNet and AlexNet Applied to Sketches [Link]

  27. shufflenet_v2_x0_5: Exemplar Normalization for Learning Deep Representation [Link]

  28. shufflenet_v2_x1_0: Tree Species Identification by Trunk Images Using Deep Machine Learning [Link]

  29. mobilenet_v2: MobileNetV2: Inverted Residuals and Linear Bottlenecks [Link]

More resources are available here

######################################################################################

① Installation Guide:

(A) Using pre-installed Setup (one click run) :

Read-more

(B) Using Colab :

Read-more

Initialize template and Tests

This will copy templates, datasets, and examples to your folder:

ml_models --init  /yourworkingFolder/

To test hyper-parameter search:

ml_optim

To test model fitting:

ml_models

Actual test runs

Read-more

test_fast_linux


Usage in Jupyter/Colab

Read-more


Command Line tools:

Read-more


Model List

Read-more


How to add a new model

Read-more


Index of functions/methods

Read-more


LSTM example in TensorFlow (Example notebook)

Define model and data definitions

# import library
import mlmodels


model_uri    = "model_tf.1_lstm.py"
model_pars   =  {  "num_layers": 1,
                  "size": ncol_input, "size_layer": 128, "output_size": ncol_output, "timestep": 4,
                }   # ncol_input / ncol_output : number of input / output columns of your dataset
data_pars    =  {"data_path": "/folder/myfile.csv"  , "data_type": "pandas" }
compute_pars =  { "learning_rate": 0.001, }

out_pars     =  { "path": "ztest_1lstm/", "model_path" : "ztest_1lstm/model/"}
save_pars = { "path" : "ztest_1lstm/model/" }
load_pars = { "path" : "ztest_1lstm/model/" }


#### Load Parameters and Train
from mlmodels.models import module_load

module        =  module_load( model_uri= model_uri )                           # Load file definition
model         =  module.Model(model_pars=model_pars, data_pars=data_pars, compute_pars=compute_pars)             # Create Model instance
model, sess   =  module.fit(model, data_pars=data_pars, compute_pars=compute_pars, out_pars=out_pars)


#### Inference
metrics_val   =  module.fit_metrics( model, sess, data_pars, compute_pars, out_pars) # get stats
ypred         = module.predict(model, sess,  data_pars, compute_pars, out_pars)     # predict pipeline

AutoML example in Gluon (Example notebook)

# import library
import mlmodels
import autogluon as ag

#### Define model and data definitions
model_uri = "model_gluon.gluon_automl.py"
data_pars = {"train": True, "uri_type": "amazon_aws", "dt_name": "Inc"}

model_pars = {"model_type": "tabular",
              "learning_rate": ag.space.Real(1e-4, 1e-2, default=5e-4, log=True),
              "activation": ag.space.Categorical(*tuple(["relu", "softrelu", "tanh"])),
              "layers": ag.space.Categorical(
                          *tuple([[100], [1000], [200, 100], [300, 200, 100]])),
              'dropout_prob': ag.space.Real(0.0, 0.5, default=0.1),
              'num_boost_round': 10,
              'num_leaves': ag.space.Int(lower=26, upper=30, default=28)  # default must lie within [lower, upper]
             }

compute_pars = {
    "hp_tune": True,
    "num_epochs": 10,
    "time_limits": 120,
    "num_trials": 5,
    "search_strategy": "skopt"
}

out_pars = {
    "out_path": "dataset/"
}



#### Load Parameters and Train
from mlmodels.models import module_load

module        =  module_load( model_uri= model_uri )                           # Load file definition
model         =  module.Model(model_pars=model_pars, compute_pars=compute_pars)             # Create Model instance
model, sess   =  module.fit(model, data_pars=data_pars, model_pars=model_pars, compute_pars=compute_pars, out_pars=out_pars)      


#### Inference
ypred       = module.predict(model, data_pars, compute_pars, out_pars)     # predict pipeline

RandomForest example in Scikit-learn (Example notebook)

# import library
import mlmodels

#### Define model and data definitions
model_uri    = "model_sklearn.sklearn.py"

model_pars   = {"model_name":  "RandomForestClassifier", "max_depth" : 4 , "random_state":0}

data_pars    = {'mode': 'test', 'path': "../mlmodels/dataset", 'data_type' : 'pandas' }

compute_pars = {'return_pred_not': False}

out_pars    = {'path' : "../ztest"}


#### Load Parameters and Train
from mlmodels.models import module_load

module        =  module_load( model_uri= model_uri )                           # Load file definition
model         =  module.Model(model_pars=model_pars, data_pars=data_pars, compute_pars=compute_pars)             # Create Model instance
model, sess   =  module.fit(model, data_pars=data_pars, compute_pars=compute_pars, out_pars=out_pars)          # fit the model


#### Inference
ypred       = module.predict(model,  data_pars=data_pars, compute_pars=compute_pars, out_pars=out_pars)     # predict pipeline

TextCNN example in keras (Example notebook)

# import library
import mlmodels

#### Define model and data definitions
model_uri    = "model_keras.textcnn.py"

data_pars    = {"path" : "../mlmodels/dataset/text/imdb.csv", "train": 1, "maxlen":400, "max_features": 10}

model_pars   = {"maxlen":400, "max_features": 10, "embedding_dims":50}
                       
compute_pars = {"engine": "adam", "loss": "binary_crossentropy", "metrics": ["accuracy"] ,
                        "batch_size": 32, "epochs":1, 'return_pred_not':False}

out_pars     = {"path": "ztest/model_keras/textcnn/"}



#### Load Parameters and Train
from mlmodels.models import module_load

module        =  module_load( model_uri= model_uri )                           # Load file definition
model         =  module.Model(model_pars=model_pars, data_pars=data_pars, compute_pars=compute_pars)             # Create Model instance
module.fit(model, data_pars=data_pars, compute_pars=compute_pars, out_pars=out_pars)          # fit the model


#### Inference
data_pars['train'] = 0
ypred       = module.predict(model,  data_pars=data_pars, compute_pars=compute_pars, out_pars=out_pars)

Using json config file for input (Example notebook, JSON file)

Import library and functions

# import library
import mlmodels

#### Load model and data definitions from json
from mlmodels.models import module_load
from mlmodels.util import load_config

model_uri    = "model_tf.1_lstm.py"
module        =  module_load( model_uri= model_uri )                           # Load file definition

model_pars, data_pars, compute_pars, out_pars = module.get_params(param_pars={
    'choice':'json',
    'config_mode':'test',
    'data_path':'../mlmodels/example/1_lstm.json'
})

#### Load parameters and train
model         =  module.Model(model_pars=model_pars, data_pars=data_pars, compute_pars=compute_pars)             # Create Model instance
model, sess   =  module.fit(model, data_pars=data_pars, compute_pars=compute_pars, out_pars=out_pars)          # fit the model

#### Check inference
ypred       = module.predict(model, sess=sess,  data_pars=data_pars, compute_pars=compute_pars, out_pars=out_pars)     # predict pipeline
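The JSON file groups the four parameter dicts under a config mode (here `'test'`), which `get_params` selects via `config_mode`. A hypothetical sketch of that layout follows; the exact keys and values are assumptions for illustration, not the contents of the real `1_lstm.json`:

```python
import json

# Hypothetical sketch of the layout of example/1_lstm.json: one top-level
# block per config_mode ("test" here), each holding the four *_pars dicts.
config_text = """{
  "test": {
    "model_pars":   {"num_layers": 1, "size": 6, "size_layer": 128,
                     "output_size": 6, "timestep": 4},
    "data_pars":    {"data_path": "dataset/timeseries/", "data_type": "pandas"},
    "compute_pars": {"learning_rate": 0.001},
    "out_pars":     {"path": "ztest_1lstm/"}
  }
}"""

cfg = json.loads(config_text)["test"]          # select config_mode
model_pars, data_pars = cfg["model_pars"], cfg["data_pars"]
compute_pars, out_pars = cfg["compute_pars"], cfg["out_pars"]
```

Keeping all four parameter groups in one mode-keyed file is what lets the same script switch between test and production settings by changing only `config_mode`.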

Using Scikit-learn's SVM for Titanic Problem from json file (Example notebook, JSON file)

Import library and functions

# import library
import mlmodels

#### Load model and data definitions from json
from mlmodels.models import module_load
from mlmodels.util import load_config

model_uri    = "model_sklearn.sklearn.py"
module        =  module_load( model_uri= model_uri )                           # Load file definition

model_pars, data_pars, compute_pars, out_pars = module.get_params(param_pars={
    'choice':'json',
    'config_mode':'test',
    'data_path':'../mlmodels/example/sklearn_titanic_svm.json'
})

#### Load Parameters and Train

model         =  module.Model(model_pars=model_pars, data_pars=data_pars, compute_pars=compute_pars)             # Create Model instance
model, sess   =  module.fit(model, data_pars=data_pars, compute_pars=compute_pars, out_pars=out_pars)          # fit the model


#### Inference
ypred       = module.predict(model,  data_pars=data_pars, compute_pars=compute_pars, out_pars=out_pars)     # predict pipeline
ypred


#### Check metrics
import pandas as pd
from sklearn.metrics import roc_auc_score

y = pd.read_csv('../mlmodels/dataset/tabular/titanic_train_preprocessed.csv')['Survived'].values
roc_auc_score(y, ypred)

Using Scikit-learn's Random Forest for Titanic Problem from json file (Example notebook, JSON file)

Import library and functions

# import library
import mlmodels

#### Load model and data definitions from json
from mlmodels.models import module_load
from mlmodels.util import load_config

model_uri    = "model_sklearn.sklearn.py"
module        =  module_load( model_uri= model_uri )                           # Load file definition

model_pars, data_pars, compute_pars, out_pars = module.get_params(param_pars={
    'choice':'json',
    'config_mode':'test',
    'data_path':'../mlmodels/example/sklearn_titanic_randomForest.json'
})


#### Load Parameters and Train
model         =  module.Model(model_pars=model_pars, data_pars=data_pars, compute_pars=compute_pars)             # Create Model instance
model, sess   =  module.fit(model, data_pars=data_pars, compute_pars=compute_pars, out_pars=out_pars)          # fit the model


#### Inference

ypred       = module.predict(model,  data_pars=data_pars, compute_pars=compute_pars, out_pars=out_pars)     # predict pipeline
ypred

#### Check metrics
import pandas as pd
from sklearn.metrics import roc_auc_score

y = pd.read_csv('../mlmodels/dataset/tabular/titanic_train_preprocessed.csv')['Survived'].values
roc_auc_score(y, ypred)

Using Autogluon for Titanic Problem from json file (Example notebook, JSON file)

Import library and functions

# import library
import mlmodels

#### Load model and data definitions from json
from mlmodels.models import module_load
from mlmodels.util import load_config

model_uri    = "model_gluon.gluon_automl.py"
module        =  module_load( model_uri= model_uri )                           # Load file definition

model_pars, data_pars, compute_pars, out_pars = module.get_params(
    choice='json',
    config_mode= 'test',
    data_path= '../mlmodels/example/gluon_automl.json'
)


#### Load Parameters and Train
model         =  module.Model(model_pars=model_pars, compute_pars=compute_pars)             # Create Model instance
model   =  module.fit(model, model_pars=model_pars, data_pars=data_pars, compute_pars=compute_pars, out_pars=out_pars)          # fit the model
model.model.fit_summary()


#### Check inference
ypred       = module.predict(model,  data_pars=data_pars, compute_pars=compute_pars, out_pars=out_pars)     # predict pipeline

#### Check metrics
model.model.model_performance

import pandas as pd
from sklearn.metrics import roc_auc_score

y = pd.read_csv('../mlmodels/dataset/tabular/titanic_train_preprocessed.csv')['Survived'].values
roc_auc_score(y, ypred)


Using hyper-params (optuna) for Titanic Problem from json file (Example notebook, JSON file)

Import library and functions

# import library
from mlmodels.models import module_load
from mlmodels.optim import optim
from mlmodels.util import params_json_load, path_norm


#### Load model and data definitions from json

###  hypermodel_pars, model_pars, ....
model_uri   = "model_sklearn.sklearn.py"
config_path = path_norm( 'example/hyper_titanic_randomForest.json'  )
config_mode = "test"  ### test/prod



#### Model Parameters
hypermodel_pars, model_pars, data_pars, compute_pars, out_pars = params_json_load(config_path, config_mode= config_mode)
print( hypermodel_pars, model_pars, data_pars, compute_pars, out_pars)


module            =  module_load( model_uri= model_uri )                      
model_pars_update = optim(
    model_uri       = model_uri,
    hypermodel_pars = hypermodel_pars,
    model_pars      = model_pars,
    data_pars       = data_pars,
    compute_pars    = compute_pars,
    out_pars        = out_pars
)


#### Load Parameters and Train
model         =  module.Model(model_pars=model_pars_update, data_pars=data_pars, compute_pars=compute_pars)
model, sess   =  module.fit(model, data_pars=data_pars, compute_pars=compute_pars, out_pars=out_pars)

#### Check inference
ypred         = module.predict(model,  data_pars=data_pars, compute_pars=compute_pars, out_pars=out_pars)     # predict pipeline
ypred


#### Check metrics
import pandas as pd
from sklearn.metrics import roc_auc_score

y = pd.read_csv( path_norm('dataset/tabular/titanic_train_preprocessed.csv') )
y = y['Survived'].values
roc_auc_score(y, ypred)

Using LightGBM for Titanic Problem from json file (Example notebook, JSON file)

Import library and functions

# import library
import mlmodels
from mlmodels.models import module_load
from mlmodels.util import path_norm_dict, path_norm
import json

#### Load model and data definitions from json
# Model definition
model_uri    = "model_sklearn.model_lightgbm.py"
module        =  module_load( model_uri= model_uri)

# Path to JSON
data_path = '../dataset/json/lightgbm_titanic.json'  

# Model Parameters
pars = json.load(open( data_path , mode='r'))
for key, pdict in pars.items():
  globals()[key] = path_norm_dict(pdict)   # normalize paths; defines model_pars, data_pars, compute_pars, out_pars

#### Load Parameters and Train
model = module.Model(model_pars, data_pars, compute_pars) # create model instance
model, session = module.fit(model, data_pars, compute_pars, out_pars) # fit model


#### Check inference
ypred       = module.predict(model,  data_pars=data_pars, compute_pars=compute_pars, out_pars=out_pars)     # get predictions
ypred


#### Check metrics
metrics_val = module.fit_metrics(model, data_pars, compute_pars, out_pars)
metrics_val 
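The `globals()` assignment above is terse; an equivalent, more explicit unpacking looks like this. The in-memory `pars` dict is a hypothetical stand-in for the JSON file's contents, and path normalization is omitted for brevity:

```python
import json

# Hypothetical stand-in for json.load(open(data_path)): the real
# lightgbm_titanic.json holds one sub-dict per parameter group,
# matching the loop above.
pars = json.loads("""{
  "model_pars":   {"objective": "binary"},
  "data_pars":    {"path": "dataset/tabular/titanic_train_preprocessed.csv"},
  "compute_pars": {"num_boost_round": 10},
  "out_pars":     {"path": "ztest/lightgbm_titanic/"}
}""")

# Explicit unpacking instead of writing into globals():
model_pars   = pars["model_pars"]
data_pars    = pars["data_pars"]
compute_pars = pars["compute_pars"]
out_pars     = pars["out_pars"]
```

Explicit names make it obvious which variables the rest of the script depends on, at the cost of a few extra lines.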

Using Vision CNN RESNET18 for MNIST dataset (Example notebook, JSON file)

# import library
import mlmodels
from mlmodels.models import module_load
from mlmodels.util import path_norm_dict, path_norm, params_json_load
import json


#### Model URI and Config JSON
model_uri   = "model_tch.torchhub.py"
config_path = path_norm( 'model_tch/torchhub_cnn.json'  )
config_mode = "test"  ### test/prod


#### Model Parameters
hypermodel_pars, model_pars, data_pars, compute_pars, out_pars = params_json_load(config_path, config_mode= config_mode)
print( hypermodel_pars, model_pars, data_pars, compute_pars, out_pars)


#### Setup Model 
module         = module_load( model_uri)
model          = module.Model(model_pars, data_pars, compute_pars) 
#### Fit
model, session = module.fit(model, data_pars, compute_pars, out_pars)           #### fit model
metrics_val    = module.fit_metrics(model, data_pars, compute_pars, out_pars)   #### Check fit metrics
print(metrics_val)


#### Inference
ypred          = module.predict(model, session, data_pars, compute_pars, out_pars)   
print(ypred)



Using ARMDN Time Series (Example notebook, JSON file)

# import library
import mlmodels
from mlmodels.models import module_load
from mlmodels.util import path_norm_dict, path_norm, params_json_load
import json


#### Model URI and Config JSON
model_uri   = "model_keras.ardmn.py"
config_path = path_norm( 'model_keras/ardmn.json'  )
config_mode = "test"  ### test/prod




#### Model Parameters
hypermodel_pars, model_pars, data_pars, compute_pars, out_pars = params_json_load(config_path, config_mode= config_mode)
print( hypermodel_pars, model_pars, data_pars, compute_pars, out_pars)


#### Setup Model 
module         = module_load( model_uri)
model          = module.Model(model_pars, data_pars, compute_pars) 
#### Fit
model, session = module.fit(model, data_pars, compute_pars, out_pars)           #### fit model
metrics_val    = module.fit_metrics(model, data_pars, compute_pars, out_pars)   #### Check fit metrics
print(metrics_val)


#### Inference
ypred          = module.predict(model, session, data_pars, compute_pars, out_pars)   
print(ypred)



#### Save/Load
module.save(model, save_pars ={ 'path': out_pars['path'] +"/model/"})

model2 = module.load(load_pars ={ 'path': out_pars['path'] +"/model/"})
