kerasEvalPrediction (R Documentation)
Description:

Evaluates predictions from a keras model using several metrics based on training, validation, and test data.
Usage:

kerasEvalPrediction(pred, testScore = c(NA, NA), specList, metrics, kerasConf)
Arguments:

pred        predictions from keras predict()

testScore   additional score values, e.g., the test error returned by
            keras::evaluate(); defaults to c(NA, NA)

specList    spec with target, e.g., as returned by genericDataPrep()

metrics     keras metrics (history), e.g., history$metrics from kerasFit()

kerasConf   keras configuration, e.g., as returned by getKerasConf()
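Because testScore defaults to c(NA, NA), the function can be called without a separate keras test evaluation. A minimal call sketch (assuming pred, specList, history, and kerasConf have already been prepared as in the Examples below):

## Minimal call sketch; pred, specList, history, and kerasConf are assumed
## to exist, built as in the Examples section. testScore is left at its
## default c(NA, NA).
kerasEvalPrediction(pred = pred,
                    specList = specList,
                    metrics = history$metrics,
                    kerasConf = kerasConf)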
Examples:

### These examples require an activated Python environment as described in
### Bartz-Beielstein, T., Rehbach, F., Sen, A., and Zaefferer, M.:
### Surrogate Model Based Hyperparameter Tuning for Deep Learning with SPOT,
### June 2021. http://arxiv.org/abs/2105.14625.
PYTHON_RETICULATE <- FALSE
if(PYTHON_RETICULATE){
  library(tfdatasets)
  library(keras)

  target <- "age"
  batch_size <- 32
  prop <- 2/3
  dfCensus <- getDataCensus(nobs = 1000, target = target)
  data <- getGenericTrainValTestData(dfGeneric = dfCensus, prop = prop)
  specList <- genericDataPrep(data = data, batch_size = batch_size)
  ## spec test data has 334 elements:
  str(specList$testGeneric$target)
  ## simulate test predictions (overwritten by real predictions below):
  pred <- runif(length(specList$testGeneric$target))

  kerasConf <- getKerasConf()
  simpleModel <- getSimpleKerasModel(specList = specList, kerasConf = kerasConf)
  FLAGS <- list(epochs = 16)
  y <- kerasFit(model = simpleModel,
                specList = specList,
                FLAGS = FLAGS,
                kerasConf = kerasConf)
  simpleModel <- y$model
  history <- y$history

  ## evaluate on test data:
  pred <- predict(simpleModel, specList$testGeneric)
  ## use keras evaluation to obtain the test error:
  testScore <- keras::evaluate(simpleModel,
                               tfdatasets::dataset_use_spec(dataset = specList$test_ds_generic,
                                                            spec = specList$specGeneric_prep),
                               verbose = kerasConf$verbose)
  kerasEvalPrediction(pred = pred,
                      testScore = testScore,
                      specList = specList,
                      metrics = history$metrics,
                      kerasConf = kerasConf)
}
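The examples are guarded by if(PYTHON_RETICULATE) with PYTHON_RETICULATE <- FALSE, so they are skipped by default and do not fail on systems without an activated Python/TensorFlow environment; set the flag to TRUE to run them.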