Description

classify predicts the species in each image.
This function uses absolute paths. If you are unfamiliar with this process, you can put all of your images, the image label csv ("data_info"), and the trained_model folder that you downloaded following the directions at https://github.com/mikeyEcology/MLWIC2 into one directory on your computer. Then set your working directory to this location and the function will find the absolute paths for you. If you trained a model using train, this function can also be used to evaluate images with that model by specifying the log_dir of the trained model. If this is your first time using this function, see the additional documentation at https://github.com/mikeyEcology/MLWIC2 .
Usage

classify(
path_prefix = paste0(getwd(), "/images"),
data_info = paste0(getwd(), "/image_labels.csv"),
model_dir = paste0(getwd(), "/MLWIC2_helper_files/"),
log_dir = "species_model",
save_predictions = "model_predictions.txt",
python_loc = "/anaconda3/bin/",
os = "Mac",
num_classes = 1000,
num_cores = 1,
delimiter = ",",
architecture = "resnet",
depth = "18",
top_n = 5,
batch_size = 128,
num_gpus = 2,
make_output = TRUE,
output_location = NULL,
output_name = "MLWIC2_output.csv",
test_tensorflow = FALSE,
shiny = FALSE,
print_cmd = FALSE
)
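For example, a call on a Mac using the built-in species model might look like the following (all paths and folder names here are placeholders for this machine; adjust them to yours, and note that MLWIC2 and its helper files must already be installed):

```r
# Hypothetical call; assumes the images folder, image_labels.csv, and the
# MLWIC2_helper_files folder all sit in the current working directory.
classify(
  path_prefix = paste0(getwd(), "/images"),
  data_info = paste0(getwd(), "/image_labels.csv"),
  model_dir = paste0(getwd(), "/MLWIC2_helper_files/"),
  python_loc = "/anaconda3/bin/",  # location of python on this machine
  os = "Mac",
  num_classes = 1000,              # 1000 classes in the species_model
  make_output = TRUE,
  output_name = "MLWIC2_output.csv"
)
```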
Arguments

path_prefix: Absolute path to the location of the images on your computer (or computing cluster). All images must be stored in this directory, or a subdirectory of it.
data_info: Name of a csv containing the file names of each image (including the relative path from path_prefix) and the label for each image.
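A minimal sketch of building such a csv in base R (the headerless two-column layout — filename, then a numeric class label — is an assumption about the expected format, as are the file names used):

```r
# Build a small image-label table; file names are relative to path_prefix.
labels <- data.frame(
  filename = c("site1/img_001.jpg", "site1/img_002.jpg"),
  class_id = c(0, 0)  # numeric labels; a placeholder value if unknown
)
# Write without a header row, without row names, and without quoting.
write.table(labels, "image_labels.csv", sep = ",",
            row.names = FALSE, col.names = FALSE, quote = FALSE)
```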
model_dir: Absolute path to the location where you stored the MLWIC2_helper_files folder.
log_dir: If you are IDing species, this should be "species_model". If you are determining whether images contain animals or are empty, this should be "empty_animal". If you trained a model with train, specify the log_dir of that trained model.
save_predictions: File name where the raw model predictions will be stored. You should not need to change this parameter; with make_output = TRUE, the predictions are also written to a formatted csv.
python_loc: The location of python on your machine.
os: The operating system you are using. If you are using Windows, set this to "Windows"; otherwise leave as the default.
num_classes: The number of classes in your model. If you are using the 'species_model', this is 1000. If using 'empty_animal', num_classes = 2. If using 'CFTEP', num_classes = 10.
num_cores: The number of cores you want to use. You can find the number available on your computer using parallel::detectCores().
delimiter: The delimiter used in your data_info file; this will be ',' for a csv.
architecture: The architecture of the deep neural network (DNN). Options are c("alexnet", "densenet", "googlenet", "nin", "resnet", "vgg"); resnet is the default. If you are using the trained model that comes with MLWIC2, use resnet with depth 18 (the defaults). If you trained a model using a different architecture, you need to specify the same architecture and depth that you used for training.
depth: The depth of the neural network. If you are using the built-in model, do not adjust this parameter. If you are using a model that you trained, use the same architecture and depth as that model.
top_n: The number of guesses you want the model to make (how many species do you want to see confidence values for?). This number must be less than or equal to num_classes.
batch_size: The number of images for the model to evaluate in each batch. Larger numbers run faster but require more memory.
make_output: logical. Do you want the package to create a clean output file with column headers?
output_name: Desired name of the output file. It must end in '.csv'.
test_tensorflow: logical. Do you want to test your installation of tensorflow before running classify?
Details

If you specify make_output = TRUE, the function will generate a csv depicting your results: answer is the ground truth label that you supplied, guess1 is the model's top guess, and confidence1 is the model's confidence in that guess.
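As a quick sanity check, the output csv can be summarized in base R. The column names answer, guess1, and confidence1 come from the description above; the values below are a made-up stand-in for reading your actual output file:

```r
# Stand-in for out <- read.csv("MLWIC2_output.csv"); values are fabricated
# purely for illustration.
out <- data.frame(
  answer = c(3, 7),           # ground truth labels you supplied
  guess1 = c(3, 5),           # model's top guess
  confidence1 = c(0.98, 0.61) # confidence in that guess
)
# Top-1 accuracy: fraction of images where the top guess matches the answer.
top1_accuracy <- mean(out$guess1 == out$answer)
```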