Running an evaluation of a method on a set of datasets with a set of parameters

Usage

evaluate_ti_method(dataset, method, parameters, metrics,
  give_priors = NULL, output_model = TRUE,
  seed = function() random_seed(), map_fun = map, verbose = FALSE)

Arguments

dataset

The dataset(s) on which to evaluate the method, in most cases containing a gold standard trajectory against which the inferred model is compared.

method

One or more methods to evaluate. Must be one of the following (see the sketch after this list):

  • a ti_... object or a list of such objects (e.g. dynmethods::ti_comp1()),

  • a character vector containing the names of methods to execute (e.g. "scorpius"),

  • a character vector containing dockerhub repositories (e.g. "dynverse/paga"), or

  • a dynguidelines data frame.
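
For instance, a minimal sketch of these options (availability of the listed methods and containers is an assumption):

  # a single ti_... object, or several of them as a list
  method <- dynmethods::ti_comp1()
  method <- list(dynmethods::ti_comp1(), dynmethods::ti_scorpius())

  # by method name, or by dockerhub repository
  method <- "scorpius"
  method <- "dynverse/paga"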

parameters

A set of parameters to be used during trajectory inference, given as a named list. If multiple methods were provided via the method argument, parameters must instead be an unnamed list of parameter sets of the same length.
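
As an illustration (the parameter names below are hypothetical and depend on the chosen method):

  # a single method: one named list of parameters
  parameters <- list(ndim = 3, k = 5)

  # multiple methods: an unnamed list of parameter sets, one per method
  parameters <- list(
    list(ndim = 3),
    list(k = 5)
  )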

metrics

Which metrics to evaluate. Check dyneval::metrics for a list of possible metrics. A custom metric function of the form function(dataset, model) { 1 } is also supported; the name of this function within the list is used as the name of the metric.
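
For example, combining a built-in metric with a custom one (the metric name "correlation" and the ability to mix names and functions in one list are assumptions):

  metrics <- list(
    "correlation",
    # the list name of a custom metric becomes the metric name
    dummy_metric = function(dataset, model) { 1 }
  )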

give_priors

All the priors a method is allowed to receive. Must be a subset of all available priors (dynwrap::priors).
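
For example (prior identifiers are listed in dynwrap::priors; the ones shown here are assumed to be among them):

  give_priors <- c("start_id", "end_id")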

output_model

Whether or not to include the inferred model in the output.

seed

A seed to be passed to the TI method.

map_fun

A map function to use when inferring trajectories with multiple datasets or methods. This allows the execution to be parallelised in an arbitrary way (see the sketch below).
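
A minimal sketch of parallel execution via furrr, assuming the furrr and future packages are installed (the choice of plan is an assumption):

  library(furrr)
  future::plan(future::multisession)

  # pass a map-compatible function, for example:
  # evaluate_ti_method(..., map_fun = furrr::future_map)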

verbose

Whether or not to print informative output.
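
Examples

A minimal end-to-end sketch, assuming the dyntoy, dynmethods and dyneval packages are installed; the toy dataset, the chosen method and the metric names are illustrative assumptions, as is the structure of the return value.

  library(dyneval)

  # generate a toy dataset containing a gold standard trajectory
  dataset <- dyntoy::generate_dataset(model = "linear")

  # evaluate a single TI method; an empty parameter list is assumed to
  # fall back to the method's default parameter values
  results <- evaluate_ti_method(
    dataset = dataset,
    method = dynmethods::ti_comp1(),
    parameters = list(),
    metrics = c("correlation", "him"),
    verbose = TRUE
  )

  # the return value is assumed to include a summary of the metric scores
  results$summary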