Learn how to configure output generation for your model
Runtime setup

Openlayer reads the `runtime` field from your `openlayer.json` to set up the environment your model runs in. Then, it runs the `installCommand` from your `openlayer.json` to install your dependencies.
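For illustration, these two fields might look like the following in your `openlayer.json` (the values shown are placeholders, not required settings):

```json
{
  "runtime": "python",
  "installCommand": "pip install -r requirements.txt"
}
```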
Run the model

Next, Openlayer runs the `batchCommand` from your `openlayer.json`. The `batchCommand` iterates through your datasets, runs your model on each of them, and creates the directory specified in `outputDirectory`, which has the following structure:
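A sketch of that layout, assuming two datasets, looks like this:

```
outputDirectory/
├── {dataset[0].name}/
│   ├── dataset.json
│   └── config.json
└── {dataset[1].name}/
    ├── dataset.json
    └── config.json
```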
Here, `{dataset[i].name}` is the name of the i-th dataset specified in the `datasets` array of your `openlayer.json`, `dataset.json` is the corresponding dataset with an extra column containing the model outputs, and `config.json` is a config file for the dataset.
If you are leveraging one of Openlayer’s SDKs, you don’t need to worry about the output directory structure or the configs.
Here is what the `openlayer.json` and the run script look like when using Openlayer’s SDKs. The `batchCommand` should call a script you wrote and append `--dataset-path` and `--output-dir` to it, so the script knows which dataset to generate batch outputs for and where to write the generated outputs.
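As a sketch (the script name, output path, and dataset names below are illustrative, and your `openlayer.json` may require additional fields), the relevant parts might look like this:

```json
{
  "model": {
    "batchCommand": "python run_batch.py",
    "outputDirectory": "output"
  },
  "datasets": [
    { "name": "training" },
    { "name": "validation" }
  ]
}
```

With this configuration, Openlayer would invoke your script once per dataset, appending the two flags: roughly `python run_batch.py --dataset-path <path to dataset> --output-dir <where to write outputs>`.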
The run script loads the dataset at `--dataset-path` into memory and calls your code that generates outputs for a single row. It then writes a `dataset.json` (or CSV) file to a directory that adheres to the output directory structure presented above.
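A minimal sketch of such a run script, assuming the dataset is a JSON list of rows, `--output-dir` already points at the per-dataset directory, the output column is named `output`, and a hypothetical `generate_output` function stands in for your model logic:

```python
import argparse
import json
from pathlib import Path


def generate_output(row: dict) -> str:
    """Hypothetical per-row model call -- replace with your own logic."""
    return f"output for {row}"


def main() -> None:
    parser = argparse.ArgumentParser()
    parser.add_argument("--dataset-path", required=True)
    parser.add_argument("--output-dir", required=True)
    args = parser.parse_args()

    # Load the dataset Openlayer points us at into memory.
    with open(args.dataset_path) as f:
        rows = json.load(f)

    # Generate one output per row and store it in an extra column.
    for row in rows:
        row["output"] = generate_output(row)

    # Write the dataset (now with the output column) where Openlayer expects it.
    output_dir = Path(args.output_dir)
    output_dir.mkdir(parents=True, exist_ok=True)
    with open(output_dir / "dataset.json", "w") as f:
        json.dump(rows, f, indent=2)


if __name__ == "__main__":
    main()
```

Note that this sketch omits the `config.json` shown in the structure above; if you use Openlayer’s SDKs, that bookkeeping is handled for you.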
When you push, Openlayer checks whether the `outputDirectory` in the `model` section of your `openlayer.json` exists and whether it contains the output files Openlayer expects.
If both conditions are satisfied, Openlayer interprets this as signaling that you already
ran your model on your datasets before pushing. Therefore, Openlayer will not try to compute the model outputs again.
However, if one of the conditions above is not satisfied, Openlayer will try to compute your model
outputs for your datasets.