11.7 Load Custom Models from the Local Filesystem
Custom models that have been fine-tuned from pre-trained Hugging Face models and reside on the local filesystem can be converted to the ONNX format using the new local_model_path setting. This setting can also be useful in cases where networking is unavailable and a pre-trained model has previously been downloaded.

The model files must be placed under the following directory structure:

<basedir>/<model_file_subfolders>
- basedir: The base directory for the model subfolders, passed as the local_model_path settings parameter. This parameter is described in the Generate the Augmented ONNX File section. It can be an absolute or relative path; if relative, it is resolved relative to the directory where Python was started.
- model_file_subfolders: Nested folders created based on the model's name. For each '/' character in the model's name, create a new subdirectory.
Example 11-1 Local Directory for Custom Model Files
For example, if the model name is mymodel, only one directory named mymodel is expected. However, if the model name is user/mymodel, you should create a parent directory called user with a subdirectory named mymodel. All model files should be placed in the final subdirectory (mymodel in this case):

user
    mymodel
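The following Python sketch shows one way to create this layout and copy a locally fine-tuned model into it. It is illustrative only: the source directory, base directory, and model name are assumptions that you would replace with your own values.

import os
import shutil

basedir = os.path.join(os.environ['HOME'], 'sample-model')  # will be passed as local_model_path
model_name = 'user/mymodel'                                 # each '/' becomes a subdirectory
source_dir = '/path/to/fine-tuned-model'                    # illustrative location of the fine-tuned files

# Create <basedir>/<model_file_subfolders>, for example $HOME/sample-model/user/mymodel.
target_dir = os.path.join(basedir, *model_name.split('/'))
os.makedirs(target_dir, exist_ok=True)

# Copy the model files (config.json, tokenizer files, and the weights file such as
# model.safetensors or pytorch_model.bin) into the final subdirectory.
for fname in os.listdir(source_dir):
    shutil.copy(os.path.join(source_dir, fname), target_dir)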
Generate the Augmented ONNX File
When initializing the EmbeddingModel object, use the settings parameter to pass the new local_model_path property, which identifies the basedir for the model. For example, if the basedir for the model on the local filesystem is $HOME/sample-model, and there is a custom model named my-model, you can achieve this with the following Python code:
from oml.utils import EmbeddingModel
import os

# local_model_path identifies the basedir; the files for 'my-model' live in $HOME/sample-model/my-model.
em = EmbeddingModel('my-model', settings={'local_model_path': f'{os.environ["HOME"]}/sample-model'})
# Export the augmented ONNX file to the current directory.
em.export2file('my-onnx-model', '.')
This exports the augmented ONNX model to a file named my-onnx-model.onnx in the current directory.
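As an optional sanity check (not part of the OML4Py workflow above), the exported file can be inspected with the onnxruntime package, assuming it is installed:

import onnxruntime as ort

# Load the exported model and print its input and output names.
session = ort.InferenceSession('my-onnx-model.onnx')
print([i.name for i in session.get_inputs()])
print([o.name for o in session.get_outputs()])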
Note:
The above code is the same for exporting to the database, except that export2db is used instead of export2file. For more information, see ONNX Pipeline Models: Text Embedding. The code is compatible with all downloaded pretrained models, preconfigured models, and templates.
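For reference, a minimal sketch of the database export path follows. It assumes an existing OML4Py database connection, and the in-database model name MY_ONNX_MODEL is illustrative:

from oml.utils import EmbeddingModel
import os

# Assumes oml.connect(...) has already established a database connection.
em = EmbeddingModel('my-model', settings={'local_model_path': f'{os.environ["HOME"]}/sample-model'})
em.export2db('MY_ONNX_MODEL')  # illustrative in-database model name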
Error and Warning Conditions

Table 11-3 Error and Warning Conditions
Error (or Warning) Conditions | Error (or Warning) Message | Notes |
---|---|---|
force_download specified | force_download cannot be true when specifying 'local_model_path' | force_download is mutually exclusive with this feature, so it cannot be specified when local_model_path has a value. |
local_model_path is not a directory | '{local_path}' specified by 'local_model_path' must be an existing directory. | {local_path} is the value specified in local_model_path. This message is returned when the value of local_model_path is not an existing directory. |
Missing model config file | Expected local model folder {model_path} to contain the file config.json but it was not found. | {model_path} is the full path to the local model folder. This message is returned when the local model folder does not contain a file named config.json. |
Missing model files | Expected local model folder {model_path} to at least contain one of model.safetensors or pytorch_model.bin files. Neither were found. | {model_path} is the full path to the local model folder. This message is returned when the local model folder contains neither a pytorch_model.bin file nor at least one file ending in .safetensors. |
preconfigured model used (Warning) | Warning: checksum validation is disabled when loading preconfigured models from a local path. | This warning is shown when a preconfigured model is loaded from a local path. |
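To avoid the directory-related errors above, the layout can be validated before initializing the model. This helper is illustrative only and is not part of the OML4Py API:

import os

def check_local_model_dir(local_model_path, model_name):
    # Illustrative pre-check that mirrors the conditions in Table 11-3.
    model_dir = os.path.join(local_model_path, *model_name.split('/'))
    if not os.path.isdir(local_model_path):
        raise ValueError(f"'{local_model_path}' must be an existing directory.")
    if not os.path.isfile(os.path.join(model_dir, 'config.json')):
        raise ValueError(f"Expected {model_dir} to contain config.json.")
    has_weights = any(f == 'pytorch_model.bin' or f.endswith('.safetensors')
                      for f in os.listdir(model_dir))
    if not has_weights:
        raise ValueError(f"Expected {model_dir} to contain model.safetensors or pytorch_model.bin.")

check_local_model_dir(f"{os.environ['HOME']}/sample-model", 'my-model')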
Parent topic: Import Pretrained Models in ONNX Format