onnx2oracle
Reference R2

Troubleshooting

Five failures cover the vast majority of real-world trouble. The CLI emits anchors from this page in its error messages, so you can paste the link directly from a stack trace.

ORA-00942 — table or view does not exist

Usually raised when onnx2oracle verify queries user_mining_models.

What's happening

The connected user lacks SELECT ANY DICTIONARY, or you're connected to an Oracle release older than 23ai. DBMS_VECTOR and the mining-model dictionary views only exist on 23ai and 26ai.

Fix

-- as an admin user
GRANT SELECT ANY DICTIONARY TO app_user;

-- confirm the release
SELECT BANNER FROM v$version;

If the banner says 19c or 21c, in-database ONNX models aren't available — use the Docker compose file or an ADB instance.
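If you want to gate on this check in scripts, the banner string can be parsed directly. A minimal sketch in Python; the helper name and regex are illustrative, not part of onnx2oracle:

```python
import re

def supports_indb_onnx(banner: str) -> bool:
    """Return True if the banner reports Oracle 23ai or later.

    In-database ONNX models need 23ai+; 19c/21c lack DBMS_VECTOR
    and the mining-model dictionary views.
    """
    m = re.search(r"\b(\d+)(?:ai|c)\b", banner)
    return m is not None and int(m.group(1)) >= 23

print(supports_indb_onnx("Oracle Database 23ai Free Release 23.0.0.0.0"))  # True
print(supports_indb_onnx("Oracle Database 19c Enterprise Edition Release 19.0.0.0.0"))  # False
```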

ORA-20000 — LOAD_ONNX_MODEL failed

The loader hands Oracle a blob and Oracle rejects it. Two distinct causes.

Cause A — opset mismatch

Oracle's embedded ONNX runtime ingests up to opset 18 (as of 23ai/26ai). If the augmented graph was exported at a newer opset, the import aborts. The loader uses onnx>=1.16 and emits opset 18 by default — this only bites if you're supplying pre-built ONNX bytes out of band.
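If you are supplying pre-built ONNX bytes, the opset can be checked before the import attempt. A sketch of the comparison logic, with the graph's opset imports represented as plain dicts rather than onnx protobuf objects (the constant and helper names are illustrative):

```python
ORACLE_MAX_OPSET = 18  # ceiling of Oracle's embedded runtime as of 23ai/26ai

def importable(opset_imports) -> bool:
    """True if every default-domain opset is at or below Oracle's ceiling.

    Each entry mirrors an onnx OperatorSetIdProto as a dict:
    {"domain": ..., "version": ...}. Only the default ONNX domain
    ("" or "ai.onnx") is capped here.
    """
    for entry in opset_imports:
        if entry.get("domain", "") in ("", "ai.onnx") and entry["version"] > ORACLE_MAX_OPSET:
            return False
    return True

print(importable([{"domain": "", "version": 18}]))  # True
print(importable([{"domain": "", "version": 21}]))  # False: rebuild at opset 18
```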

Fix A

$ pip install -U "onnx>=1.16"
# then rebuild the augmented graph; onnx2oracle load does this itself

Cause B — wrong input tensor name

The metadata JSON passed to LOAD_ONNX_MODEL declares the input tensor name. It must match the graph's actual input name. The built-in pipeline always uses pre_text. If you've modified the graph manually, either rename the tensor or update the metadata JSON to match.
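The name check can be automated before calling LOAD_ONNX_MODEL. A hedged sketch using plain dicts; the metadata key used here ("input_tensor") is illustrative, so adapt the key path to the JSON shape your pipeline actually passes:

```python
import json

def check_input_name(metadata_json: str, graph_inputs) -> bool:
    """True if the tensor name declared in the metadata JSON exists
    among the graph's actual input names.

    The metadata layout is illustrative, not DBMS_VECTOR's exact schema.
    """
    declared = json.loads(metadata_json)["input_tensor"]
    return declared in graph_inputs

meta = json.dumps({"input_tensor": "pre_text"})
print(check_input_name(meta, ["pre_text"]))   # True
print(check_input_name(meta, ["input_ids"]))  # False: rename one side to match
```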

Fix B

# inspect the graph
$ python -c "import onnx; m = onnx.load('model.onnx'); print([i.name for i in m.graph.input])"
['pre_text']   # must match the metadata JSON

ORA-01031 — insufficient privileges

The connected user can't execute DBMS_VECTOR.LOAD_ONNX_MODEL or can't create a mining model.

Fix

-- as SYSDBA or an ADMIN role
GRANT EXECUTE ON DBMS_VECTOR TO app_user;
GRANT CREATE MINING MODEL TO app_user;

On ADB, the default ADMIN user already holds these. Non-admin users need them granted explicitly.

Docker timeout on first startup

The Oracle Free image initializes the database on first run. That pulls seed data, runs startup scripts, opens tablespaces — allow three to five minutes on typical hardware.

What --wait does

onnx2oracle docker up --wait starts the compose service, then runs a SQL readiness probe inside the container every five seconds until it succeeds or the timeout elapses. The default timeout is 600 seconds; override it with --wait-timeout 1200 on slower Docker hosts.
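The probe loop is easy to reproduce outside the CLI if you need custom readiness handling. A sketch with the SQL probe stubbed out as a callable (in practice it would run the check via docker exec):

```python
import time

def wait_ready(probe, timeout_s=600, interval_s=5, sleep=time.sleep):
    """Poll probe() every interval_s seconds until it returns True
    or timeout_s elapses. Mirrors the --wait behaviour described above."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if probe():
            return True
        sleep(interval_s)
    return False

# Example: a probe that succeeds on the third attempt.
attempts = iter([False, False, True])
print(wait_ready(lambda: next(attempts), timeout_s=60, sleep=lambda s: None))  # True
```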

If the timeout isn't enough

$ onnx2oracle docker logs
# look for ORACLE_DBA or FATAL entries

Common causes:

  • Docker host is memory-starved — Oracle Free needs ~4 GB during init.
  • Port 1521 is already allocated — set ORACLE_PORT=1524 or another free host port before running docker up. The --target local shortcut uses the same variable.
  • The volume from a previous half-initialized run is corrupt — onnx2oracle docker down --volumes nukes it and starts over.
  • AArch64 host without emulation — the image is x86_64; enable Rosetta or qemu.
  • The Oracle registry image is unavailable in your environment — set ORACLE_IMAGE=container-registry.oracle.com/database/free:latest-lite or another compatible Oracle Free image before running docker up.
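For the port-conflict case specifically, you can confirm the diagnosis by testing whether the host port is bindable. A stdlib-only sketch; the ORACLE_PORT handling mirrors the variable described above:

```python
import os
import socket

def port_free(port: int, host: str = "127.0.0.1") -> bool:
    """True if we can bind the port, i.e. nothing else currently holds it."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        try:
            s.bind((host, port))
            return True
        except OSError:
            return False

port = int(os.environ.get("ORACLE_PORT", "1521"))
if not port_free(port):
    print(f"port {port} is taken; export ORACLE_PORT to a free one before docker up")
```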

HuggingFace download fails (4xx, gated models)

Two symptoms, both from the huggingface_hub client.

Symptom — 401 Unauthorized

The repo is gated. Licensing on Nomic and some of the larger BGE variants flips between open and gated; the loader passes through whatever HuggingFace says.

Fix

$ huggingface-cli login       # then accept the model's license on its HF page
$ export HF_TOKEN=hf_***       # or set the env var directly
$ onnx2oracle load nomic-embed-text-v1

Symptom — HTTP timeout or partial download

Large models (420–540 MB) on flaky networks. Two options:

  • Run onnx2oracle load again — downloads resume from the cache and retry per-file.
  • Set HF_HUB_DOWNLOAD_TIMEOUT=120 for slower links.
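Rerunning load amounts to retrying the download, with the hub cache turning each attempt into a resume rather than a restart. A sketch of that retry loop with the downloader passed in as a stub (the real call would be huggingface_hub's download function):

```python
def download_with_retries(fetch, attempts=3):
    """Call fetch() up to attempts times, returning its result on the
    first success. The hub cache makes each retry resume per-file."""
    last_err = None
    for _ in range(attempts):
        try:
            return fetch()
        except OSError as err:  # network / partial-download errors
            last_err = err
    raise last_err

# Example: a fetch that fails twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise OSError("timed out")
    return "model.onnx"

print(download_with_retries(flaky))  # model.onnx
```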

ONNX external-data sidecars

Some HuggingFace ONNX exports keep large tensors outside onnx/model.onnx in sibling files such as model.onnx_data. The builder now inspects model.onnx without loading external data, downloads the referenced sidecars from the repo's onnx/ folder, then loads the full model.
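The sidecar-discovery step boils down to reading each tensor's external-data location without pulling the tensor bytes. A sketch with tensors modeled as plain dicts; the field names loosely mirror ONNX TensorProto external_data entries:

```python
def sidecar_files(tensors):
    """Collect the distinct external-data filenames a graph references.

    Each tensor dict loosely mirrors an onnx TensorProto: tensors with
    data_location == "EXTERNAL" carry external_data entries of the form
    {"key": ..., "value": ...}, where key == "location" names the sidecar.
    """
    files = []
    for t in tensors:
        if t.get("data_location") != "EXTERNAL":
            continue
        for entry in t.get("external_data", []):
            if entry["key"] == "location" and entry["value"] not in files:
                files.append(entry["value"])
    return files

tensors = [
    {"data_location": "EXTERNAL",
     "external_data": [{"key": "location", "value": "model.onnx_data"}]},
    {"data_location": "DEFAULT"},
]
print(sidecar_files(tensors))  # ['model.onnx_data']
```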

If a sidecar download fails

The HuggingFace repo may have moved or omitted an external data file. Rerun with a clean cache and keep the exact sidecar path from the log when filing an issue.

$ rm -rf ~/.cache/huggingface/hub/models--your--model
$ onnx2oracle load your-preset --target local

Filing a new issue

If the error you hit isn't here, the GitHub issue template asks for four things:

  1. onnx2oracle version output.
  2. SELECT BANNER FROM v$version; output.
  3. Exact command you ran (redact the password, keep everything else).
  4. Full stack trace — the Python one and any ORA- line from the driver.