
Configuration Gallery

Overview

| Aspect | Details |
| --- | --- |
| Purpose | Quick pointers to common presets and overlays. |
| Audience | Users looking for ready-to-use configurations. |
| Note | Presets are repo assets, not included in wheels. |
| Source | configs/presets/ and configs/overlays/. |

These are pointers to common presets in this repository that you can start from. Presets are repo assets (not included in wheels): use flag‑only invarlock evaluate invocations when installing from PyPI, or clone this repo to reference these files.

Note: Adapter‑based flows such as invarlock evaluate with HF models require extras like invarlock[hf] or invarlock[adapters]. The core install (pip install invarlock) remains torch‑free.
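For reference, the install commands look like this (a sketch; the exact dependencies each extra pulls in depend on your invarlock version):

```shell
# Core install — stays torch-free:
pip install invarlock

# With Hugging Face adapter support (quote so the shell does not glob the brackets):
pip install "invarlock[hf]"

# Broader adapter extras:
pip install "invarlock[adapters]"
```

Pick the extras only when you actually need adapter-based flows; the core install is enough for flag-only evaluation.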

The evaluate examples below use the runtime container by default. Add --execution-mode host only for host-side workflows that intentionally bypass that boundary.
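A host-side run might look like the following (a sketch reusing the gpt2/preset values from the examples below; add the flag only when you intend to bypass the runtime container):

```shell
# --execution-mode host bypasses the runtime-container boundary — use deliberately.
invarlock evaluate --allow-network --baseline gpt2 --subject /path/to/edited \
  --preset configs/presets/causal_lm/wikitext2_512.yaml \
  --profile ci --execution-mode host
```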

Most preset files intentionally set small preview_n / final_n values in their YAML so they stay fast and portable for repo smoke tests. For balanced-tier evaluations that are expected to clear the normal token-floor gates, keep the preset as-is and run it with --profile ci or --profile release.
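As a rough illustration of those knobs (a hypothetical fragment — the real nesting and surrounding keys vary by preset, so consult an actual file under configs/presets/):

```yaml
# Hypothetical preset fragment: small counts keep repo smokes fast.
preview_n: 8
final_n: 16
```

The --profile flag supplies the runtime window counts, so these small file-level values do not need to be edited for ci or release runs.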

Presets (Runnable)

Causal LM (decoder-only)

| Preset | Use Case | Model Type | Dataset |
| --- | --- | --- | --- |
| configs/presets/causal_lm/wikitext2_512.yaml | Standard evaluation | Decoder-only causal | WikiText-2 |

When to use: Primary preset for causal language models. 512-token sequences provide good coverage while keeping runtime reasonable.

```shell
invarlock evaluate --allow-network --baseline gpt2 --subject /path/to/edited \
  --preset configs/presets/causal_lm/wikitext2_512.yaml --profile ci
```

Masked LM (BERT, RoBERTa, etc.)

| Preset | Use Case | Model Type | Dataset |
| --- | --- | --- | --- |
| configs/presets/masked_lm/wikitext2_128.yaml | Standard MLM evaluation | BERT/RoBERTa | WikiText-2 |
| configs/presets/masked_lm/synthetic_128.yaml | Offline testing | BERT/RoBERTa | Synthetic |

When to use: MLM presets for BERT-family models. Use synthetic preset when network access is unavailable or for CI smoke tests.

```shell
invarlock evaluate --allow-network --baseline bert-base-uncased --subject /path/to/edited \
  --preset configs/presets/masked_lm/wikitext2_128.yaml --profile ci
```

Seq2Seq (T5, etc.)

| Preset | Use Case | Model Type | Dataset |
| --- | --- | --- | --- |
| configs/presets/seq2seq/synth_64.yaml | Quick seq2seq tests | T5 | Synthetic |

When to use: Encoder-decoder models. Synthetic data keeps runs offline and fast for smoke testing.
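A run might look like this (a sketch mirroring the examples above; t5-small is an illustrative baseline identifier, and --allow-network is omitted here on the assumption that the synthetic preset needs no dataset download):

```shell
invarlock evaluate --baseline t5-small --subject /path/to/edited \
  --preset configs/presets/seq2seq/synth_64.yaml --profile ci
```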

Edit Overlays (Demo RTN Quantization)

These overlays apply the built-in quant_rtn edit for demonstration. For production, use Compare & evaluate (BYOE) with your own pre-edited checkpoint instead.

| Overlay | Scope | Use Case |
| --- | --- | --- |
| configs/overlays/edits/quant_rtn/8bit_attn.yaml | Attention layers only | Conservative quantization demo |
| configs/overlays/edits/quant_rtn/8bit_full.yaml | All linear layers | Full model quantization demo |
| configs/overlays/edits/quant_rtn/tiny_demo.yaml | Minimal layers | Quick smoke test |

Example (demo edit):

```shell
invarlock evaluate --allow-network --baseline gpt2 --subject gpt2 \
  --preset configs/presets/causal_lm/wikitext2_512.yaml \
  --edit-config configs/overlays/edits/quant_rtn/8bit_attn.yaml \
  --profile ci
```

Profiles

Profiles control window counts and bootstrap depth:

| Profile | Windows | Bootstrap | Use Case |
| --- | --- | --- | --- |
| ci | 240/240 | 1200 | Standard CI evaluation |
| release | 400/400 | 3200 | Production releases |
| ci_cpu | 120/120 | 1200 | CPU-only environments |

Tips

- Use --profile ci|release|ci_cpu to apply runtime window counts and bootstrapping defaults.
- Keep seq_len = stride for deterministic non‑overlapping windows.
- Combine presets with edit overlays using multiple -c flags or --edit-config.
- For custom data, see Bring Your Own Data.
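Stacking a preset and an edit overlay with repeated -c flags might look like this (a sketch reusing the gpt2 demo values from this page; configs are assumed to be applied in the order given):

```shell
invarlock evaluate --allow-network --baseline gpt2 --subject gpt2 \
  -c configs/presets/causal_lm/wikitext2_512.yaml \
  -c configs/overlays/edits/quant_rtn/8bit_attn.yaml \
  --profile ci
```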