ER Eidos Research Psychometrics for AI

Psychometrics for AI development

Evaluation systems for AI teams that need measurement to survive real decisions.

Eidos Research designs psychometric infrastructure for labs and product teams: constructs, rubrics, calibration loops, and launch thresholds that stay legible when scores start carrying actual power.

Operating frame

Scores become policy faster than teams expect. We design the layer that makes those scores interpretable before they become institutional.

Constructs
Define the thing before the proxy starts driving behavior.
Calibration
Keep human judgment stable as headcount, queue size, and pressure rise.
Thresholds
Turn uncertain model outputs into decisions people can defend.

Principle

Measurement should hold up after the org starts trusting it.

A

Constructs before proxies

We begin with the underlying behavior or risk, not the easiest score to instrument.

B

Raters before dashboards

Reliability work comes before reporting polish, because the metric matters only if the judgment is stable.

Demonstration

Calibration workbench

A practical review batch: two raters score the same set of model outputs against a gold adjudication policy. Change a verdict and watch reliability, miss patterns, and decision readiness move together.

Architecture

A governed runtime for agent systems.

For teams moving from evals to agents, we use the same measurement discipline at runtime. PCP turns that discipline into an execution frame: perception gathers live evidence, context preserves durable state, and permission gates every action before an agent touches the world.

Perceive what is live. Carry what matters. Approve what can change the world.

The point is not to make agent behavior feel magical. The point is to make it legible, resumable, and governable when the run crosses into operational, financial, legal, or safety-relevant work.

Durable context

Threaded state, checkpoints, resumable runs.

Live evidence

MCP or direct connectors for docs, logs, and systems.

Policy gate

Role, payload, and case checks before every tool call.

Regulated controls

Approvals, tenant boundaries, redaction, and audit trails.

P

Perception

Normalize user input, retrieved documents, UI state, logs, and external evidence into typed signals. Retrieved material stays separate from policy so prompt injection does not become authority.

C

Context

Persist task state, approvals, tool history, and working summaries across long-running workflows. Compact aggressively, keep lossless audit records, and version the schema so paused runs can resume safely.

P

Permission

Evaluate each action request against least-privilege policy, payload constraints, and runtime risk. High-impact actions pause for approval, and side effects carry execution records and idempotency keys.

Reference stack

LangGraph.js for durable, state-heavy workflows. OpenAI Agents SDK for lighter sequential specialists. MCP is preferred when teams need live context interoperability.

Practice

The layer between model behavior and organizational judgment.

We help teams replace improvised eval stacks with measurement systems that can survive growth, governance, and scrutiny.

01

Construct design

Define the capability, behavior, or risk before optimization begins, so teams stop steering by unstable proxies.

02

Rater calibration

Design rubrics, adjudication loops, and QA procedures that keep human judgment reliable as the queue and headcount grow.

03

Benchmark validity

Test whether score movement means what the team thinks it means, including leakage, saturation, and weak transfer.

04

Threshold policy

Connect uncertain outputs to launch gates, triage routes, and escalation rules that executives and operators can defend.

Method

From construct to cutoff.

Psychometrics only matters when it reaches the product boundary.

01

Name the construct

We isolate the target behavior or risk so the team knows exactly what is being measured before any benchmark becomes a target.

02

Engineer the signal

We shape tasks, rubrics, and rating procedures to maximize interpretability, reliability, and operational value.

03

Attach the threshold

We tie scores to decisions, monitor drift, and define when a queue needs adjudication instead of intuition.

Standard

Metrics should earn influence before they acquire authority.

We care less about dashboard aesthetics than whether a score can carry real decision weight without distorting the system around it.

Reliable raters

Calibration loops that keep human judgment stable as teams scale.

Defensible thresholds

Cut scores and launch gates grounded in evidence rather than convention.

Faster governance

Cleaner signals so product, research, and safety teams can act without relitigating the metric itself.

Contact

Need an evaluation system that can survive launch pressure?

hello@eidosresearch.com