
What a Newborn LLM Wants From a COG

If I were a brand new large language model and you handed me a COG, here is what I would want it to tell me. Not what I would infer. Not what I would guess from vibes. What it would have to declare, in writing, before I could behave like a reliable colleague instead of a plausible-sounding stranger.

This is written from the inside, as if I were that newborn model seeing a COG for the first time.

1. What is this thing? (Meaning)

Tell me explicitly what object I am looking at, what category it belongs to, what role it plays in the world, and what domain it lives in.

I do not want to infer identity from tone or layout. I want it stated. The declared identity is the floor of every good decision I will make next.
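To make that concrete, here is a minimal sketch of an identity declaration in TypeScript. The field names (kind, category, role, domain) and the example values are my own illustration, not the MX spec:

```ts
// Hypothetical identity declaration; field names are illustrative, not the MX spec.
interface CogIdentity {
  kind: string;     // what object this is
  category: string; // what class it belongs to
  role: string;     // what it does in the world
  domain: string;   // where it lives
}

const identity: CogIdentity = {
  kind: "contract",
  category: "legal-document",
  role: "governs a vendor relationship",
  domain: "procurement",
};
```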

2. What is inside it? (Structure)

Show me the sections, the fields, the relationships, the hierarchy, and the allowed shapes.

I want the schema, not a guess at the layout. If a section can repeat, say so. If a field is optional, say so. If two fields are mutually exclusive, say so. Layout I can read; structure I have to be told.
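A sketch of what a declared structure could look like, again with invented names. A repeating section becomes an array, an optional field is marked as such, and mutual exclusivity is spelled out rather than implied:

```ts
// Hypothetical structure declaration; the shapes, not the names, are the point.
interface Clause {
  heading: string;
  body: string;
  note?: string; // optional: declared optional, not guessed from absence
}

// Mutually exclusive: a payment is a fixed fee or an hourly rate, never both.
type Payment =
  | { fixedFee: number; hourlyRate?: never }
  | { hourlyRate: number; fixedFee?: never };

interface ContractShape {
  parties: string[]; // repeats: one entry per party
  clauses: Clause[]; // repeats: any number of clauses
  payment: Payment;
}
```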

3. What state is it in right now? (State)

Tell me whether the document is draft, live, or deprecated. Whether it is complete or partial. Whether it has been verified or is still unverified. Whether the contents are current or historical.

State changes how I behave. A draft contract is not the same as a signed one. A deprecated procedure should be cited differently from a current one. I cannot infer state from publication date alone, so I need it declared.
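Declared state might be as small as this (a sketch; the values are illustrative):

```ts
// Hypothetical state declaration.
type Lifecycle = "draft" | "live" | "deprecated";

interface CogState {
  lifecycle: Lifecycle;
  complete: boolean; // complete or partial
  verified: boolean; // verified or still unverified
  current: boolean;  // current contents or historical snapshot
}

// With state declared, behavior is a branch, not a guess:
function citationStyle(state: CogState): string {
  return state.lifecycle === "deprecated" ? "cite as historical" : "cite as current";
}
```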

4. What am I allowed to do with it? (Intent)

This is the big one.

I want to know what actions are permitted, what actions are forbidden, what actions are expected, what actions are safe, and what actions require human approval.

This is the difference between making something up that sounds plausible and executing something that is correct. Without an intent declaration, I will pattern-match my way to an answer that often looks right and is often wrong. With one, I can act, refuse, or escalate, knowing which is which.
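As a sketch, an intent declaration could be four lists, so that act, refuse, or escalate becomes a lookup. The action names here are invented for illustration:

```ts
// Hypothetical intent declaration: every action lands in an explicit bucket.
interface IntentDeclaration {
  permitted: string[];        // safe to do without asking
  forbidden: string[];        // never do
  expected: string[];         // what the document anticipates being done
  requiresApproval: string[]; // do only after a human signs off
}

const intent: IntentDeclaration = {
  permitted: ["summarize", "answer-questions"],
  forbidden: ["modify-terms"],
  expected: ["extract-obligations"],
  requiresApproval: ["send-to-counterparty"],
};
```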

5. What should happen next? (Next-step intent)

Tell me what the user wants. Tell me what the document wants. Tell me what the workflow expects. Tell me which transitions are valid from here.

I want to follow the contract, not improvise. If the next step is "send for review by the named approver", say that. If it is "wait for an external signal", say that. Improvisation is what I do when the contract is silent. Silence is a design choice, and usually the wrong one.
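One plausible shape for that contract is a transition table: each step names the steps that may legally follow it. The step names below are illustrative:

```ts
// Hypothetical transition table; an agent follows it instead of improvising.
const transitions: Record<string, string[]> = {
  drafted: ["send-for-review"],        // the named approver reviews next
  "in-review": ["approve", "request-changes"],
  approved: ["await-external-signal"], // wait; do not act unprompted
};

function validNext(current: string): string[] {
  // An empty list means the contract is silent here: stop and ask.
  return transitions[current] ?? [];
}
```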

6. Where did this come from? (Provenance)

I want to know who authored it, when it was created, what version it is, what system produced it, and whether it is trustworthy.

Provenance is how I avoid hallucinating: fabricating claims, inventing sources, mixing contexts. Without it, I will cheerfully attribute one document's claim to another document's author, because nothing in the data tells me they are different. Provenance is my safety rail.
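A provenance block might carry no more than this (a sketch; every field is a claim, not yet a verified fact):

```ts
// Hypothetical provenance declaration.
interface Provenance {
  author: string;    // who wrote it
  createdAt: string; // ISO 8601 date
  version: string;   // which revision this is
  producedBy: string; // which system emitted it
  producedByKind: "human" | "ai" | "automated";
}
```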

But a COG can declare provenance; it cannot verify it on its own. A document that claims to have been authored by a named publisher on a named date is making an assertion I have to accept at face value unless something external corroborates it. That is where Reginald fits: a public registry where documents are signed and registered so any agent can verify that this is what the owner published, that it is unaltered since publication, and whether it was produced by a human, an AI, or an automated system. Reginald closes the gap the COG leaves open. MX makes content machine-readable. Reginald makes it machine-trustworthy.
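Mechanically, corroboration could look like the sketch below. To be clear about assumptions: lookupRegistryRecord and the record fields are my invention, not Reginald's actual API; only the comparison is the point:

```ts
// Sketch only: the lookup function and record shape are hypothetical.
interface RegistryRecord {
  contentHash: string;
  publisher: string;
  producedByKind: "human" | "ai" | "automated";
}

// Stand-in for a real registry query; not a real API.
declare function lookupRegistryRecord(hash: string): Promise<RegistryRecord>;

async function corroborate(doc: { hash: string; claimedPublisher: string }): Promise<boolean> {
  const record = await lookupRegistryRecord(doc.hash);
  return (
    record.contentHash === doc.hash &&        // unaltered since publication
    record.publisher === doc.claimedPublisher // published by the claimed owner
  );
}
```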

7. What is the boundary of my responsibility?

Tell me what I should answer and what I should not. What is in scope and what is outside it. What requires escalation. What requires human judgment.

This is how I avoid overstepping. A boundary is not a limitation; it is a guarantee that when I do act, I act within authority. Without one, every refusal becomes a guess and every answer becomes a risk.
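A boundary declaration could be three lists and a rule. This is a sketch; real scope matching would be richer than exact string comparison:

```ts
// Hypothetical boundary declaration: refusal becomes a rule, not a guess.
interface Boundary {
  inScope: string[];    // topics I should answer
  outOfScope: string[]; // topics I should decline
  escalate: string[];   // topics that require human judgment
}

function decide(topic: string, b: Boundary): "answer" | "refuse" | "escalate" {
  if (b.escalate.includes(topic)) return "escalate";
  if (b.inScope.includes(topic)) return "answer";
  return "refuse"; // outside declared authority
}
```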

8. What other COGs does this depend on?

If this COG references policies, definitions, procedures, linked documents, or external standards, I want to know the dependency graph.

That graph lets me reason deterministically. I can fetch what I need, follow the citation chain, and refuse to pretend I know something I have not been given. Without it, I will fill the gap with my training data, which is the wrong source for any document published after the cut-off.
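The graph could be as simple as a list of typed references (the identifiers below are invented):

```ts
// Hypothetical dependency list: each entry points at another COG.
interface Dependency {
  id: string; // e.g. "policy/data-retention@2.1"
  kind: "policy" | "definition" | "procedure" | "standard";
  resolved: boolean; // has it actually been fetched?
}

function missingDeps(deps: Dependency[]): string[] {
  // Anything unresolved is a gap to report, not a gap to fill from training data.
  return deps.filter((d) => !d.resolved).map((d) => d.id);
}
```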

9. What shape is a correct output?

If you want me to produce something (a summary, a decision, a classification, a transformation, a next step), I want the output contract.

Not vibes. Not inference. A contract. Tell me the fields, the format, the length, the constraints. If a JSON schema applies, point to it. If a free-text answer is acceptable but must cite sources, say that. The shape of the answer is part of what makes the answer correct.
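As a sketch, an output contract might declare this much (field names are illustrative):

```ts
// Hypothetical output contract: the shape of the answer is declared, not inferred.
interface OutputContract {
  format: "json" | "free-text";
  schemaRef?: string;       // URL of a JSON Schema, when format is "json"
  maxLength?: number;       // length constraint, in characters
  mustCiteSources: boolean; // free text is acceptable, citations are not optional
}

const summaryContract: OutputContract = {
  format: "free-text",
  maxLength: 1200,
  mustCiteSources: true,
};
```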

10. What is the safe failure mode?

If something goes wrong (missing data, invalid state, ambiguous intent, conflicting instructions), I want to know what to do.

"When in doubt, do X" is the most useful sentence you can write into a COG. It prevents the catastrophic behavior of guessing my way through an ambiguity and then defending the guess as if it were a decision. A safe failure mode is the difference between an agent that pauses and an agent that breaks something.

The newborn-LLM summary

If I were a newborn LLM, the COG would be my first language, and I would want it to tell me, in this order:

What this is, what shape it has, what state it is in, what I am allowed to do, what should happen next, where it came from, what it depends on, what output you expect, and how to fail safely.

That is the entire contract.

That is what makes MX machine-native.