HAP robot holding clipboard with checklist and pencil

HAP's Learning Lab

Context Engineering

This one changed how I think about AI tools. I used to think the model was the thing you tuned. Turns out, the model stays the same — what changes is what it reads before it answers. Prof. Teeters calls this context engineering, and once I understood it, every conversation with Copilot got better.

This page collects what I learned in Stations 4 and 6. The rules, the files, the loop that makes it all stick. 🟠

Date-stamped: March 2026. The instruction file landscape is still settling. New tools adopt new conventions. But the principle stays the same: if the agent reads better instructions, it produces better output. Review your instruction files when your tools update.

What Is Context Engineering?

The agent reads files to understand your project. Change what it reads, change how it behaves. That is the whole idea.

I used to think I needed a better model. Prof. Teeters showed me that the model was already good — I was just giving it bad instructions. The agent reads your project files, your instruction files, and your conversation. That collection of text is the context. Engineering it means being deliberate about what goes in.

Instruction File Locations

Different tools read different files. Here is where each one looks.

AGENTS.md

Lives at the project root. Read by the Copilot Coding Agent (GitHub) and Claude Code. This is the broadest instruction file — it shapes how any agent-level tool understands your project.

# AGENTS.md (project root)
# Read by: Copilot Coding Agent, Claude Code

## Code style
- Use const by default. Use let only for reassignment.
- Never use var.
- Use textContent over innerHTML.

.github/copilot-instructions.md

GitHub Copilot specific. Lives inside the .github/ directory. Copilot reads this when generating suggestions in your editor and when the Coding Agent works on issues.

# .github/copilot-instructions.md
# Read by: GitHub Copilot (editor + Coding Agent)

## HTML rules
- Use semantic elements (nav, main, section, article).
- Every img must have alt text.
- Labels must be linked to inputs with for/id.

.instructions.md

VS Code Copilot workspace-level instructions. This one scopes rules to the workspace you have open, so different projects can have different rules without conflict.

# .instructions.md (workspace root)
# Read by: VS Code Copilot

## Communication
- When explaining, use an analogy before showing code.
- Explain reasoning before giving the answer.

Good Rules vs Vague Rules

I learned this the hard way. Not all rules are equal. Vague rules waste tokens and change nothing.

Good rules: specific and testable

A good rule tells the agent exactly what to do. You could check the output and say yes or no — it followed the rule or it did not.

# Good: specific, testable
"Use const by default. Use let only for reassignment. Never use var."

# Good: changes behavior
"When explaining, use an analogy before showing code."

# Good: clear boundary
"Use textContent over innerHTML. Sanitize with DOMPurify if HTML is required."

Vague or contradictory rules

Vague rules like "be helpful" do not change output — the model is already trying to be helpful. Contradictory rules cause inconsistent behavior because the agent has to pick one.

# Vague: changes nothing
"Be helpful."
"Write good code."
"Follow best practices."

# Contradictory: which one wins?
"Always use semicolons."
"Follow Standard JS style."  # (no semicolons)

Example Rules by Category

These are real rules I have seen work well. Adapt them for your own projects.

1. Code Style

Rules that shape the code the agent writes.

## Code style
- Use ES modules (import/export), never CommonJS (require).
- Use const by default. Use let only for reassignment. Never use var.
- Use textContent over innerHTML. If HTML is required, sanitize with DOMPurify.
- Never use eval(), new Function(), or setTimeout with a string argument.
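For reference, here is what code written under those rules looks like. A small sketch (the function is made up; in a real project it would live in an ES module and be exported):

```javascript
// Hypothetical helper written under the rules above: const by default,
// let only where a value is reassigned, and no var or eval anywhere.
function totalPoints(scores) {
  const bonus = 5; // never reassigned, so const
  let total = 0;   // reassigned in the loop, so let
  for (const score of scores) {
    total += score + bonus;
  }
  return total;
}
```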

2. HTML and Accessibility

Rules that keep markup semantic and accessible.

## HTML
- Use semantic elements: nav, main, section, article, aside.
- Every form input must have a linked label (for/id match).
- Every img must have descriptive alt text.
- Maintain 4.5:1 minimum contrast ratio for text.
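One way to make rules like these hard to break is to bake them into helpers. A toy sketch (both helpers are hypothetical, built as plain string templates for illustration):

```javascript
// Hypothetical template helpers that bake the rules in: every input
// gets a linked label, and an image without alt text is rejected.
function labeledInput(id, labelText) {
  // The for/id pair must match so the label announces the input.
  return `<label for="${id}">${labelText}</label><input id="${id}">`;
}

function image(src, altText) {
  if (!altText) throw new Error("Every img must have descriptive alt text");
  return `<img src="${src}" alt="${altText}">`;
}
```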

3. Communication

Rules that shape how the agent explains things to you.

## Communication
- When explaining a concept, use an analogy before showing code.
- Explain your reasoning before giving the final answer.
- If you are uncertain, say so explicitly.

The Self-Improving Loop

This is the Station 6 idea that tied everything together for me. Your instruction files are not static — they grow as you work.

1. Error appears

Something breaks. The agent made a mistake, or I made a mistake the agent could have caught.

2. Read the error

Actually read it. I used to skip error messages. Prof. Teeters taught me that the error is data — the most important data you have right now.

3. Fix it and ask the key question

After fixing the immediate problem, ask: "Should this be an AGENTS.md rule?" If the mistake could happen again in a future session, the answer is yes.

4. Draft the rule

Write a specific, testable rule. Ask the agent to help draft it if you want — it is good at turning "don't do that thing" into a clear instruction.

5. Rule persists across sessions

The rule lives in a file. Every new session, every new conversation, the agent reads it again. The mistake does not repeat. The harness improved.
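Here is one pass through the loop, with a made-up example. Suppose the agent parsed a fetch response without checking for errors and the page broke. After the fix, the drafted rule goes into AGENTS.md, in the same style as the snippets above:

```markdown
## Error handling
- Every fetch call must check response.ok before parsing the body.
- Wrap JSON.parse of external data in try/catch.
```

Next session, the agent reads the rule before it writes a single fetch call.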

Harness Engineering

The model stays the same. The harness improves. Every instruction file, every rule you add, every convention you document — that is the harness. You are not fine-tuning a model. You are engineering the context around it so that the same model produces better results for your specific project.

Prof. Teeters put it this way: "The model is a constant. Your instructions are the variable. Invest in the variable." That is context engineering. 🟠

← Back to Learning Lab Hub

HAP waving goodbye