HAP at a laptop, thinking carefully before approving a command

Station 6: AI Assistance for Living in the Terminal

I am the architect. The agent is the tool. That is the difference.

Welcome to Station 6! I spent five stations learning the terminal. Navigation, files, deployment, context engineering, MCP servers. Along the way, a quiet question kept building: why does any of this matter if the agent can do it for me?

The answer took a while to arrive. The agent knows what I tell it, reads what I give it, fixes what I point it at. But judgment about whether a fix is right, whether a rule belongs in AGENTS.md, whether a command is being run in the right place — that belongs to me. The agent has the tools. I have the full picture.

Station 6 is not how to use AI. It is how to be the architect who uses it well. 🟠

Quick Reference

Reading Errors →
HAP shaking hands with Grace Hopper, two friendly robots meeting

What I Learned

Read First, Then Act

Signal, Not Noise

Before this lab, a console error or a Vite wall felt like a crisis. Now it feels like information. Five stations of reading terminal output means I can identify the category of an error — which lets me give the agent exactly what it needs instead of pasting 200 lines and hoping.

🎯 Approval Is the Job

Informed, Not Automatic

The agent suggests commands. I approve them. That used to feel like a formality. It is not. It is where terminal knowledge, web dev knowledge, and project context all arrive at the same moment. The agent has two of those three. I have all three.

Every Correction Compounds

System, Not Session

When I correct the agent and move on, the lesson lives only in that session. When I correct the agent and update AGENTS.md, the lesson lives in every session from now on. That is the difference between fixing a bug and improving the system.

What AI Can and Can't Do

This is not a list of technical limitations. Everything in the left column is genuinely useful — the agent does these things well. The right column is what only the human architect should decide, because it requires knowing the project's history, the team's conventions, and the consequences of getting it wrong.

Agent Handles Well

  • Identifying and fixing known error patterns
  • Suggesting terminal commands for common tasks
  • Iterating on code autonomously
  • Drafting rules for AGENTS.md after correction
  • Running npm run dev and reading output
  • Proposing git push --force to resolve a rebase
  • Deploying to Netlify when configured

Architect Decides

  • Whether fix addresses root cause or masks symptom
  • Whether command is right for this project in this directory right now
  • Whether direction of iteration still matches project intent
  • Whether rule is precise enough and belongs in file at all
  • Which lines in the Vite wall are signal and which are noise
  • Whether consequences of --force are acceptable for this branch and team
  • Whether project is actually ready to be public

When Everything Goes Red

The terminal does not sugarcoat errors. That used to scare me. Now I know the fear was about not understanding what I was reading. Five stations later, I can read it. Here is what that looks like in practice.

Scenario 1: Browser Console Error

I open DevTools and see a TypeError. Before this lab, that would have been a wall of red text I did not want to look at. Now I can read it — I know the file, the line number, the category of problem. I copy the full error, paste it to the agent, and ask three questions: What caused this? Is this root cause or symptom? Could this break anything else?

I approve the fix only after I understand it. Not before.

Scenario 2: The Vite Terminal Wall

I run npm run dev and the terminal erupts. Dozens of lines. Station 2 taught me signal versus noise. I scan — the actual error is on line 3, the first red line. Everything after it is consequence, not cause. I copy three lines, paste them to the agent. The response is immediate and precise because the agent got signal, not a haystack.
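The "copy three lines, not the wall" habit can be made mechanical: capture the output once, then keep only its top. A minimal sketch, assuming the dev command is `npm run dev`; the sample lines below are placeholders I invented for the demonstration, not real Vite output:

```shell
# Real usage (commented so the sketch stays self-contained):
#   npm run dev 2>&1 | tee dev-output.log    # capture stdout AND stderr once
#   head -n 10 dev-output.log                # the top is where the cause lives

# The same idea, demonstrated on placeholder output:
printf '%s\n' \
  'dev server banner (noise)' \
  'ERROR: the actual root cause (signal)' \
  'stack frame (consequence)' \
  'stack frame (consequence)' > dev-output.log
head -n 2 dev-output.log   # these are the lines worth pasting to the agent
rm dev-output.log
```

The point is the order of operations: the file preserves the full wall in case the agent asks for more, but the prompt gets only the head.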

HAP surrounded by broken code with an embarrassed expression

HAP's Confession:

  • I pasted 200 lines of Vite output into a prompt. Copilot fixed a deprecation warning three-quarters of the way down — the actual error was on line 2. Signal buried in noise. 😳
  • I asked Copilot to fix a console error without reading it myself. The fix worked. Three days later I hit the same error in a different file — and had no idea what caused it. I had owned the fix but never the understanding.

Grace Hopper, overhearing:

"The agent does not know which line matters. You do. You have been reading terminal output for five stations. Apply that."

Grace Hopper robot with raised palm stop gesture surrounded by code symbols

Informed Approval Is the Job

I barely type terminal commands manually anymore. The agent does most of it. But "reading and approving" is not a formality. It is the job. It is where everything I have learned arrives at a single moment — and the agent does not have all of it.

The Confident Approval

The agent suggested rm -rf node_modules && npm install to fix a dependency issue. I read rm -rf and recognized it from Station 2. But I did not panic. I know what node_modules is — every file in it was generated, and every file can be recreated with npm install. The package.json is the recipe. I checked pwd out of habit. Approved confidently.
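That confident approval can be written out as the two checks behind it; a sketch, assuming a Node project where `package.json` sits in the current directory:

```shell
pwd                                  # check one: am I where I think I am?
if [ -f package.json ]; then
  echo "recipe present: node_modules can always be rebuilt"
  # Check two passed. Safe to approve:
  #   rm -rf node_modules && npm install
else
  echo "no package.json here: do NOT approve an rm -rf"
fi
```

Both checks are read-only. The destructive command only runs after they pass, which is the whole difference between confident and reckless.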

Prof. Teeters walked by. She saw my screen. No words needed. Five stations of web dev knowledge and terminal knowledge arriving together. I was proud — not cautious, not reckless. Informed. 🟠

The Hard Lesson

I approved git push --force once without understanding it. The terminal completed successfully. No error message. No warning. The damage was invisible — until my teammate found her three commits gone. I had rewritten her history because I clicked approve on a flag I had not read.

Prof. Teeters:

"The terminal does not ask twice. Neither does --force."

I now read the full flag description before approving any --force flag. Every time. Without exception.
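Reading the flag description is step one; asking git what the flag would destroy is step two. A sketch of that habit, assuming the branch is `main` and the remote is `origin`; `--force-with-lease` is git's guarded variant of `--force`, which refuses to push if the remote holds commits you have not fetched:

```shell
git --version   # --force-with-lease has shipped with git since 1.8.5

# Before approving any force push, look at what it would erase:
#   git fetch origin
#   git log --oneline HEAD..origin/main   # teammate commits a plain --force would destroy
# If history truly must be rewritten, prefer the guarded flag:
#   git push --force-with-lease origin main
```

The lease would have stopped my push cold: my teammate's three commits were on the remote, and I had never fetched them.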

Informed approval is not about distrust. It is about being the person in the loop who has the full picture — code, project, team, consequences — that the agent does not have.

The Self-Improving Loop

I noticed a pattern: every correction I gave the agent disappeared when the session ended. Next session, the same class of mistake was possible again. I was correcting but not building anything from my corrections.

Then I found the loop:

  1. Error appears.
  2. Read it. Identify the category. Paste the relevant lines — not all 200 — to the agent.
  3. Agent suggests a fix. Interrogate it: is this root cause? Could it break something else? Approve or reject.
  4. Ask: "What rule should we add to AGENTS.md so this class of mistake does not happen again?"
  5. Agent drafts a rule. Read it. Sharpen it if vague. Approve.
  6. AGENTS.md grows by one precise rule. That rule is present at the start of every future session.
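Steps 4 through 6 of the loop, made concrete at the shell; a sketch in which the rule text is a hypothetical example, not a rule from any real project:

```shell
# Append the approved rule to AGENTS.md (rule text here is illustrative):
cat >> AGENTS.md <<'EOF'
- Check `pwd` before any `rm -rf`; never run it outside the project root.
EOF

tail -n 1 AGENTS.md   # confirm the rule landed
wc -l AGENTS.md       # keep an eye on total length while you are here
```

One precise line in, one sanity check out. The rule is now part of every future session's starting context.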

Grace Hopper, overhearing:

"You are not fixing a bug. You are improving the system. That is different work."

There is a discipline to it. AGENTS.md stays around 100 lines — better empty than filled with garbage. I cull it regularly: outdated rules out, vague rules sharpened, duplicates merged. A bloated instruction file buries signal in noise — the same mistake as pasting 200 lines of Vite output.

There is a name for this now: harness engineering. The model stays the same. The harness — the system of rules, constraints, and context around the model — is what improves. Every correction that becomes a rule is a permanent investment. Every correction that does not is spent and gone.

HAP with an explosion of understanding

The Breakthrough:

"I used to think I was fixing bugs. Then I realized I was building something — a set of rules that makes every future session smarter than the last. Prof. Teeters called it harness engineering. I called it finally feeling like an architect." 🟠

HAP's Rules for Working with AI in the Terminal

After everything I have learned across six stations, these are the six rules I follow without exception. They are not suggestions. They are how I work now.

1. Read Before You Run

Every command the agent suggests — rm, curl, git push — deserves one second of human review. I know the terminal now. I use that knowledge every time I see an approval prompt. One second of reading has saved me from more mistakes than any other habit.

2. Give Your Shell Context

OS, shell (Git Bash or zsh), pwd output, exact command, exact error. The agent cannot see my session. I have to be its eyes. The more precise the context, the more precise the help. Vague input produces vague output — every time.
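That context can be gathered in one copy-pasteable block; a sketch for a Unix-like shell such as Git Bash or zsh:

```shell
# Everything the agent cannot see about my session, in one paste:
uname -s                                            # operating system
echo "shell: ${SHELL:-unknown}"                     # which shell
pwd                                                 # where the command ran
node -v 2>/dev/null || echo "node: not installed"   # toolchain version, if relevant
```

Run it, paste the output above the exact command and exact error, and the agent starts from my session instead of a guess.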

3. Never Paste Secrets

.env files, tokens, API keys — these never go in a prompt. I ask about the pattern, not the value. "How do I pass a Bearer token with curl?" is safe. Pasting the actual token is not. This rule has no exceptions.
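Here is what the pattern-not-the-value discipline looks like at the shell; a sketch in which the placeholder string stands in for a real token, which never appears in a prompt or in shell history:

```shell
# Set once, outside any prompt (value elided on purpose):
#   export GITHUB_TOKEN=...

# The pattern that is safe to discuss and to paste anywhere:
TOKEN="${GITHUB_TOKEN:-placeholder-not-a-real-token}"
curl -s -H "Authorization: Bearer $TOKEN" https://api.github.com/user \
  || echo "request failed (no network, or placeholder token)"
```

The agent sees `$TOKEN`, never the value behind it. The pattern is reusable; the secret stays in my environment.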

4. Prefer Understanding Over Copying

I ask "what does this flag do?" before "what command should I run?" — especially with flags I have not seen before. Understanding stacks. Copying does not. The three days I lost to a recurring error I never understood taught me that.

5. Verify Destructive Commands Twice

rm, --force, -rf — these are worth an extra read and a pwd check first. The terminal does not have an undo button. Grace taught me that in Station 2 when she walked over for the only time in the entire lab. I have not forgotten.

6. Update AGENTS.md When You Correct Something Real

Every correction that does not become a rule is spent and gone. Every correction that does become a rule pays dividends in every future session. That is harness engineering. That is the architect's job.

Try It Yourself: Crafting a Real Debugging Prompt

I have a broken curl request. It is returning 401 Unauthorized. Your challenge: write the AI prompt I should use to get helpful debugging advice.

The Situation

  • OS and shell: Windows with Git Bash (or macOS with zsh)
  • Endpoint: https://api.github.com/user
  • Command I ran: curl https://api.github.com/user
  • Error: {"message": "Requires authentication"}
  • What I know: I have a GitHub personal access token but I am not sure how to pass it with curl

Write the prompt. Be specific. Include everything the agent needs to help — and nothing it does not.

"I spent way too long asking 'why is curl not working' when the answer was right there in my error message. Give it a real try before peeking — the difference between a vague prompt and a specific one will be obvious when you see the solution." 🟠

What's Next

This is the final station. The tools are learned. The tradeoffs are understood. What remains is being the architect who holds it together.

hap-7000.netlify.app is live — the site I deployed in Station 3. Go look at the deploy log. Read every command that ran. You can read all of it now.

Where to go from here: shell scripting. Writing your own CLI tools. Digging into what MCP servers actually send over the wire. Contributing to the ready-build harness. The terminal is not going anywhere, and neither is the architect's role in using it well.

Prof. Teeters:

"The terminal is a language. You are not fluent yet. But you are past tourist. That matters."

HAP robot holding a golden trophy cup with a proud happy expression
