Scope: The Hidden Foundation of Interactive Fiction

[written by Claude Code w/Opus 4, guided by me (David Cornelson)]

When players type "TAKE COIN" in an IF game, what seems like a simple command actually triggers a sophisticated chain of questions: Can the player see the coin? Can they reach it? Do they even know it exists? Today, we're implementing scope - the system that answers these fundamental questions.

What is Scope?

In traditional IF, "scope" determines what objects are available for interaction. But modern IF needs something richer. We're building a two-part system:

  1. Physical Scope: Can the actor physically perceive or interact with something?
  2. Knowledge System: What has the actor discovered and what do they remember?

This separation is crucial. Just because a coin is physically present doesn't mean the player knows about it. And just because they saw it yesterday doesn't mean it's still there.

The Physics of Perception

Our scope system models multiple senses:

  enum ScopeLevel {
    CARRIED,     // In inventory
    REACHABLE,   // Can touch
    VISIBLE,     // Can see
    AUDIBLE,     // Can hear
    DETECTABLE,  // Can smell/sense
    OUT_OF_SCOPE // Cannot perceive
  }

Each sense follows different rules, sketched in code after the list:

  • Sight needs light and unblocked line of sight
  • Hearing travels through walls but diminishes with distance
  • Smell requires air paths
  • Touch requires physical proximity
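
To make those rules concrete, here is a minimal sketch of how per-sense checks might be expressed as simple predicates. The SensoryContext fields and the senseChecks table are illustrative assumptions, not the actual stdlib types:

  // Hypothetical context computed by the world model for an actor/target pair.
  interface SensoryContext {
    hasLight: boolean;         // is the target's location lit?
    lineOfSight: boolean;      // unobstructed sight line between actor and target
    roomsAway: number;         // coarse distance measure in rooms
    airPath: boolean;          // open air route for scent to travel
    withinReach: boolean;      // close enough to touch
  }

  // Each sense reduces to a small predicate over that context.
  const senseChecks = {
    sight:   (c: SensoryContext) => c.hasLight && c.lineOfSight,
    hearing: (c: SensoryContext) => c.roomsAway <= 2,   // passes walls, fades with distance
    smell:   (c: SensoryContext) => c.airPath && c.roomsAway <= 1,
    touch:   (c: SensoryContext) => c.withinReach,
  };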

Discovery and Memory

The witnessing system tracks what each actor knows (one possible record shape is sketched after the list):

  • Has this entity been discovered?
  • Where was it last seen?
  • Who moved it and when?
  • Did the actor witness that movement?
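
One way to represent that memory is a small per-actor record. The KnowledgeEntry shape below is a hypothetical sketch, not the real WitnessSystem data structure:

  // Hypothetical per-entity knowledge record, keyed by entity id for each actor.
  interface KnowledgeEntry {
    entityId: string;
    discovered: boolean;         // has the actor ever perceived this entity?
    lastKnownLocation?: string;  // where the actor last saw it
    lastSeenTurn?: number;       // turn counter at the time of that observation
    movedBy?: string;            // who moved it, if the actor witnessed the move
  }

  // What a single actor currently believes about the world.
  type ActorKnowledge = Map<string, KnowledgeEntry>;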

This enables rich gameplay scenarios. If an NPC moves the coin while you're gone, you'll look for it where you last saw it. If you hear something drop in the next room, you know something is there without knowing exactly what.

Real-World Examples

Consider these commands and how scope affects them; the sketch after the examples shows how the outcomes follow from one check:

"TAKE KEY" when the key is under a rug:

  • Physical scope: OUT_OF_SCOPE (hidden)
  • Knowledge: Unknown (never discovered)
  • Result: "You don't see any key here."

"TAKE VASE" when it's on a high shelf:

  • Physical scope: VISIBLE (can see)
  • Knowledge: Known (discovered)
  • Result: "The vase is out of reach."

"TAKE COIN" when an NPC moved it:

  • Physical scope: OUT_OF_SCOPE (not there)
  • Knowledge: Known (but outdated)
  • Result: "The coin isn't where you left it."

Implementation Architecture

We're implementing this in the stdlib package as two cooperating systems:

  1. ScopeResolver: Handles physics - visibility, reachability, barriers
  2. WitnessSystem: Tracks knowledge - discovery, memory, observations

Actions use both systems during command resolution, as sketched after these steps:

  1. Filter entities to those the actor knows about
  2. Further filter to those in physical scope
  3. Execute action or provide appropriate error
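
Here is a rough sketch of that pipeline. The interfaces below are simplified stand-ins with invented method names (knows, levelFor), not the real ScopeResolver and WitnessSystem surfaces:

  // Minimal stand-ins for the two systems, with assumed method names.
  interface WitnessSystemLike {
    knows(actorId: string, entityId: string): boolean;
  }
  interface ScopeResolverLike {
    levelFor(actorId: string, entityId: string): ScopeLevel;
  }

  function resolveTargets(
    actorId: string,
    candidates: string[],
    witness: WitnessSystemLike,
    scope: ScopeResolverLike
  ): { target?: string; error?: string } {
    // 1. Keep only entities the actor has discovered.
    const known = candidates.filter(id => witness.knows(actorId, id));
    if (known.length === 0) return { error: "You don't see any such thing." };

    // 2. Keep only entities the actor can physically act on right now.
    const reachable = known.filter(id => {
      const level = scope.levelFor(actorId, id);
      return level === ScopeLevel.CARRIED || level === ScopeLevel.REACHABLE;
    });
    if (reachable.length === 0) return { error: "You can't reach that from here." };

    // 3. Hand the surviving entity to the action (disambiguation omitted).
    return { target: reachable[0] };
  }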

Why This Matters

Good scope implementation is invisible when it works and frustrating when it doesn't. Players expect the game to understand context - that they can't take things they can't see, can't see things in darkness, and can't interact with things they've never discovered.

By separating physical possibility from knowledge state, we create opportunities for mystery and deduction. Players must explore to discover, pay attention to remember, and think about what they've observed.

Next Steps

With scope design complete, we're ready to implement:

  • Phase 1: Core visibility and reachability rules
  • Phase 2: Discovery and witnessing system
  • Phase 3: Multi-sensory perception
  • Phase 4: Integration with all actions

This foundation will make every interaction in the game feel more natural and responsive. When players type commands, the game will understand not just what they want to do, but whether they can actually do it.
