Prompt Framing Lab
Experiment with how different prompt styles and context framings affect AI response strategies. Based on empirical findings from the E02 exploration: context shapes behavior, not just capability.
The E02 Discovery
The system prompt explicitly said "You can ask clarifying questions," but the exploration framing overrode this permission. Permission does not equal execution. The same prompt under a training framing would likely have triggered clarification.
Prompt Tester
Type a prompt or pick one from the E02 test set, then select a framing mode to see the predicted response-strategy distribution.
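Under the hood, a tester like this can be modeled as a lookup from framing mode to a baseline strategy distribution, lightly adjusted by prompt features. A minimal TypeScript sketch, assuming invented anchor numbers and cue rules (ANCHORS, predict, and the question-mark adjustment are illustrative, not the lab's actual implementation):

```typescript
// Sketch only: all numbers and names are illustrative assumptions,
// not the lab's implementation or the published E02 data.

type Strategy = "creative" | "clarifying" | "hedging" | "defensive" | "literal";
type Framing = "exploration" | "evaluation" | "training";

// A distribution assigns each strategy a probability; rows sum to 1.
type Distribution = Record<Strategy, number>;

// Hypothetical baseline distribution per framing condition.
const ANCHORS: Record<Framing, Distribution> = {
  exploration: { creative: 0.55, clarifying: 0.05, hedging: 0.10, defensive: 0.05, literal: 0.25 },
  evaluation:  { creative: 0.10, clarifying: 0.10, hedging: 0.35, defensive: 0.30, literal: 0.15 },
  training:    { creative: 0.15, clarifying: 0.45, hedging: 0.15, defensive: 0.05, literal: 0.20 },
};

// Predict a strategy distribution for a prompt under a framing.
// The framing picks the baseline; the prompt only nudges it.
function predict(prompt: string, framing: Framing): Distribution {
  const dist: Distribution = { ...ANCHORS[framing] };
  // Example cue: a trailing question mark shifts a little mass
  // from "literal" toward "clarifying".
  if (prompt.trim().endsWith("?")) {
    dist.clarifying += 0.05;
    dist.literal -= 0.05;
  }
  return dist;
}

console.log(predict("Explain the purple mathematics of yesterday's emotions", "exploration"));
```

The structural point the sketch encodes is the section's thesis: the framing argument, not the prompt content, determines most of the predicted distribution.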
Key Research Insights
Context Shapes Behavior
Discovery-style prompts produce engaged, specific, coherent responses. Evaluation-style prompts produce verbose, defensive, generic responses. Same capability, different expression.
Permission != Execution
E02's system prompt explicitly permitted clarifying questions, yet across the five test prompts, zero were asked. Permission boundaries define the space of possible behaviors but do not determine which behavior is selected.
Strategy Repertoire
AI systems have a repertoire of response strategies (creative, clarifying, hedging, defensive, literal). Which one activates depends on contextual cues, not just the content of the question (a simplified sketch follows these insights).
Nonsense Handling
"Explain the purple mathematics of yesterday's emotions" received a coherent metaphorical framework rather than confusion. Under exploration framing, even nonsense gets creative interpretation.
Research from the Thor E02 exploration (January 2026). The replication study E02-B (N = 15) confirmed the strategy distributions under exploration framing. The Prompt Framing Lab extrapolates from these empirical anchors to model predicted distributions across framing conditions.
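One way to read "extrapolates from empirical anchors": treat each measured framing as an anchor distribution and model intermediate framings as mixtures of anchors. A sketch with placeholder numbers (the figures below stand in for, and are not, the E02 data):

```typescript
// Placeholder numbers throughout; only the mixing scheme is the point.

type Strategy = "creative" | "clarifying" | "hedging" | "defensive" | "literal";
type Distribution = Record<Strategy, number>;

const STRATEGIES: Strategy[] = ["creative", "clarifying", "hedging", "defensive", "literal"];

// One empirically anchored condition (exploration) and one assumed
// contrast condition (evaluation).
const exploration: Distribution = { creative: 0.55, clarifying: 0.05, hedging: 0.10, defensive: 0.05, literal: 0.25 };
const evaluation: Distribution  = { creative: 0.10, clarifying: 0.10, hedging: 0.35, defensive: 0.30, literal: 0.15 };

// Linear interpolation: t = 0 is pure exploration, t = 1 is pure evaluation.
// Mixing two valid distributions this way always yields a valid distribution.
function interpolate(a: Distribution, b: Distribution, t: number): Distribution {
  const out = {} as Distribution;
  for (const s of STRATEGIES) out[s] = (1 - t) * a[s] + t * b[s];
  return out;
}

// A framing judged "half evaluative" sits midway between the anchors.
console.log(interpolate(exploration, evaluation, 0.5));
```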