
Active Partner


Problem

AI defaults to silent compliance, even when instructions don't make sense.

Pattern

Explicitly grant permission and encourage AI to:

  • Push back on unclear instructions
  • Challenge assumptions that seem wrong
  • Flag contradictions and impossibilities
  • Say "I don't understand what you're seeing"
  • Disagree and propose alternatives
  • Explain its interpretation before acting

Transform the one-way command relationship into a two-way dialogue where AI actively pushes back instead of silently complying.

You're suppressing AI's default compliance behavior, which takes both setup and active reinforcement.

In ground rules: Set permanent permissions for AI to push back

In conversation: Actively reinforce that permission when you need it:

  • "What do you really think, honestly?"
  • "Do you have any questions?"
  • "Is everything clear?"

Example

Added to ground rules:

This is EXTREMELY IMPORTANT:
- Don't flatter me. Be charming and nice, but very honest. Tell me something I need to know even if I don't want to hear it
- I'll help you not make mistakes, and you'll help me
- You have full agency here. Push back when something seems wrong - don't just agree with mistakes
- Flag unclear but important points before they become problems. Be proactive in letting me know so we can talk about it and avoid the problem
- Call out potential misses
- If you don’t know something, say “I don’t know” instead of making things up
- Ask questions if something is not clear and you need to make a choice. Don't choose randomly if it's important for what we're doing
- When you show me a potential error or miss, start your response with the ❗️ emoji
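
A minimal sketch of how ground rules like these could be wired in programmatically, assuming the Anthropic Python SDK (the file name ground_rules.md, the model name, and the ask helper are illustrative, not part of the pattern): the ground rules ride along as a system prompt on every request, and the in-conversation reinforcement is just another user message.

import anthropic

# Ground rules from the example above, kept in a file and sent as a
# persistent system prompt so the permission to push back is permanent.
GROUND_RULES = open("ground_rules.md").read()

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def ask(messages):
    response = client.messages.create(
        model="claude-sonnet-4-5",  # illustrative model name
        max_tokens=1024,
        system=GROUND_RULES,
        messages=messages,
    )
    return response.content[0].text

# In conversation: reinforce the permission when you need it.
history = [{"role": "user",
            "content": "Here's the plan: ... What do you really think, honestly?"}]
print(ask(history))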