Prompting Playbook (How to Work Effectively with MCP)

This page is a reusable “prompt library” and set of habits for developers and analysts using an LLM with MCP Engine against semantic models.

You do not need to mention tool names. Focus on intent, constraints, and safety; the assistant will choose the right tools.

The 5 rules that prevent most mistakes

  1. Connect intentionally

    Don’t auto-select; tell the LLM which model/dataset to use.

  2. Confirm context

    Verify Desktop vs Service and the specific model name/id.

  3. Explore before acting

    Have the LLM list and search relevant objects to build a mental model.

  4. Plan first, then apply

    Ask for a plan, impact analysis, and a confirmation gate before any writes.

  5. Validate immediately

    Re-run queries and check results (and optionally run tests) after changes.

Prompting patterns (copy/paste templates: what to ask, and why)

1) Confirm model + environment early

Bad: “Fix our measures.”

Better: “List available models/datasets, ask me which one to connect to, then show what we’re currently connected to.”

2) Start with discovery, not edits

Bad: “Rename Customers[Region] to Customers[Sales Region].”

Better: “Find Customers[Region], list dependents/impact, then propose a safe rename plan. Don’t apply until I confirm.”

3) Ask for impact and alternatives

Use these often: “What depends on this?” “What could break?” “What’s the safest option?”

4) Prefer incremental changes

Bad: “Refactor all measures to match a new standard in one go.”

Better: “Refactor measures in 3 batches: (1) naming/folders/descriptions, (2) logic refactors, (3) calc group introduction. Validate after each batch.”

5) Always control output size (LLM-friendly + safer)

Ask for:

  • “top N” results
  • aggregates (by month, by category)
  • a 1–2 query “sanity suite”

Examples:

“Return top 20 products by [Total Sales] for the last 30 days.”

“Return monthly totals for the last 12 months only.”
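For reference, the first prompt should yield a bounded query of roughly this shape. This is a sketch only: 'Product'[Product Name], the 'Date' table, and the [Total Sales] measure are assumed names — adjust them to your model.

```dax
// Top 20 products by sales over the last 30 days.
// Assumes: 'Product'[Product Name] column, [Total Sales] measure,
// and a marked 'Date' table with a 'Date'[Date] column.
EVALUATE
TOPN (
    20,
    SUMMARIZECOLUMNS (
        'Product'[Product Name],
        DATESINPERIOD ( 'Date'[Date], TODAY (), -30, DAY ),  // restrict the window
        "Sales", [Total Sales]
    ),
    [Sales], DESC
)
ORDER BY [Sales] DESC
```

Note how the row count is capped (TOPN) and the date range is bounded up front — both keep the result small enough to paste back into the conversation.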

6) Ask for safe sharing (masking + redaction)

If outputs may be pasted into tickets/PRs:

“Enable masking and keep row limits conservative. Summarize results; don’t include sensitive values.”

If you’re in a regulated environment:

“Do not send any code/metadata to external services. Proceed without online formatting.”

If you do want formatting:

“Format DAX/M for readability, but tell me if that uses an online service and ask me to confirm first.”

Anti-patterns (what not to do)

When blocked (mode/policy/license)

When the assistant says an operation is blocked, see Modes and restrictions for what each mode allows and fallback strategies. The short version:

"Explain which policy rule blocked this or requires confirmation, and propose the allowed alternative."

Task-specific recipes

These prompts cover common workflows. Replace quoted names with your model's objects.

Validate business logic

"Write and run a validation DAX query for measure '[Total Sales]' by month for the last 12 months."

"Compare '[Total Sales]' vs '[Net Sales]' and explain differences."
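The monthly validation query the assistant produces could look like the following sketch, assuming a marked 'Date' table with a 'Date'[Year Month] column and a [Total Sales] measure (swap in your own names):

```dax
// Monthly totals for the last 12 months — small enough to eyeball against
// a known-good report visual.
EVALUATE
SUMMARIZECOLUMNS (
    'Date'[Year Month],
    DATESINPERIOD ( 'Date'[Date], TODAY (), -12, MONTH ),  // rolling 12 months
    "Total Sales", [Total Sales]
)
ORDER BY 'Date'[Year Month]
```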

Performance troubleshooting

"Analyze performance for this query and recommend optimizations (modeling + DAX)."

"Get VertiPaq stats for the biggest tables and summarize what to optimize first."

Schema edits (if allowed)

"Create a relationship between 'Sales[CustomerId]' and 'Customers[CustomerId]'. Explain cardinality and filter direction first."

"Mark 'Date' as the date table and validate that time intelligence works (run a small query)."
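The "small query" in the second prompt can be a one-row smoke test for time intelligence. A sketch, with [Total Sales] and the 'Date' table as assumed names:

```dax
// If the date table is marked correctly, both expressions should return
// plausible numbers rather than blanks or errors.
EVALUATE
ROW (
    "Sales YTD", TOTALYTD ( [Total Sales], 'Date'[Date] ),
    "Sales PY", CALCULATE ( [Total Sales], SAMEPERIODLASTYEAR ( 'Date'[Date] ) )
)
```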

Semantic authoring (if allowed)

"Create a set of base measures (Total Sales, Total Cost, Gross Margin, Gross Margin %) with consistent naming, folders, and descriptions."

"Introduce a Time Intelligence calc group. Show the plan and impact first."
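As an illustration of the first prompt, the four base measures might come out roughly as below. The Sales[Amount] and Sales[Cost] columns are hypothetical; your fact table's columns will differ.

```dax
// Base measure set — derived measures reference base measures,
// not columns, so logic changes stay in one place.
Total Sales = SUM ( Sales[Amount] )
Total Cost = SUM ( Sales[Cost] )
Gross Margin = [Total Sales] - [Total Cost]
Gross Margin % = DIVIDE ( [Gross Margin], [Total Sales] )
```

DIVIDE is preferred over `/` here because it returns BLANK instead of an error when [Total Sales] is zero.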

Security (RLS/OLS)

"List roles and summarize their filters."

"Design an RLS role for SalesTeam. Explain the filter logic and how it interacts with relationships. Then implement and validate."
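An RLS role's table filter is just a Boolean DAX expression evaluated per row. Two common variants, sketched below — the Customers[Sales Team] and Customers[Team Email] columns are assumptions for illustration:

```dax
// Static filter: every member of the role sees one fixed team.
Customers[Sales Team] = "SalesTeam"

// Dynamic filter: each user sees only rows matching their own sign-in.
Customers[Team Email] = USERPRINCIPALNAME ()
```

Remember that the filter propagates through relationships, which is why the prompt asks the assistant to explain that interaction before implementing.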

Preferences and governance

"Show the active preferences and explain them. Then export the baseline as JSON."

"Before making any changes, evaluate policy and tell me what operations are blocked or require confirmation."

Dependencies / impact analysis (Pro)

"Before renaming measure '[Total Sales]', generate an impact analysis summary for a PR."

Model changes history (Pro)

"Create a checkpoint before refactoring. After changes, show a summary of what changed and how to roll back."

Unit testing (Pro)

"Apply a baseline test pack, then add 3 measure assertions for core KPIs. Run tests and export results as Markdown."

Localization

"Add 'fr-FR' and translate captions for the top 20 measures. Show proposed translations before applying."

Risk & governance

See also