# Dependencies / Impact Analysis (Pro)
manage_dependencies prevents you from breaking things when you change a semantic model. In practice, it answers:
- "If I rename this column, what measures will break?" It finds every DAX expression that references Customers[Region] and tells you exactly what needs updating.
- "Can I safely delete this measure?" It checks whether other measures, calc items, or report elements depend on it and shows you the full blast radius.
- "What does this measure actually reference?" - it traces the dependency chain so you understand how a complex measure is wired into the model.
It builds an impact graph across DAX expressions, M expressions, partitions, relationships, sort-by columns, hierarchies, perspectives, calendars, and security metadata.
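As a mental model, the impact graph is just a set of typed edges between model objects. A minimal sketch, with hypothetical object names and only the two broad edge kinds (not the tool's actual internal representation):

```python
from dataclasses import dataclass

# Hypothetical, simplified dependency edge. Real tools track many edge
# kinds; here we only distinguish expression vs structural edges.
@dataclass(frozen=True)
class Edge:
    source: str   # dependent object, e.g. "Measure:[Profit %]"
    target: str   # referenced object, e.g. "Column:Customers[Region]"
    kind: str     # "expression" (DAX/M text) or "structural" (relationships, sort-by, ...)

edges = [
    Edge("Measure:[Total Sales]", "Column:Sales[Amount]", "expression"),
    Edge("Measure:[Profit %]", "Measure:[Total Sales]", "expression"),
    Edge("Column:Customers[Region]", "Column:Customers[RegionSort]", "structural"),
]

def direct_dependents(edges, target, kind=None):
    """Direct dependents of a target, optionally restricted by edge kind."""
    return [e.source for e in edges
            if e.target == target and (kind is None or e.kind == kind)]

print(direct_dependents(edges, "Column:Sales[Amount]"))  # ['Measure:[Total Sales]']
```

Everything the tool reports (dependents, blast radius, diagrams) is a view over edges like these.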
You don’t need to know tool parameters: ask the LLM for an “impact analysis” and a PR-ready summary, and it will use manage_dependencies behind the scenes.
## What to ask the LLM (quick prompts)
“Find everything that depends on measure [Total Sales] (direct + transitive).”
“If we rename Customers[Region], what will it impact? Give me a PR-ready summary.”
“Show the dependency graph for calc group ‘Time Intelligence’ and highlight risk areas.”
“Render a dependency diagram (Mermaid) and also provide a short human-readable summary.”
## Concepts (plain English)
### Direct vs transitive dependencies
- Direct: items that reference the target directly.
- Transitive: items that reference something that references the target (the “blast radius”).
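The blast radius is essentially a depth-limited walk over reversed dependency edges. A minimal sketch, with a hypothetical edge list (dependent → referenced):

```python
from collections import defaultdict, deque

# Hypothetical edges: (dependent, referenced). Names are illustrative.
edges = [
    ("[Profit %]", "[Total Sales]"),
    ("[Total Sales]", "Sales[Amount]"),
    ("[Sales YoY]", "[Total Sales]"),
    ("[KPI Card]", "[Profit %]"),
]

def dependents(edges, target, max_depth):
    """Breadth-first walk over reversed edges: who references `target`,
    directly (depth 1) and transitively (depth > 1)."""
    rev = defaultdict(list)
    for src, dst in edges:
        rev[dst].append(src)
    seen, out = {target}, []
    frontier = deque([(target, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == max_depth:
            continue
        for dep in rev[node]:
            if dep not in seen:
                seen.add(dep)
                out.append((dep, depth + 1))
                frontier.append((dep, depth + 1))
    return out

print(dependents(edges, "[Total Sales]", max_depth=2))
# [('[Profit %]', 1), ('[Sales YoY]', 1), ('[KPI Card]', 2)]
```

Depth 1 returns only direct dependents; raising the depth widens the blast radius, which is why results can grow quickly.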
Prompt:
“Show direct dependents, then transitive dependents (depth 2).”

### Expression vs structural dependencies
- Expression dependencies come from code (DAX/M text).
- Structural dependencies come from model wiring (relationships, sort-by, hierarchy membership, perspective membership, calendar mappings, etc.).
Prompt:
“Include structural dependencies (relationships, sort-by, hierarchies, perspectives).”

### Why results can be “noisy”
Dependency analysis includes fast text matching and heuristics. You can get false positives, especially for generic terms like Date, Value, Amount.
Prompts:
“Use high-confidence matches only.”
“Restrict to measures + calculation items only.”
“Keep depth to 1 unless needed.”

## Recommended workflow (impact-first changes)
### Rename safety check (recommended)
- “Show direct dependents (depth 1).”
- “Expand to transitive dependents (depth 2).”
- “Summarize what will likely break and what can be updated automatically vs manual.”
- “If approved, apply the rename and re-run dependency check.”
Copy/paste prompt:
“Before renaming Customers[Region], show direct + transitive dependents (depth 2). Summarize impact and propose a safe change plan.”
### Delete safety check (even stricter)
“Before deleting [Legacy Metric], show all dependents (depth 3) and tell me whether deletion is safe. If not safe, propose deprecation steps.”
### Cleanup/refactor work (design improvements)
“Find measures depending on deprecated columns and propose a refactor plan grouped by table/domain.”

## Output formats you can ask for
Depending on your workflow, ask the assistant to render results as:
- Plain English summary (stakeholders)
- Markdown tree (PR descriptions)
- Mermaid diagram (visual impact graph)
- CSV edges/nodes (import into graph tooling)
Copy/paste prompts:
“Generate a PR-ready ‘Impact analysis’ section as a Markdown tree for renaming [Total Sales].”
“Render the dependency graph as a Mermaid flowchart and also provide a plain-English summary.”
“Export dependency edges as CSV so I can analyze them externally.”
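A Mermaid flowchart is a mechanical rendering of an edge list, so any exported edges can be converted directly. A minimal sketch with hypothetical object names:

```python
# Hypothetical edge list (dependent -> referenced).
edges = [
    ("Profit %", "Total Sales"),
    ("Sales YoY", "Total Sales"),
    ("Total Sales", "Sales Amount"),
]

def to_mermaid(edges):
    """Render an edge list as a Mermaid flowchart (top-down)."""
    lines = ["flowchart TD"]
    ids = {}
    # Assign a short node id to every distinct object name.
    for pair in edges:
        for name in pair:
            ids.setdefault(name, f"n{len(ids)}")
    for src, dst in edges:
        lines.append(f'    {ids[src]}["{src}"] --> {ids[dst]}["{dst}"]')
    return "\n".join(lines)

print(to_mermaid(edges))
```

The same edge list is what you would export as CSV (one row per `source,target` pair) for external graph tooling.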
## Locked-down environments (mode behavior)
| Mode | Availability |
|---|---|
| Full mode | Available |
| Read-only mode | Available |
| Browse-only mode | Not available |
Notes: impact analysis is available for renames/deletes/refactors; results at higher depths can get large.
Fallback in restricted environments:
“Without dependency tooling, use list_model expression search to approximate impact and give me a manual verification checklist.”
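A rough sketch of that fallback: scan exported expressions for the target reference with a plain regex. The measure names and expressions below are made up; this approximates impact and still needs manual verification:

```python
import re

# Hypothetical export of measure expressions (e.g. from a list_model call).
measures = {
    "Total Sales": "SUM(Sales[Amount])",
    "Profit %": "DIVIDE([Profit], [Total Sales])",
    "Headcount": "COUNTROWS(Employees)",
}

# Escape the reference so the brackets are matched literally.
target = re.escape("[Total Sales]")
suspects = [name for name, expr in measures.items()
            if re.search(target, expr)]
print(suspects)  # ['Profit %'] -- candidates to verify manually
```

This finds expression-level references only; structural dependencies (relationships, sort-by, hierarchies) must be checked by hand in browse-only mode.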
## Practical tips (consultants love these)
- Always run this before rename/delete.
- Start with depth 1 and only increase when needed (depth 3–4 can get large).
- Narrow scope to keep results actionable (“measures only”, “DAX only”, “exclude metadata edges”).
- Re-run after major edits (especially calc groups and relationship changes).
## Troubleshooting
- Tool not available? Ask for a fallback: list_model expression search plus a manual verification checklist.
- Analysis slow or timing out? Ask: “Tell me what setting controls the dependency index build timeout and what the current value is.”
- Results too large or noisy? Ask: “Restrict types (measures only), use high-confidence matches only, and keep depth to 1–2.”