Preferences (Memory)
manage_memory lets you give the assistant persistent “working agreements” for how it should behave when helping you with semantic models: naming conventions, guardrails, defaults, and safety settings. Think of it as configurable memory for model work.
Preferences don’t modify your semantic model by themselves; they change how the MCP server behaves and how the assistant plans/executes work.
You should not need to know the tool’s parameters; just tell the LLM what you want, and it will do the right thing (or tell you what’s blocked by policy/mode).
What you can do with Preferences (practical examples)
Without preferences, you would repeat the same instructions every session: "use PascalCase for measures", "always add descriptions", "limit query results to 200 rows". Preferences save these once so the assistant follows them automatically.
Here is what you can do in practice:
- Enforce naming conventions - tell the assistant that measures must be PascalCase with business terms (e.g., TotalSales, GrossMarginPct). It will follow this in every session without being reminded.
- Require metadata on new objects - set a guardrail that measures must have a description and display folder. The assistant will always include them when creating measures.
- Keep outputs safe to share - cap query results to 200 rows and enable masking so you can paste outputs into tickets or team chats without leaking sensitive data.
- Save business glossary - store that "GM" means "Gross Margin" and "COGS" means "Cost of Goods Sold" so the assistant interprets your shorthand correctly across sessions.
- Share conventions across the team - export preferences as JSON, commit them to a repo, and import them on a colleague's machine for consistent behavior.
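To make the export/import workflow concrete, here is a rough sketch of what an exported preferences file could contain. The exact schema is defined by the MCP server; every key name below is illustrative, not the server's actual format:

```json
{
  "scope": "workspace",
  "namingRules": {
    "measures": { "case": "PascalCase", "style": "business-terms" }
  },
  "guardrails": {
    "requireMeasureDescription": true,
    "requireDisplayFolder": true
  },
  "runtime": {
    "maxQueryRows": 200,
    "maxPreviewRows": 50,
    "masking": true
  },
  "aliases": { "GM": "Gross Margin", "COGS": "Cost of Goods Sold" }
}
```

A file like this is what you would commit to a repo and import on a colleague's machine.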
Where preferences live (and what not to store)
Preferences are stored by the MCP server (on the machine where the server runs). Treat them like configuration:
- Don’t store secrets (tokens, passwords, connection strings).
- Don’t store row-level data extracts.
- Assume exported preferences JSON can be shared within your team/org, so keep it clean of sensitive content.
What to ask the LLM (quick prompts)
Start with these:
“Show me the active preferences for this workspace/model and explain what they do.”
“Set a rule: measures must be PascalCase and use business terms (e.g., TotalSales, CustomerCount).”
“Add a guardrail: don’t create measures without a description and display folder.”
“Cap query outputs to 200 rows and previews to 50 rows so responses stay small.”
“Enable masking so row-level outputs are safe to share (and tell me what won’t be masked).”
“Export the current preferences as JSON so I can share them with my team.”
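The row-cap and masking prompts above can be pictured as a post-processing step applied to query results before they reach the chat. A minimal sketch in Python, assuming a hypothetical 200-row cap and a preference that flags certain columns as sensitive (the column names, the `***` masking style, and the function itself are all illustrative, not the server's actual implementation):

```python
# Illustrative only: how a row cap and column masking could be applied
# to tabular query output before it is shown or pasted into a ticket.
MAX_ROWS = 200
SENSITIVE_COLUMNS = {"CustomerEmail", "TaxId"}  # assumed to come from a preference

def apply_output_safety(rows, columns, max_rows=MAX_ROWS):
    """Truncate to max_rows and mask values in sensitive columns."""
    masked = []
    for row in rows[:max_rows]:
        masked.append([
            "***" if col in SENSITIVE_COLUMNS else value
            for col, value in zip(columns, row)
        ])
    truncated = len(rows) > max_rows  # tell the caller output was capped
    return masked, truncated

columns = ["CustomerName", "CustomerEmail", "TotalSales"]
rows = [["Contoso", "buyer@contoso.com", 1250.0]] * 300

safe_rows, truncated = apply_output_safety(rows, columns)
print(len(safe_rows), truncated, safe_rows[0])
```

The point of the design: capping and masking happen centrally, so every response the assistant produces is already safe to share regardless of how the query was phrased.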
What changes depending on your org’s deployment mode
Your organization can lock down the MCP server. From your perspective:
- Full mode: you can view and change preferences.
- Read-only mode: you can view preferences (and usually export them), but you can’t change them.
- Browse-only mode: preferences are unavailable (the server is locked down to a minimal browsing/discovery toolset).
| Mode | Availability |
|---|---|
| Full mode | Available |
| Read-only mode | Limited |
| Browse-only mode | Not available |
Notes: in read-only mode, preferences are view/export only; Pro enables workspace/model scopes plus masking settings.
If a change is blocked, ask:
“I got a permission/mode error. Explain what’s blocked and what I can still do in this environment.”
Next steps (learn the concepts)
- Scopes and rules (global vs workspace vs model, naming rules, guardrails, aliases)
- Runtime settings (row limits, formatting, masking)
- Workflows and baselines (onboarding, export/import, troubleshooting)