Create reusable, parameterized prompt templates for MCP clients.
Prompts are reusable message templates that help LLMs generate structured, purposeful responses. FastMCP simplifies defining these templates, primarily using the @mcp.prompt decorator.
The most common way to define a prompt is by decorating a Python function. The decorator uses the function name as the prompt’s identifier.
```python
from fastmcp import FastMCP
from fastmcp.prompts.prompt import Message, PromptMessage, TextContent

mcp = FastMCP(name="PromptServer")

# Basic prompt returning a string (converted to user message automatically)
@mcp.prompt
def ask_about_topic(topic: str) -> str:
    """Generates a user message asking for an explanation of a topic."""
    return f"Can you please explain the concept of '{topic}'?"

# Prompt returning a specific message type
@mcp.prompt
def generate_code_request(language: str, task_description: str) -> PromptMessage:
    """Generates a user message requesting code generation."""
    content = f"Write a {language} function that performs the following task: {task_description}"
    return PromptMessage(role="user", content=TextContent(type="text", text=content))
```
Key Concepts:
Name: By default, the prompt name is taken from the function name.
Parameters: The function parameters define the inputs needed to generate the prompt.
Inferred Metadata: By default:
Prompt Name: Taken from the function name (ask_about_topic).
Prompt Description: Taken from the function’s docstring.
Functions with *args or **kwargs are not supported as prompts. This restriction exists because FastMCP needs to generate a complete parameter schema for the MCP protocol, which isn’t possible with variable argument lists.
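To illustrate the distinction, here is a minimal sketch (the function names are hypothetical, and the exact error raised on registration may differ):

```python
# Not supported: FastMCP cannot build a complete argument schema for **kwargs,
# so a variadic function like this cannot be registered as a prompt.
# @mcp.prompt
# def flexible_prompt(**kwargs) -> str:
#     return f"Context: {kwargs}"

# Supported: declare each input explicitly so a schema can be generated.
@mcp.prompt
def explicit_prompt(topic: str, audience: str = "general") -> str:
    """Hypothetical prompt with an explicit, schema-friendly signature."""
    return f"Explain {topic} for a {audience} audience."
```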
While FastMCP infers the name and description from your function, you can override these and add additional metadata using arguments to the @mcp.prompt decorator:
```python
from pydantic import Field

@mcp.prompt(
    name="analyze_data_request",  # Custom prompt name
    description="Creates a request to analyze data with specific parameters",  # Custom description
    tags={"analysis", "data"},  # Optional categorization tags
    meta={"version": "1.1", "author": "data-team"}  # Custom metadata
)
def data_analysis_prompt(
    data_uri: str = Field(description="The URI of the resource containing the data."),
    analysis_type: str = Field(default="summary", description="Type of analysis.")
) -> str:
    """This docstring is ignored when description is provided."""
    return f"Please perform a '{analysis_type}' analysis on the data found at {data_uri}."
```
New in version 2.11.0: The meta argument provides optional meta information about the prompt. This data is passed through to the MCP client as the _meta field of the client-side prompt object and can be used for custom metadata, versioning, or other application-specific purposes.
New in version 2.9.0: The MCP specification requires that all prompt arguments be passed as strings, but FastMCP allows you to use typed annotations for a better developer experience. When you use complex types like list[int] or dict[str, str], FastMCP:
Automatically converts string arguments from MCP clients to the expected types
Generates helpful descriptions showing the exact JSON string format needed
Preserves direct usage - you can still call prompts with properly typed arguments
Since the MCP specification only allows string arguments, clients need to know what string format to use for complex types. FastMCP solves this by automatically enhancing the argument descriptions with JSON schema information, making it clear to both humans and LLMs how to format their arguments.
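For example, a prompt with complex typed parameters might look like the following sketch (the analyze_numbers name and body are illustrative, not taken from the FastMCP source; only the typed signature matters). An MCP client would supply the list and dict arguments as JSON strings, such as "[1, 2, 3, 4, 5]" for numbers, and FastMCP converts them to the annotated types before your function runs.

```python
@mcp.prompt
def analyze_numbers(
    numbers: list[int],
    metadata: dict[str, str],
    threshold: float = 0.0,
) -> str:
    """Hypothetical prompt that accepts complex typed arguments."""
    average = sum(numbers) / len(numbers) if numbers else 0
    return (
        f"Analyze these numbers: {numbers} (average: {average}). "
        f"Metadata: {metadata}. Flag anything above {threshold}."
    )

# Assuming the decorator returns the prompt object (as with enable/disable below),
# this is the object referred to as `prompt` in the render call that follows.
prompt = analyze_numbers
```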
But you can still call it directly with proper types:
```python
# This also works for direct calls
result = await prompt.render({
    "numbers": [1, 2, 3, 4, 5],
    "metadata": {"source": "api", "version": "1.0"},
    "threshold": 2.5
})
```
Keep your type annotations simple when using this feature. Complex nested types or custom classes may not convert reliably from JSON strings. The automatically generated schema descriptions are the only guidance users receive about the expected format.
Good choices: list[int], dict[str, str], float, bool
Avoid: Complex Pydantic models, deeply nested structures, custom classes
FastMCP intelligently handles different return types from your prompt function:
str: Automatically converted to a single PromptMessage.
PromptMessage: Used directly as provided. (Note: a more user-friendly Message constructor is also available that accepts raw strings instead of TextContent objects.)
list[PromptMessage | str]: Used as a sequence of messages (a conversation).
Any: If the return value is not one of the above, FastMCP attempts to convert it to a string and use it as a PromptMessage.
```python
from fastmcp.prompts.prompt import Message, PromptResult

@mcp.prompt
def roleplay_scenario(character: str, situation: str) -> PromptResult:
    """Sets up a roleplaying scenario with initial messages."""
    return [
        Message(f"Let's roleplay. You are {character}. The situation is: {situation}"),
        Message("Okay, I understand. I am ready. What happens next?", role="assistant")
    ]
```
Parameters in your function signature are considered required unless they have a default value.
```python
@mcp.prompt
def data_analysis_prompt(
    data_uri: str,                    # Required - no default value
    analysis_type: str = "summary",   # Optional - has default value
    include_charts: bool = False      # Optional - has default value
) -> str:
    """Creates a request to analyze data with specific parameters."""
    prompt = f"Please perform a '{analysis_type}' analysis on the data found at {data_uri}."
    if include_charts:
        prompt += " Include relevant charts and visualizations."
    return prompt
```
In this example, the client must provide data_uri. If analysis_type or include_charts are omitted, their default values will be used.
New in version 2.8.0: You can control the visibility and availability of prompts by enabling or disabling them. Disabled prompts will not appear in the list of available prompts, and attempting to call a disabled prompt will result in an "Unknown prompt" error.
By default, all prompts are enabled. You can disable a prompt upon creation using the enabled parameter in the decorator:
```python
@mcp.prompt(enabled=False)
def experimental_prompt():
    """This prompt is not ready for use."""
    return "This is an experimental prompt."
```
You can also toggle a prompt’s state programmatically after it has been created:
```python
@mcp.prompt
def seasonal_prompt():
    return "Happy Holidays!"

# Disable and re-enable the prompt
seasonal_prompt.disable()
seasonal_prompt.enable()
```
FastMCP seamlessly supports both standard (def) and asynchronous (async def) functions as prompts.
```python
import aiohttp

# Synchronous prompt
@mcp.prompt
def simple_question(question: str) -> str:
    """Generates a simple question to ask the LLM."""
    return f"Question: {question}"

# Asynchronous prompt
@mcp.prompt
async def data_based_prompt(data_id: str) -> str:
    """Generates a prompt based on data that needs to be fetched."""
    # In a real scenario, you might fetch data from a database or API
    async with aiohttp.ClientSession() as session:
        async with session.get(f"https://api.example.com/data/{data_id}") as response:
            data = await response.json()
    return f"Analyze this data: {data['content']}"
```
Use async def when your prompt function performs I/O operations like network requests, database queries, file I/O, or external service calls.
New in version 2.2.5: Prompts can access additional MCP information and features through the Context object. To access it, add a parameter to your prompt function with a type annotation of Context:
```python
from fastmcp import FastMCP, Context

mcp = FastMCP(name="PromptServer")

@mcp.prompt
async def generate_report_request(report_type: str, ctx: Context) -> str:
    """Generates a request for a report."""
    return f"Please create a {report_type} report. Request ID: {ctx.request_id}"
```
For full documentation on the Context object and all its capabilities, see the Context documentation.
New in version 2.9.1: FastMCP automatically sends notifications/prompts/list_changed notifications to connected clients when prompts are added, enabled, or disabled. This allows clients to stay up-to-date with the current prompt set without manually polling for changes.
Notifications are only sent when these operations occur within an active MCP request context (e.g., when called from within a tool or other MCP operation). Operations performed during server initialization do not trigger notifications.
Clients can handle these notifications using a message handler to automatically refresh their prompt lists or update their interfaces.
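As a rough client-side sketch (assuming the FastMCP Client accepts a message_handler callback; check the client documentation for the exact signature and message shape), a handler might watch for the list-changed notification and refresh its cached prompts:

```python
from fastmcp import Client

async def handle_messages(message):
    # Hypothetical handler: react to prompt list-changed notifications.
    # The exact structure of `message` depends on the client API version,
    # so treat this as a sketch rather than the canonical pattern.
    method = getattr(getattr(message, "root", message), "method", None)
    if method == "notifications/prompts/list_changed":
        print("Prompt list changed -- refresh the cached prompt list here.")

client = Client("prompt_server.py", message_handler=handle_messages)
```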
New in version 2.1.0: You can configure how the FastMCP server handles attempts to register multiple prompts with the same name. Use the on_duplicate_prompts setting during FastMCP initialization.
```python
from fastmcp import FastMCP

mcp = FastMCP(
    name="PromptServer",
    on_duplicate_prompts="error"  # Raise an error if a prompt name is duplicated
)

@mcp.prompt
def greeting():
    return "Hello, how can I help you today?"

# This registration attempt will raise a ValueError because
# "greeting" is already registered and the behavior is "error".
# @mcp.prompt
# def greeting():
#     return "Hi there! What can I do for you?"
```
The duplicate behavior options are:
"warn" (default): Logs a warning, and the new prompt replaces the old one.
"error": Raises a ValueError, preventing the duplicate registration.
"replace": Silently replaces the existing prompt with the new one.
"ignore": Keeps the original prompt and ignores the new registration attempt.