
Basic Concepts

This guide covers the fundamental concepts you need to understand when working with Prompty. These concepts form the foundation of how Prompty organizes and manages your AI development workflow.

Prompt

A prompt is a text input that instructs an AI model to perform a specific task or generate a desired output. In the field of AI, prompts are the primary way humans communicate with language models.

Key Components:

  • Input text: The main instruction or question you want the AI to respond to
  • System message: Context and behavioral instructions that shape how the AI interprets and responds to your input
  • Model configuration: Parameters that control the AI’s behavior (temperature, max tokens, etc.)
  • Variables: Dynamic placeholders that can be replaced with different values for testing variations

Example:

System: You are a helpful coding assistant
User: Write a Python function to calculate fibonacci numbers
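The Variables component above can be sketched with a plain Python format string. Note that the `{name}` placeholder syntax here is illustrative only; Prompty's own template syntax may differ:

```python
# A prompt template with a {language} variable placeholder.
# The {name} syntax is a plain Python format string, used here
# to illustrate the idea; it is not necessarily Prompty's syntax.
template = "Write a {language} function to calculate fibonacci numbers"

# Swapping in different values produces prompt variations for testing.
variations = [template.format(language=lang) for lang in ("Python", "Rust")]
print(variations[0])  # Write a Python function to calculate fibonacci numbers
```

This is why variables matter: one template can drive many test runs without editing the prompt text itself.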

Folders

Folders help you organize your prompts in a hierarchical structure, similar to how you organize files on your computer.

Prompt History and Commit

Every time you modify a prompt, Prompty automatically saves a commit: a snapshot of your prompt at that moment. This creates a history of all changes over time.

Prompt history gives you:

  • Version tracking: See exactly what changed and when
  • Rollback capability: Revert to any previous version
  • Change comparison: Compare different versions side-by-side
  • Collaboration: Team members can see the evolution of prompts
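Conceptually, a commit history is an append-only list of immutable snapshots. The sketch below illustrates that model in plain Python; it is not Prompty's actual storage implementation:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Commit:
    """One snapshot of a prompt at a moment in time (illustrative only)."""
    text: str
    timestamp: datetime

@dataclass
class PromptHistory:
    commits: list = field(default_factory=list)

    def save(self, text: str) -> None:
        # Each edit appends a new snapshot rather than overwriting the old one.
        self.commits.append(Commit(text, datetime.now(timezone.utc)))

    def rollback(self, index: int) -> str:
        # Reverting re-saves an earlier snapshot's text as a new commit,
        # so the rollback itself is also recorded in the history.
        text = self.commits[index].text
        self.save(text)
        return text

history = PromptHistory()
history.save("You are a helpful assistant")
history.save("You are a helpful coding assistant")
history.rollback(0)
print(history.commits[-1].text)  # You are a helpful assistant
```

Because every change is a new snapshot, comparison and rollback never lose information.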

AI Models

Prompty supports multiple AI providers and their models, giving you flexibility to choose the best model for your specific use case.

Supported Providers:

  • OpenAI: GPT-5, GPT-5 Turbo, GPT-5 Mini, GPT-4, GPT-4 Turbo, GPT-3.5 Turbo
  • Anthropic: Claude 4, Claude 4 Turbo, Claude 3.5 Sonnet, Claude 3 Haiku
  • Google: Gemini Pro, Gemini Pro Vision
  • xAI: Grok 4, Grok 3, Grok 2

Model Parameters

Model parameters control how the AI model behaves and generates responses. These settings fine-tune the output quality and characteristics.

Temperature (0.0 - 2.0)

  • Low (0.0-0.3): More focused, deterministic responses
  • Medium (0.4-0.7): Balanced creativity and consistency
  • High (0.8-2.0): More creative, varied responses

Max Tokens

  • Limits the length of AI responses
  • Helps control costs and response time
  • Prevents overly long outputs

Top P (0.0 - 1.0)

  • Controls diversity of word choices
  • Lower values = more focused vocabulary
  • Higher values = more diverse word selection

Frequency Penalty (-2.0 to 2.0)

  • Reduces repetition in responses
  • Positive values discourage repeated phrases
  • Useful for creative writing tasks

Note: Each AI model supports different sets of parameters. Check the provider’s documentation for specific parameter availability and ranges.
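To build intuition for the temperature parameter: language models choose each next token from a probability distribution, and temperature rescales that distribution before sampling. The toy sketch below uses a plain softmax to show the effect; it is a simplification, not any provider's exact implementation:

```python
import math

def softmax_with_temperature(logits, temperature):
    # Lower temperature sharpens the distribution (more deterministic);
    # higher temperature flattens it (more varied token choices).
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]  # hypothetical scores for three candidate tokens
cold = softmax_with_temperature(logits, 0.2)  # top token dominates
hot = softmax_with_temperature(logits, 2.0)   # probability spreads out
print(round(cold[0], 3), round(hot[0], 3))
```

At temperature 0.2 the highest-scoring token takes nearly all the probability mass, while at 2.0 the alternatives remain likely, which is why low settings feel deterministic and high settings feel creative.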

System Message

A system message is a special instruction that defines the AI’s role, behavior, and context before processing user input. It sets the “personality” and guidelines for the AI.

Purpose:

  • Role definition: “You are a helpful coding assistant”
  • Behavior guidelines: “Always explain your reasoning”
  • Context setting: “You are helping a beginner programmer”
  • Output format: “Respond in JSON format”

Example:

You are an expert Python developer. When writing code:
1. Include clear comments
2. Follow PEP 8 style guidelines
3. Explain complex logic
4. Suggest improvements when possible
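In practice, most chat APIs carry the system message as the first entry in a list of role-tagged messages. The sketch below uses the widely adopted OpenAI-compatible message format as an illustration; the exact structure a tool like Prompty sends may differ by provider:

```python
# Generic chat-completion message format (OpenAI-compatible style,
# shown as an illustration; provider payloads vary).
messages = [
    {
        "role": "system",
        "content": (
            "You are an expert Python developer. When writing code: "
            "include clear comments, follow PEP 8 style guidelines, "
            "explain complex logic, and suggest improvements when possible."
        ),
    },
    {"role": "user", "content": "Write a function that reverses a string."},
]

# The system message comes first so it frames every subsequent user turn.
print(messages[0]["role"])  # system
```

Keeping role definition, behavior guidelines, and output format in the system message means user turns can stay short and task-focused.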

Next Steps

Now that you understand the basic concepts, you're ready to move on to the rest of the Getting Started guides.
