Help Center

Learn how to evaluate and compare LLMs with PeerLM.

Getting Started

  • What is PeerLM?
  • Quick Start Guide
  • Setting Up Your Library
  • Creating Your First Suite
  • Understanding Your Results

Core Concepts

  • How Evaluations Work
  • Credits & Pricing
  • Plans & Limits
  • Model Tiers & Capabilities
  • Caching
  • Deterministic Mode

Features & Workflows

  • Managing System Prompts
  • Managing Test Prompts & Datasets
  • Configuring Evaluation Criteria
  • Pass/Fail Thresholds
  • Auto-Run (CI/CD for LLMs)
  • Sharing & Exporting Results
  • Baselines & Run Comparison
  • Prompt Writing Guide

Team & Account

  • Workspaces & Organizations
  • Team Management
  • Billing & Subscriptions
  • Audit Log

API Reference

  • Authentication
  • Endpoints
  • Error Codes & Responses
  • MCP Server

Troubleshooting

  • Run Failures & Action Required
  • Common Error Messages
  • Evaluation Errors
  • Credit Discrepancies
  • Glossary