Component: Smrt.Tools.ChatLab

Canonical source: SmrtApps/src/Smrt.Tools.ChatLab/README.md (mirrored below)


Smrt.Tools.ChatLab

Dev harness for exercising Smrt.Chat.Core end-to-end with real provider execution.

Overview and responsibilities

  • Validates chat planning/routing and real provider execution in a dev harness.
  • Persists metadata-only chat session state.

Public surface / entry points

  • CLI entry point: dotnet run --project SmrtApps/src/Smrt.Tools.ChatLab/Smrt.Tools.ChatLab.csproj

Dependencies and integrations

  • Provider config/state + secrets: Smrt.CloudProviders (Windows Credential Manager for secrets).
  • Planning/routing: Smrt.Chat.Core.

Configuration and operational data

  • Provider configuration/state is managed via Smrt.CloudProviders.
  • Secrets are stored via Windows Credential Manager (through Smrt.CloudProviders).

Observability and diagnostics

  • Does not log prompts or responses.
  • Does not persist message content.

Testing and validation

  • Not documented yet.

Support Bundle

  • Prefer the Support Bundle when diagnosing provider configuration or execution issues.

What it does

  • Loads provider config/state via Smrt.CloudProviders.
  • Plans routing via Smrt.Chat.Core (capability routing + preferences).
  • Executes chat via a concrete IChatCompletionExecutor implementation.
  • Persists metadata-only chat session state via Smrt.Chat.Core.Storage.ChatStateStore.
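The steps above can be exercised from the command line; per the documented flags, --plan stops after planning/validation, so the sequence can be checked without a provider call (the vendor name here is illustrative):

```shell
# Validate config/state loading + capability routing only (no provider call):
dotnet run --project SmrtApps/src/Smrt.Tools.ChatLab/Smrt.Tools.ChatLab.csproj -- --prefer OpenAI --plan

# Run the full pipeline, including execution and metadata-only persistence:
dotnet run --project SmrtApps/src/Smrt.Tools.ChatLab/Smrt.Tools.ChatLab.csproj -- --prefer OpenAI --message "Hello"
```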

Privacy / Safety

  • Does not log prompts or responses.
  • Does not persist message content.
  • Uses Windows Credential Manager via Smrt.CloudProviders for secrets.

Usage

Run interactive mode:

  • dotnet run --project SmrtApps/src/Smrt.Tools.ChatLab/Smrt.Tools.ChatLab.csproj

Optional args:

  • --profile <profileId>
  • --prefer <vendor1,vendor2,...> (e.g. OpenAI,MicrosoftAzure)
  • --kind <general|document>
  • --system <text>
  • --message <text>
  • --vision
  • --image <path> (repeatable; implies --vision)
  • --plan / --dry-run (plan + validate configuration only; no provider calls)
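For example, the documented flags can be combined for a one-shot, non-interactive exchange (vendor names, system prompt, and message text below are illustrative):

```shell
dotnet run --project SmrtApps/src/Smrt.Tools.ChatLab/Smrt.Tools.ChatLab.csproj -- \
  --prefer OpenAI,Anthropic \
  --kind general \
  --system "You are a terse assistant." \
  --message "Summarize the benefits of capability routing in one sentence."
```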

Provider settings expected

This tool expects providers to be onboarded via Smrt.Tools.ProviderDoctor.

For Azure OpenAI, ensure the service settings include:

  • endpoint (absolute https URL)
  • deployment (deployment name)

For OpenAI, set a non-secret model id in settings (recommended):

  • model (e.g. gpt-4o-mini)

For Anthropic, set a non-secret model id in settings:

  • model (e.g. claude-3-5-sonnet-20241022)

For Google Gemini API (AI Studio), set a non-secret model id in settings:

  • model (e.g. gemini-1.5-flash)

For Google Vertex AI (service account auth), set non-secret settings:

  • project-id (GCP project id)
  • location (e.g. us-central1)
  • model (e.g. gemini-1.5-flash)

Vision chat

Vision chat is supported for:

  • OpenAI (chat completions, vision-capable model)
  • Azure OpenAI (chat completions, vision-capable deployment)
  • Google Gemini API (AI Studio)
  • Google Vertex AI

Use --image one or more times (it implies --vision):

  • dotnet run --project SmrtApps/src/Smrt.Tools.ChatLab/Smrt.Tools.ChatLab.csproj -- --prefer OpenAI --image C:\path\img.png --message "What is in this image?"
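Because --image is repeatable, a small shell loop can assemble the flags when testing against several files (the file names below are placeholders):

```shell
# Build a repeated --image flag list from example file names.
IMAGES=""
for f in img1.png img2.png; do
  IMAGES="$IMAGES --image $f"
done

# Print the full command with the repeated --image flags appended.
echo "dotnet run --project SmrtApps/src/Smrt.Tools.ChatLab/Smrt.Tools.ChatLab.csproj --$IMAGES --message \"Compare these images\""
```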

To validate routing plus required config/secrets without making a billable LLM call:

  • dotnet run --project SmrtApps/src/Smrt.Tools.ChatLab/Smrt.Tools.ChatLab.csproj -- --prefer OpenAI --vision --plan

Optional Anthropic settings:

  • anthropic-version (defaults to 2023-06-01)
  • max-tokens (defaults to 512)
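Assuming ProviderDoctor's service-setting-set command accepts these setting names the same way it accepts model for Anthropic (an unverified assumption; only the model setting is documented), they could presumably be set like so:

```shell
# Hypothetical: set optional Anthropic settings via the same ProviderDoctor pattern.
service-setting-set Anthropic <profileId> Anthropic anthropic-version 2023-06-01
service-setting-set Anthropic <profileId> Anthropic max-tokens 512
```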

You can set these with ProviderDoctor:

  • service-setting-set MicrosoftAzure <profileId> AzureOpenAI endpoint https://...
  • service-setting-set MicrosoftAzure <profileId> AzureOpenAI deployment <deployment>
  • service-setting-set OpenAI <profileId> OpenAI model gpt-4o-mini
  • service-setting-set Anthropic <profileId> Anthropic model claude-3-5-sonnet-20241022
  • service-setting-set GoogleCloud <profileId> GeminiApi model gemini-1.5-flash
  • service-setting-set GoogleCloud <profileId> VertexAI project-id <your-project-id>
  • service-setting-set GoogleCloud <profileId> VertexAI location us-central1
  • service-setting-set GoogleCloud <profileId> VertexAI model gemini-1.5-flash