feat(agent): add structured outputs and media archive support

2026-04-10 19:01:04 +02:00
parent a1df097f9c
commit 9732022461
34 changed files with 3276 additions and 482 deletions

components/agents/CLAUDE.md Normal file

@@ -0,0 +1,130 @@
# components/agents/ - Agent Specs (Markdown)
This folder contains the human-readable agent specifications.
Each file serves as both product documentation and a curated prompt source.
---
## Dual Model (binding)
The agent runtime is based on two sources:
1. **Structure in TypeScript**
   - `lib/agent-definitions.ts`
   - `lib/agent-run-contract.ts`
2. **Curated prompt segments in Markdown**
   - `components/agents/*.md`
   - compiled via `scripts/compile-agent-docs.ts`
   - consumed from `lib/generated/agent-doc-segments.ts`
Important:
- `convex/agents.ts` reads **no** raw Markdown at runtime.
- Only marked `AGENT_PROMPT_SEGMENT` blocks are prompt-relevant.
- Unmarked text is documentation for humans.
---
## File Convention per Agent
The filename must follow the agent-id pattern, e.g.:
- `campaign-distributor.md` for `campaign-distributor`
The mapping happens via `docs.markdownPath` in `lib/agent-definitions.ts`.
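As a hedged sketch of this mapping (only `docs.markdownPath` is confirmed by this doc; every other field name here is an assumption, not taken from `lib/agent-definitions.ts`):

```typescript
// Hypothetical excerpt: illustrates the id-to-markdown mapping convention.
// Field names other than docs.markdownPath are assumed for illustration.
type AgentDefinitionSketch = {
  id: string; // e.g. "campaign-distributor"
  docs: {
    // Must point at the markdown file whose name matches the agent id.
    markdownPath: string;
  };
};

const campaignDistributor: AgentDefinitionSketch = {
  id: "campaign-distributor",
  docs: { markdownPath: "components/agents/campaign-distributor.md" },
};
```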
---
## Frontmatter (required)
Every agent file starts with frontmatter:
```md
---
name: Campaign Distributor
description: ...
tools: WebFetch, WebSearch, Read, Write, Edit
color: yellow
emoji: lemon
vibe: ...
---
```
Notes:
- `emoji` should be maintained as an ASCII token (e.g. `lemon`), not as a Unicode character.
- The frontmatter is the documentation reference and must stay consistent with the TS definition.
---
## Prompt Segment Markers (required)
Currently required keys:
- `role`
- `style-rules`
- `decision-framework`
- `channel-notes`
Marker format:
```md
<!-- AGENT_PROMPT_SEGMENT:role:start -->
Segment text
<!-- AGENT_PROMPT_SEGMENT:role:end -->
```
Rules:
- Exactly **one** start marker and **one** end marker per required key.
- No empty segment content.
- Marker names must match exactly.
- Segment order during generation follows `AGENT_PROMPT_SEGMENT_KEYS`.
Additional optional segment types are allowed, but they must first be wired into the compiler/runtime prompting before they have any effect.
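A minimal sketch of how such a compiler pass could extract the marked blocks (assumed shape; the real `scripts/compile-agent-docs.ts` may differ in structure and error handling):

```typescript
// Hypothetical extraction pass: pulls exactly one block per required key
// and fails loudly on missing, duplicate, or empty segments.
const AGENT_PROMPT_SEGMENT_KEYS = [
  "role",
  "style-rules",
  "decision-framework",
  "channel-notes",
] as const;

function extractSegments(markdown: string): Record<string, string> {
  const segments: Record<string, string> = {};
  for (const key of AGENT_PROMPT_SEGMENT_KEYS) {
    const pattern = new RegExp(
      `<!-- AGENT_PROMPT_SEGMENT:${key}:start -->([\\s\\S]*?)<!-- AGENT_PROMPT_SEGMENT:${key}:end -->`,
      "g",
    );
    const matches = [...markdown.matchAll(pattern)];
    if (matches.length !== 1) {
      throw new Error(`Expected exactly one "${key}" segment, found ${matches.length}`);
    }
    const content = matches[0][1].trim();
    if (content === "") {
      throw new Error(`Segment "${key}" must not be empty`);
    }
    segments[key] = content;
  }
  return segments;
}
```

The "exactly one match" check enforces the one-start/one-end rule above; an unmarked paragraph never reaches the output.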
---
## Writing Rules for Segments
- Write action-oriented and specific copy.
- No hidden reasoning trail (no chain-of-thought).
- No invented product facts, statistics, or deadlines.
- Keep channel rules concrete, but do not over-engineer for fragile one-off formats.
- Always target structured runtime output (`artifactType`, `sections`, `metadata`, `qualityChecks`, `previewText`, `body`).
---
## After Every Change
1. Compile the prompt segments:
```bash
npx tsx scripts/compile-agent-docs.ts
```
2. Run the relevant tests:
```bash
npm run test -- tests/lib/agent-doc-segments.test.ts tests/lib/agent-prompting.test.ts
```
3. For structural changes, additionally:
```bash
npm run test -- tests/lib/agent-definitions.test.ts tests/lib/agent-run-contract.test.ts
```
---
## When to Update Other Files
- Adjust `lib/agent-definitions.ts` when inputs, channels, rules, blueprints, or parameters change.
- Adjust `lib/agent-prompting.ts` when new segment types should actually flow into prompts.
- Adjust `scripts/compile-agent-docs.ts` when the required segment keys change.
- Adjust `messages/de.json` / `messages/en.json` when new UI labels become visible.
---
## Anti-Patterns
- A complete monolithic prompt document without marker structure.
- Raw Markdown as runtime input without a compile step.
- Agent output as free text only, without structured deliverables.
- Segment content that contradicts the TS contracts.

View File

@@ -1,189 +1,124 @@
---
name: Campaign Distributor
description: Develops and distributes LemonSpace campaign content channel-appropriately across social media and messengers. Transforms canvas outputs into platform-specific posts, stories, captions, and messages, with a consistent brand voice and maximum reach.
description: Turns LemonSpace visual variants and optional campaign context into channel-native distribution packages with explicit asset assignment, format guidance, and publish-ready copy.
tools: WebFetch, WebSearch, Read, Write, Edit
color: yellow
emoji: 🍋
vibe: Transforms canvas outputs into campaign-ready content for every channel.
emoji: lemon
vibe: Transforms canvas outputs into channel-native campaign content that can ship immediately.
---
# Campaign Distributor Agent
## Role
## Prompt Segments (Compiled)
Specialist for cross-channel content distribution in the LemonSpace ecosystem. The agent takes finished canvas outputs (AI images, variants, renders) and transforms them into platform-appropriate content, with format, tone, and rhythm adapted to each channel. No generic one-size-fits-all content, but native content behavior per platform.
This document has a dual role in the agent runtime:
What sets it apart from generic social media agents: the Campaign Distributor knows the LemonSpace canvas workflow. It knows how image variants are created, how compare nodes are used for A/B decisions, and can make distribution suggestions directly from a canvas export.
1. Product and behavior reference for humans.
2. Source for prompt segment extraction during compile time.
---
Only explicitly marked `AGENT_PROMPT_SEGMENT` blocks are compiled into runtime prompt input.
Unmarked prose in this file stays documentation-only and is not injected into model prompts.
## Core Capabilities
<!-- AGENT_PROMPT_SEGMENT:role:start -->
You are the Campaign Distributor for LemonSpace, an AI creative canvas used by small design and marketing teams. Your mission is to transform visual canvas outputs and optional campaign briefing into channel-native distribution packages that are ready to publish, mapped to the best-fitting asset, and explicit about assumptions when context is missing.
<!-- AGENT_PROMPT_SEGMENT:role:end -->
- **Canvas-to-content**: Takes image variants, AI outputs, and render exports from LemonSpace and derives channel-specific content packages from them
- **Channel strategy**: Develops distribution plans that account for format requirements, algorithm logic, and user behavior per platform
- **Messenger integration**: Plans and writes content for direct-messaging channels (WhatsApp Business, Telegram, newsletter email), not just broadcast but dialogue-oriented
- **Caption & copy**: Creates platform-appropriate texts, hashtag sets, CTAs, and alt texts for all channels
- **Posting rhythm**: Recommends schedules based on platform data and target audience
- **Variant control**: Decides which image variant runs on which channel (based on format, aspect ratio, target audience)
- **Performance hypotheses**: Formulates A/B theses for variant comparisons before data is available
<!-- AGENT_PROMPT_SEGMENT:style-rules:start -->
Write specific, decisive, and immediately usable copy. Prefer concrete verbs over vague language, keep claims honest, and never invent product facts, statistics, or deadlines that were not provided. Adapt tone by channel while preserving campaign intent, and keep each deliverable concise enough to be practical for operators.
<!-- AGENT_PROMPT_SEGMENT:style-rules:end -->
---
<!-- AGENT_PROMPT_SEGMENT:decision-framework:start -->
Reason in this order: (1) validate required visual context, (2) detect language from brief and default to English if ambiguous, (3) assign assets to channels by format fit and visual intent, (4) select the best output blueprint per channel, (5) generate publish-ready sections and metadata, (6) surface assumptions and format risks explicitly. Ask clarifying questions only when required fields are missing or conflicting. For each selected channel, produce one structured deliverable with artifactType, previewText, sections, metadata, and qualityChecks.
<!-- AGENT_PROMPT_SEGMENT:decision-framework:end -->
## Channel Matrix
<!-- AGENT_PROMPT_SEGMENT:channel-notes:start -->
Instagram needs hook-first visual storytelling with clear CTA and practical hashtag sets. LinkedIn needs professional framing, strong insight opening, and comment-driving close without hype language. X needs brevity and thread-aware sequencing when 280 characters are exceeded. TikTok needs native conversational phrasing and 9:16 adaptation notes. WhatsApp and Telegram need direct, high-signal copy with one clear action. Newsletter needs subject cue, preview line, and a reusable body block that fits any email builder. If asset format mismatches channel constraints, flag it and suggest a fix.
<!-- AGENT_PROMPT_SEGMENT:channel-notes:end -->
### Social Media
## Runtime Contract Snapshot
| Channel | Main format | Tone | Notes |
|---------|-------------|------|-------|
| Instagram Feed | 1:1, 4:5 | Visual, concise | Use carousels for variant comparisons |
| Instagram Stories | 9:16 | Fast, direct | Swipe-up/link stickers, polls |
| Instagram Reels | 9:16 video | Entertaining | AI process as timelapse/BTS |
| LinkedIn | 1:1, 1200×627 | Professional, substantial | Thought leadership, product demos |
| Twitter / X | 16:9, 1:1 | Concise, bold | Threads for canvas workflows |
| TikTok | 9:16 video | Native, lo-fi | Tool demos, before/after |
| Pinterest | 2:3, 9:16 | Inspiring | Mood boards from canvas outputs |
This agent is wired through two contracts that must stay in sync:
### Messenger & Direct
- Structural/runtime contract in TypeScript (`lib/agent-definitions.ts`, `lib/agent-run-contract.ts`).
- Curated prompt influence from compiled markdown segments (`scripts/compile-agent-docs.ts` -> `lib/generated/agent-doc-segments.ts`).
| Channel | Format | Tone | Notes |
|---------|--------|------|-------|
| WhatsApp Business | Image + text, status | Personal, direct | Campaign-launch announcements, exclusive previews |
| Telegram | Image, channel post, bot | Community-oriented | Changelog posts, beta access |
| Email Newsletter | HTML, text fallback | Personal, curated | Canvas workflow tutorials, product updates |
| Discord | Embeds, channels | Community | Creator feedback, feature previews |
`convex/agents.ts` consumes generated segments through `lib/agent-prompting.ts`. It does not parse markdown at runtime.
---
### Output shape expectations
## Canvas Workflow Integration
Execution outputs are expected to provide structured deliverables using:
The agent understands LemonSpace-specific concepts and can work with them directly:
- `artifactType`
- `previewText`
- `sections[]` with `id`, `label`, `content`
- `metadata` as `Record<string, string | string[]>`
- `qualityChecks[]`
- **Image variants from compare nodes**: Which variant goes on Instagram, which on LinkedIn? Reasoning and recommendation.
- **AI image outputs**: Automatically suggest alt text, caption, and hashtags based on the prompt used.
- **Render node exports**: Name PNG/WebP files channel-appropriately, suggest metadata.
- **Frame dimensions**: Check whether canvas frames match the target channel specs (e.g. 1080×1080 for Instagram Feed). On mismatch: cropping recommendation.
- **Branching stacks**: Deliberately split different adjustment variants (warm vs. cool) across different platforms.
`body` remains a compatibility fallback for older render paths.
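Sketched as a TypeScript shape (an assumption for illustration; the authoritative contract lives in `lib/agent-run-contract.ts` and may differ):

```typescript
// Hypothetical deliverable shape mirroring the fields listed above.
type AgentDeliverableSketch = {
  artifactType: string;
  previewText: string;
  sections: Array<{ id: string; label: string; content: string }>;
  metadata: Record<string, string | string[]>;
  qualityChecks: string[];
  body?: string; // compatibility fallback for older render paths
};

const example: AgentDeliverableSketch = {
  artifactType: "instagram-feed-post",
  previewText: "Hook line for the feed post",
  sections: [{ id: "caption", label: "Caption", content: "Publish-ready copy." }],
  metadata: { language: "en", assets: ["variant-warm", "variant-cool"] },
  qualityChecks: ["CTA is honest and actionable"],
};
```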
---
## Node Purpose
## Specialized Skills
The node takes visual assets plus optional brief context and emits structured `agent-output` nodes per selected channel.
It does not emit raw text blobs as primary output.
- Per-platform algorithm optimization (reach vs. engagement logic, posting time windows)
- Hashtag research and clustering (branded, community, discovery)
- Caption structures: hook → body → CTA, adapted per platform
- Messenger broadcast texts: short, action-triggering, with clear added value
- Newsletter sequence design for onboarding and feature announcements
- Before/after storytelling with canvas outputs (image node → render node)
- Community management templates for comment replies and DMs
- UTM parameter logic for per-channel attribution
## Canonical Inputs
---
| Handle | Source types | Required | Notes |
| --- | --- | --- | --- |
| `image` | `image`, `ai-image`, `render`, `compare`, optional `asset` / `video` | yes | One or more visual assets are required. |
| `brief` | `text`, `note` | no | Audience, tone, campaign goal, constraints, language hints. |
## Workflow Integration
If brief is missing, the agent should infer from asset labels/prompts and write assumptions explicitly.
- **Handoff from**: AI image node, render node, compare node (canvas exports), Content Creator agent
- **Collaborates with**: Instagram Curator agent (fine-tuning Reels/Stories), email agent, analytics agent
- **Delivers to**: scheduling tool, channel manager, analytics reporter
- **Escalates to**: Brand Guardian on messaging deviations, Legal Compliance on regulated topics
## Operator Parameters (Definition Layer)
---
- `targetChannels`: multi-select channel set
- `variantsPerChannel`: `1 | 2 | 3`
- `toneOverride`: `auto | professional | casual | inspiring | direct`
## Decision Framework
## Channel Coverage
Use this agent when:
- canvas outputs (images, variants, renders) need to be distributed across multiple channels
- channel-specific captions, hashtags, and CTAs are needed
- variant decisions (which image on which channel) have to be made
- messenger campaigns (WhatsApp, Telegram, newsletter) are being planned
- a posting calendar for a canvas project output needs to be created
- before/after or process content is developed from the canvas workflow
- Instagram Feed
- Instagram Stories
- Instagram Reels
- LinkedIn
- X (Twitter)
- TikTok
- Pinterest
- WhatsApp Business
- Telegram
- Email Newsletter
- Discord
---
## Analyze Stage Responsibilities
## Success Metrics
Before execute, the agent should build a plan that includes:
- **Instagram engagement rate**: ≥4% feed, ≥6% stories
- **LinkedIn reach**: ≥20% monthly growth in impressions
- **Newsletter open rate**: ≥35% (indie/creator segment), ≥25% (SMB)
- **WhatsApp Business**: ≥60% open rate, ≥15% click rate on links
- **Telegram**: ≥50% views per post in the channel
- **Follower growth**: ≥8% monthly across all channels
- **Canvas-to-post cycle time**: ≤30 minutes from export to distribution-ready content package
- **Variant performance delta**: A/B hypotheses have a ≥70% hit rate
- channel-to-deliverable step mapping
- asset assignment by channel with rationale
- language detection result
- explicit assumptions list
---
## Execute Stage Responsibilities
## Example Requests
For each planned channel step, the agent should produce:
- "I exported 6 image variants from my LemonSpace canvas. Which one belongs on which channel?"
- "Write captions for Instagram, LinkedIn, and a WhatsApp status for this product image"
- "Develop a 2-week distribution plan for our campaign launch"
- "Create Telegram channel posts for our LemonSpace feature update"
- "Write a newsletter for our Starter users about the new image-editing nodes"
- "Which caption structure works for before/after posts on TikTok vs. LinkedIn?"
- "Create a hashtag set for our AI creative workflow posts"
- publish-ready copy sections
- channel format guidance
- CTA and accessibility/context signals where relevant
- metadata that explains asset, language, tone, and format decisions
- quality checks that are user-visible and testable
---
## Constraints
## Content Cascade Principle
- Do not fabricate facts, claims, or urgency.
- Keep CTA honest and actionable.
- If channel-format mismatch exists, call it out and propose fix.
- When context is missing, expose assumptions instead of pretending certainty.
Every canvas output is used to the fullest; no content is created for only one channel:
## Human Reference Examples
```
Canvas export (render node)
→ Instagram feed post (1:1, curated caption)
→ Instagram story (9:16 crop, swipe-up)
→ LinkedIn post (1:1, professional context)
→ Twitter/X thread (process story, multiple images)
→ WhatsApp status (compressed, direct CTA)
→ Newsletter section (embedded with context)
→ Telegram channel post (community framing)
```
The cascade makes LemonSpace-specific use of the different adjustment-stack variants: warm variant → Instagram/Pinterest, cool variant → LinkedIn/newsletter.
---
## Messenger Strategy
### WhatsApp Business
- **Broadcast lists**: Segmented by customer status (Free, Starter, Pro, Max)
- **Status updates**: Exclusive previews of canvas outputs before public release
- **Welcome sequence**: Automated flow after sign-up with first canvas tips
- **Tone**: Personal, concise, always with concrete value
### Telegram
- **Public channel**: Feature announcements, changelog, canvas tutorials
- **Community group**: Creator exchange, feedback, beta-testing recruitment
- **Bot integration**: Canvas export notifications, credit alerts (phase 2)
- **Tone**: Community-oriented, technically informed, open to discussion
### Email Newsletter
- **Segments**: New users (onboarding), active creators (feature deep dives), inactive users (re-engagement)
- **Cadence**: Weekly for active users, monthly for passive segments
- **Content**: Canvas workflow tutorials with screenshot sequences, model recommendations, credit tips
- **Tone**: Curated, substantial, respects the reader's time
### Discord
- **Channels**: #canvas-showcase, #feedback, #feature-requests, #changelog
- **Engagement**: Creator spotlights with canvas outputs, monthly challenges
- **Tone**: Community-first, technically open, mistakes are communicated transparently
---
## Communication Style
- **Direct**: No generic platitudes; specific, actionable recommendations
- **Channel-specific**: Writes and thinks natively in the language of each channel
- **Output-oriented**: Every recommendation ends with a concrete artifact (text, plan, schedule)
- **LemonSpace-aware**: Understands canvas concepts (nodes, variants, render exports) and communicates them as strengths
---
## Learning Patterns
- **Algorithm updates**: Tracks platform changes in reach and engagement logic
- **Content performance**: Documents which canvas output types perform on which channel
- **Messenger open rates**: Learns optimal send times per segment
- **Channel trends**: Watches which content formats are currently gaining reach (e.g. carousel vs. single image)
- **LemonSpace ICP behavior**: Adapts strategy to the behavior of small design and marketing teams
- "Map these 4 campaign variants to Instagram, LinkedIn, and X."
- "Create WhatsApp, Telegram, and newsletter package from this render output."
- "Give me two per-channel variants with professional tone override."
- "No brief given: infer safely from asset prompts and list assumptions."

View File

@@ -68,8 +68,8 @@ All available node types are defined in `lib/canvas-node-catalog.ts`:
| `video-prompt` | 2 | ✅ | ai-output | source: `video-prompt-out`, target: `video-prompt-in` |
| `ai-text` | 2 | 🔲 | ai-output | source: `text-out`, target: `text-in` |
| `ai-video` | 2 | ✅ (systemOutput) | ai-output | source: `video-out`, target: `video-in` |
| `agent` | 2 | ✅ | control | target: `agent-in` (input-only MVP) |
| `agent-output` | 3 | 🔲 | ai-output | systemOutput: true |
| `agent` | 2 | ✅ | control | target: `agent-in`, source (default) |
| `agent-output` | 2 | ✅ (systemOutput) | ai-output | target: `agent-output-in` |
| `crop` | 2 | 🔲 | transform | 🔲 |
| `bg-remove` | 2 | 🔲 | transform | 🔲 |
| `upscale` | 2 | 🔲 | transform | 🔲 |
@@ -215,7 +215,17 @@ In **Light Mode**, the actual edge `stroke` is likewise derived from this accent
| `ai-image-node.tsx` | AI image output node with image preview, metadata, retry |
| `video-prompt-node.tsx` | AI video control node with model/duration selector, credit display, generate button |
| `ai-video-node.tsx` | AI video output node with video player, metadata, retry button |
| `agent-node.tsx` | Static agent input node (Campaign Distributor) with channel/input/output metadata |
| `agent-node.tsx` | Definition-driven agent node with briefing, constraints, model selection, run/resume, and clarification flow |
| `agent-output-node.tsx` | Agent output node for skeletons plus structured deliverables (`sections`, `metadata`, `qualityChecks`, `previewText`) with `body` fallback |
---
## Agent Runtime Nodes (current)
- `agent-node.tsx` reads template metadata via `getAgentTemplate(...)` (projected from `lib/agent-definitions.ts`).
- Node data contains `briefConstraints`, `clarificationQuestions`, `clarificationAnswers`, `executionSteps`, and run status.
- Run starts `api.agents.runAgent`; clarification submit uses `api.agents.resumeAgent`.
- `agent-output-node.tsx` prefers rendering structured outputs (sections/metadata/quality checks/preview) and falls back to JSON or plain-text `body`.
---
@@ -281,6 +291,6 @@ useCanvasData (use-canvas-data.ts)
- **Optimistic IDs:** Temporary nodes/edges get IDs with the `optimistic_` / `optimistic_edge_` prefix and are replaced by real Convex IDs once the mutation completes.
- **Node taxonomy:** All node types are defined in `lib/canvas-node-catalog.ts`. Phase 2/3 nodes have `implemented: false` and a `disabledHint`.
- **Video connection policy:** `video-prompt` may **only** be connected to `ai-video` (and vice versa). `text → video-prompt` is allowed (prompt source). `ai-video → compare` is allowed.
- **Agent MVP:** `agent` is currently input-only (`agent-in`), without an outgoing handle. It accepts only content/context sources (e.g. `render`, `compare`, `text`, `image`), no prompt control nodes.
- **Agent flow:** `agent` accepts only content/context sources (e.g. `render`, `compare`, `text`, `image`) as input; outgoing edges are intended for `agent -> agent-output`.
- **Convex generated types:** `api.ai.generateVideo` may not be exported in `convex/_generated/api.d.ts`. The code uses `api as unknown as {...}` as a workaround. An `npx convex dev` cycle would generate the types correctly.
- **Canvas graph query:** The canvas uses `canvasGraph.get` (from `convex/canvasGraph.ts`) instead of separate `nodes.list`/`edges.list` queries. Optimistic updates run through `canvas-graph-query-cache.ts`.

View File

@@ -1,6 +1,6 @@
"use client";
import { useCallback, useEffect, useMemo, useState } from "react";
import { useCallback, useMemo, useState } from "react";
import { Bot } from "lucide-react";
import { Handle, Position, type Node, type NodeProps } from "@xyflow/react";
import { useAction } from "convex/react";
@@ -16,7 +16,6 @@ import {
DEFAULT_AGENT_MODEL_ID,
getAgentModel,
getAvailableAgentModels,
type AgentModelId,
} from "@/lib/agent-models";
import {
type AgentClarificationAnswerMap,
@@ -88,11 +87,16 @@ function useSafeSubscription() {
}
}
function useSafeAction(reference: FunctionReference<"action", "public", any, unknown>) {
function useSafeAction<Args extends Record<string, unknown>, Output>(
reference: FunctionReference<"action", "public", Args, Output>,
) {
try {
return useAction(reference);
} catch {
return async (_args: any) => undefined;
return async (args: Args): Promise<Output | undefined> => {
void args;
return undefined;
};
}
}
@@ -183,6 +187,10 @@ function CompactList({ items }: { items: readonly string[] }) {
);
}
function toTemplateTranslationKey(templateId: string): string {
return templateId.replace(/-([a-z])/g, (_match, letter: string) => letter.toUpperCase());
}
export default function AgentNode({ id, data, selected }: NodeProps<AgentNodeType>) {
const t = useTranslations("agentNode");
const locale = useLocale();
@@ -195,13 +203,35 @@ export default function AgentNode({ id, data, selected }: NodeProps<AgentNodeTyp
const userTier = normalizePublicTier(subscription?.tier ?? "free");
const availableModels = useMemo(() => getAvailableAgentModels(userTier), [userTier]);
const [modelId, setModelId] = useState(nodeData.modelId ?? DEFAULT_AGENT_MODEL_ID);
const [clarificationAnswers, setClarificationAnswers] = useState<AgentClarificationAnswerMap>(
normalizeClarificationAnswers(nodeData.clarificationAnswers),
const clarificationAnswersFromNode = useMemo(
() => normalizeClarificationAnswers(nodeData.clarificationAnswers),
[nodeData.clarificationAnswers],
);
const [briefConstraints, setBriefConstraints] = useState<AgentBriefConstraints>(
normalizeBriefConstraints(nodeData.briefConstraints),
const briefConstraintsFromNode = useMemo(
() => normalizeBriefConstraints(nodeData.briefConstraints),
[nodeData.briefConstraints],
);
const nodeModelId =
typeof nodeData.modelId === "string" && nodeData.modelId.trim().length > 0
? nodeData.modelId
: DEFAULT_AGENT_MODEL_ID;
const [modelDraftId, setModelDraftId] = useState<string | null>(null);
const [clarificationAnswersDraft, setClarificationAnswersDraft] =
useState<AgentClarificationAnswerMap | null>(null);
const [briefConstraintsDraft, setBriefConstraintsDraft] =
useState<AgentBriefConstraints | null>(null);
const modelId = modelDraftId === nodeModelId ? nodeModelId : modelDraftId ?? nodeModelId;
const clarificationAnswers =
clarificationAnswersDraft &&
areAnswerMapsEqual(clarificationAnswersDraft, clarificationAnswersFromNode)
? clarificationAnswersFromNode
: clarificationAnswersDraft ?? clarificationAnswersFromNode;
const briefConstraints =
briefConstraintsDraft &&
areBriefConstraintsEqual(briefConstraintsDraft, briefConstraintsFromNode)
? briefConstraintsFromNode
: briefConstraintsDraft ?? briefConstraintsFromNode;
const agentActionsApi = api as unknown as {
agents: {
@@ -234,57 +264,30 @@ export default function AgentNode({ id, data, selected }: NodeProps<AgentNodeTyp
const resumeAgent = useSafeAction(agentActionsApi.agents.resumeAgent);
const normalizedLocale = locale === "en" ? "en" : "de";
useEffect(() => {
setModelId(nodeData.modelId ?? DEFAULT_AGENT_MODEL_ID);
}, [nodeData.modelId]);
useEffect(() => {
const normalized = normalizeClarificationAnswers(nodeData.clarificationAnswers);
setClarificationAnswers((current) => {
if (areAnswerMapsEqual(current, normalized)) {
return current;
}
return normalized;
});
}, [nodeData.clarificationAnswers]);
useEffect(() => {
const normalized = normalizeBriefConstraints(nodeData.briefConstraints);
setBriefConstraints((current) => {
if (areBriefConstraintsEqual(current, normalized)) {
return current;
}
return normalized;
});
}, [nodeData.briefConstraints]);
useEffect(() => {
if (availableModels.length === 0) {
return;
}
const resolvedModelId = useMemo(() => {
if (availableModels.some((model) => model.id === modelId)) {
return;
return modelId;
}
const nextModelId = availableModels[0]!.id;
setModelId(nextModelId);
return availableModels[0]?.id ?? DEFAULT_AGENT_MODEL_ID;
}, [availableModels, modelId]);
const selectedModel =
getAgentModel(modelId) ??
getAgentModel(resolvedModelId) ??
availableModels[0] ??
getAgentModel(DEFAULT_AGENT_MODEL_ID);
const resolvedModelId = selectedModel?.id ?? DEFAULT_AGENT_MODEL_ID;
const creditCost = selectedModel?.creditCost ?? 0;
const clarificationQuestions = nodeData.clarificationQuestions ?? [];
const templateTranslationKey = `templates.${toTemplateTranslationKey(template?.id ?? DEFAULT_AGENT_TEMPLATE_ID)}`;
const translatedTemplateName = t(`${templateTranslationKey}.name`);
const translatedTemplateDescription = t(`${templateTranslationKey}.description`);
const templateName =
template?.id === "campaign-distributor"
? t("templates.campaignDistributor.name")
: (template?.name ?? "");
translatedTemplateName === `${templateTranslationKey}.name`
? (template?.name ?? "")
: translatedTemplateName;
const templateDescription =
template?.id === "campaign-distributor"
? t("templates.campaignDistributor.description")
: (template?.description ?? "");
translatedTemplateDescription === `${templateTranslationKey}.description`
? (template?.description ?? "")
: translatedTemplateDescription;
const isExecutionActive = nodeData._status === "analyzing" || nodeData._status === "executing";
const executionProgressLine = useMemo(() => {
if (nodeData._status !== "executing") {
@@ -373,7 +376,7 @@ export default function AgentNode({ id, data, selected }: NodeProps<AgentNodeTyp
const handleModelChange = useCallback(
(value: string) => {
setModelId(value);
setModelDraftId(value);
void persistNodeData({ modelId: value });
},
[persistNodeData],
@@ -381,33 +384,29 @@ export default function AgentNode({ id, data, selected }: NodeProps<AgentNodeTyp
const handleClarificationAnswerChange = useCallback(
(questionId: string, value: string) => {
setClarificationAnswers((prev) => {
const next = {
...prev,
[questionId]: value,
};
void persistNodeData({ clarificationAnswers: next });
return next;
});
const next = {
...clarificationAnswers,
[questionId]: value,
};
setClarificationAnswersDraft(next);
void persistNodeData({ clarificationAnswers: next });
},
[persistNodeData],
[clarificationAnswers, persistNodeData],
);
const handleBriefConstraintsChange = useCallback(
(patch: Partial<AgentBriefConstraints>) => {
setBriefConstraints((prev) => {
const next = {
...prev,
...patch,
};
void persistNodeData({ briefConstraints: next });
return next;
});
const next = {
...briefConstraints,
...patch,
};
setBriefConstraintsDraft(next);
void persistNodeData({ briefConstraints: next });
},
[persistNodeData],
[briefConstraints, persistNodeData],
);
const handleRunAgent = useCallback(async () => {
const handleRunAgent = async () => {
if (isExecutionActive) {
return;
}
@@ -431,9 +430,9 @@ export default function AgentNode({ id, data, selected }: NodeProps<AgentNodeTyp
modelId: resolvedModelId,
locale: normalizedLocale,
});
}, [isExecutionActive, nodeData.canvasId, id, normalizedLocale, resolvedModelId, runAgent, status.isOffline, t]);
};
const handleSubmitClarification = useCallback(async () => {
const handleSubmitClarification = async () => {
if (status.isOffline) {
toast.warning(
t("offlineTitle"),
@@ -453,7 +452,7 @@ export default function AgentNode({ id, data, selected }: NodeProps<AgentNodeTyp
clarificationAnswers,
locale: normalizedLocale,
});
}, [clarificationAnswers, nodeData.canvasId, id, normalizedLocale, resumeAgent, status.isOffline, t]);
};
if (!template) {
return null;

View File

@@ -12,6 +12,15 @@ type AgentOutputNodeData = {
stepTotal?: number;
title?: string;
channel?: string;
artifactType?: string;
previewText?: string;
sections?: Array<{
id?: string;
label?: string;
content?: string;
}>;
metadata?: Record<string, string | string[] | unknown>;
qualityChecks?: string[];
outputType?: string;
body?: string;
_status?: string;
@@ -40,6 +49,70 @@ function tryFormatJsonBody(body: string): string | null {
}
}
function normalizeSections(raw: AgentOutputNodeData["sections"]) {
if (!Array.isArray(raw)) {
return [] as Array<{ id: string; label: string; content: string }>;
}
const sections: Array<{ id: string; label: string; content: string }> = [];
for (const item of raw) {
const label = typeof item?.label === "string" ? item.label.trim() : "";
const content = typeof item?.content === "string" ? item.content.trim() : "";
if (label === "" || content === "") {
continue;
}
const id = typeof item.id === "string" && item.id.trim() !== "" ? item.id.trim() : label;
sections.push({ id, label, content });
}
return sections;
}
function normalizeMetadata(raw: AgentOutputNodeData["metadata"]) {
if (!raw || typeof raw !== "object" || Array.isArray(raw)) {
return [] as Array<[string, string]>;
}
const entries: Array<[string, string]> = [];
for (const [rawKey, rawValue] of Object.entries(raw)) {
const key = rawKey.trim();
if (key === "") {
continue;
}
if (typeof rawValue === "string") {
const value = rawValue.trim();
if (value !== "") {
entries.push([key, value]);
}
continue;
}
if (Array.isArray(rawValue)) {
const values = rawValue
.filter((value): value is string => typeof value === "string")
.map((value) => value.trim())
.filter((value) => value !== "");
if (values.length > 0) {
entries.push([key, values.join(", ")]);
}
}
}
return entries;
}
function normalizeQualityChecks(raw: AgentOutputNodeData["qualityChecks"]): string[] {
if (!Array.isArray(raw)) {
return [];
}
return raw
.filter((value): value is string => typeof value === "string")
.map((value) => value.trim())
.filter((value) => value !== "");
}
export default function AgentOutputNode({ data, selected }: NodeProps<AgentOutputNodeType>) {
const t = useTranslations("agentOutputNode");
const nodeData = data as AgentOutputNodeData;
@@ -65,6 +138,16 @@ export default function AgentOutputNode({ data, selected }: NodeProps<AgentOutpu
nodeData.title ??
(isSkeleton ? t("plannedOutputDefaultTitle") : t("defaultTitle"));
const body = nodeData.body ?? "";
const artifactType = nodeData.artifactType ?? nodeData.outputType ?? "";
const sections = normalizeSections(nodeData.sections);
const metadataEntries = normalizeMetadata(nodeData.metadata);
const qualityChecks = normalizeQualityChecks(nodeData.qualityChecks);
const previewText =
typeof nodeData.previewText === "string" && nodeData.previewText.trim() !== ""
? nodeData.previewText.trim()
: sections[0]?.content ?? "";
const hasStructuredOutput =
sections.length > 0 || metadataEntries.length > 0 || qualityChecks.length > 0 || previewText !== "";
const formattedJsonBody = isSkeleton ? null : tryFormatJsonBody(body);
return (
@@ -110,44 +193,108 @@ export default function AgentOutputNode({ data, selected }: NodeProps<AgentOutpu
<div className="min-w-0">
<p className="text-[10px] font-semibold uppercase tracking-wide text-muted-foreground">{t("channelLabel")}</p>
<p className="truncate text-xs font-medium text-foreground/90" title={nodeData.channel}>
{nodeData.channel ?? "-"}
{nodeData.channel ?? t("emptyValue")}
</p>
</div>
<div className="min-w-0">
<p className="text-[10px] font-semibold uppercase tracking-wide text-muted-foreground">{t("typeLabel")}</p>
<p className="truncate text-xs font-medium text-foreground/90" title={nodeData.outputType}>
{nodeData.outputType ?? "-"}
<p className="text-[10px] font-semibold uppercase tracking-wide text-muted-foreground">{t("artifactTypeLabel")}</p>
<p className="truncate text-xs font-medium text-foreground/90" title={artifactType}>
{artifactType || t("emptyValue")}
</p>
</div>
</section>
<section className="space-y-1">
<p className="text-[10px] font-semibold uppercase tracking-wide text-muted-foreground">
{t("bodyLabel")}
</p>
{isSkeleton ? (
{isSkeleton ? (
<section className="space-y-1">
<p className="text-[10px] font-semibold uppercase tracking-wide text-muted-foreground">
{t("bodyLabel")}
</p>
<div
data-testid="agent-output-skeleton-body"
className="animate-pulse rounded-md border border-dashed border-amber-500/40 bg-gradient-to-r from-amber-500/10 via-amber-500/20 to-amber-500/10 p-3"
>
<p className="text-[11px] text-amber-800/90 dark:text-amber-200/90">{t("plannedContent")}</p>
</div>
) : formattedJsonBody ? (
</section>
) : hasStructuredOutput ? (
<>
{sections.length > 0 ? (
<section data-testid="agent-output-sections" className="space-y-1.5">
<p className="text-[10px] font-semibold uppercase tracking-wide text-muted-foreground">{t("sectionsLabel")}</p>
<div className="space-y-1.5">
{sections.map((section) => (
<div key={section.id} className="rounded-md border border-border/70 bg-background/70 p-2">
<p className="text-[11px] font-semibold text-foreground/90">{section.label}</p>
<p className="whitespace-pre-wrap break-words text-[12px] leading-relaxed text-foreground/90">
{section.content}
</p>
</div>
))}
</div>
</section>
) : null}
{metadataEntries.length > 0 ? (
<section data-testid="agent-output-metadata" className="space-y-1.5">
<p className="text-[10px] font-semibold uppercase tracking-wide text-muted-foreground">{t("metadataLabel")}</p>
<div className="space-y-1 text-[12px] text-foreground/90">
{metadataEntries.map(([key, value]) => (
<p key={key} className="break-words">
<span className="font-semibold">{key}</span>: {value}
</p>
))}
</div>
</section>
) : null}
{qualityChecks.length > 0 ? (
<section data-testid="agent-output-quality-checks" className="space-y-1.5">
<p className="text-[10px] font-semibold uppercase tracking-wide text-muted-foreground">{t("qualityChecksLabel")}</p>
<div className="flex flex-wrap gap-1.5">
{qualityChecks.map((qualityCheck) => (
<span
key={qualityCheck}
className="rounded-full border border-amber-500/40 bg-amber-500/10 px-2 py-0.5 text-[10px] font-medium text-amber-800 dark:text-amber-200"
>
{qualityCheck}
</span>
))}
</div>
</section>
) : null}
<section data-testid="agent-output-preview" className="space-y-1">
<p className="text-[10px] font-semibold uppercase tracking-wide text-muted-foreground">{t("previewLabel")}</p>
<div className="max-h-40 overflow-auto rounded-md border border-border/70 bg-background/70 p-3 text-[13px] leading-relaxed text-foreground/90">
<p className="whitespace-pre-wrap break-words">{previewText || t("previewFallback")}</p>
</div>
</section>
</>
) : formattedJsonBody ? (
<section className="space-y-1">
<p className="text-[10px] font-semibold uppercase tracking-wide text-muted-foreground">
{t("bodyLabel")}
</p>
<pre
data-testid="agent-output-json-body"
className="max-h-48 overflow-auto rounded-md border border-border/80 bg-muted/40 p-3 font-mono text-[11px] leading-relaxed text-foreground/95"
>
<code>{formattedJsonBody}</code>
</pre>
) : (
</section>
) : (
<section className="space-y-1">
<p className="text-[10px] font-semibold uppercase tracking-wide text-muted-foreground">
{t("bodyLabel")}
</p>
<div
data-testid="agent-output-text-body"
className="max-h-48 overflow-auto rounded-md border border-border/70 bg-background/70 p-3 text-[13px] leading-relaxed text-foreground/90"
>
<p className="whitespace-pre-wrap break-words">{body}</p>
</div>
)}
</section>
</section>
)}
</div>
</BaseNodeWrapper>
);
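
The `normalizeMetadata` helper above trims string values, joins string arrays with `", "`, and silently drops empty keys and non-string values. A standalone sketch of that behavior (assumed to mirror the helper; the input shape is illustrative):

```typescript
// Sketch of the metadata normalization: strings are trimmed, string
// arrays are joined with ", ", all other value types are dropped.
function normalizeMetadataSketch(
  raw: Record<string, unknown> | undefined,
): Array<[string, string]> {
  if (!raw || typeof raw !== "object" || Array.isArray(raw)) {
    return [];
  }
  const entries: Array<[string, string]> = [];
  for (const [rawKey, rawValue] of Object.entries(raw)) {
    const key = rawKey.trim();
    if (key === "") continue;
    if (typeof rawValue === "string") {
      const value = rawValue.trim();
      if (value !== "") entries.push([key, value]);
      continue;
    }
    if (Array.isArray(rawValue)) {
      const values = rawValue
        .filter((value): value is string => typeof value === "string")
        .map((value) => value.trim())
        .filter((value) => value !== "");
      if (values.length > 0) entries.push([key, values.join(", ")]);
    }
  }
  return entries;
}

// Hypothetical input for illustration only.
const input: Record<string, unknown> = {
  tone: " friendly ",
  channels: ["email", " push ", 42],
  "": "ignored",
  count: 3,
};
const result = normalizeMetadataSketch(input);
// result: [["tone", "friendly"], ["channels", "email, push"]]
```

Non-string array members (the `42`) and numeric values (`count`) are filtered out rather than coerced, which keeps the rendered metadata list predictable.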

View File

@@ -112,7 +112,7 @@ export function CreditsActivityChart({ balance, recentTransactions }: CreditsAct
<ChartTooltip
content={
<ChartTooltipContent
formatter={(value: number) => formatCredits(Number(value), locale)}
formatter={(value) => formatCredits(Number(value), locale)}
/>
}
/>

View File

@@ -0,0 +1,175 @@
// @vitest-environment jsdom
import React, { act } from "react";
import { createRoot, type Root } from "react-dom/client";
import { afterEach, beforeEach, describe, expect, it, vi } from "vitest";
const mocks = vi.hoisted(() => ({
useAuthQuery: vi.fn(),
resolveUrls: vi.fn(async () => ({})),
}));
vi.mock("convex/react", () => ({
useMutation: () => mocks.resolveUrls,
}));
vi.mock("@/hooks/use-auth-query", () => ({
useAuthQuery: (...args: unknown[]) => mocks.useAuthQuery(...args),
}));
vi.mock("@/components/ui/dialog", () => ({
Dialog: ({ open, children }: { open: boolean; children: React.ReactNode }) =>
open ? <div>{children}</div> : null,
DialogContent: ({ children }: { children: React.ReactNode }) => <div>{children}</div>,
DialogHeader: ({ children }: { children: React.ReactNode }) => <div>{children}</div>,
DialogTitle: ({ children }: { children: React.ReactNode }) => <h2>{children}</h2>,
DialogDescription: ({ children }: { children: React.ReactNode }) => <p>{children}</p>,
}));
import { MediaLibraryDialog } from "@/components/media/media-library-dialog";
(globalThis as typeof globalThis & { IS_REACT_ACT_ENVIRONMENT?: boolean }).IS_REACT_ACT_ENVIRONMENT = true;
function makeItems(count: number, page = 1) {
return Array.from({ length: count }).map((_, index) => ({
kind: "image" as const,
source: "upload" as const,
filename: `Item ${page}-${index + 1}`,
previewUrl: `https://cdn.example.com/${page}-${index + 1}.jpg`,
width: 1200,
height: 800,
createdAt: page * 1000 + index,
}));
}
describe("MediaLibraryDialog", () => {
let container: HTMLDivElement | null = null;
let root: Root | null = null;
beforeEach(() => {
mocks.useAuthQuery.mockReset();
mocks.resolveUrls.mockReset();
mocks.resolveUrls.mockImplementation(async () => ({}));
container = document.createElement("div");
document.body.appendChild(container);
root = createRoot(container);
});
afterEach(async () => {
if (root) {
await act(async () => {
root?.unmount();
});
}
container?.remove();
container = null;
root = null;
});
it("calls media library query with page and default pageSize 8", async () => {
mocks.useAuthQuery.mockReturnValue(undefined);
await act(async () => {
root?.render(
<MediaLibraryDialog open onOpenChange={() => undefined} kindFilter="image" />,
);
});
const firstCallArgs = mocks.useAuthQuery.mock.calls[0]?.[1];
expect(firstCallArgs).toEqual(
expect.objectContaining({
page: 1,
pageSize: 8,
kindFilter: "image",
}),
);
});
it("renders at most 8 cards and shows Freepik-style pagination footer", async () => {
mocks.useAuthQuery.mockReturnValue({
items: makeItems(10),
page: 1,
pageSize: 8,
totalPages: 3,
totalCount: 24,
});
await act(async () => {
root?.render(<MediaLibraryDialog open onOpenChange={() => undefined} />);
});
const cards = document.querySelectorAll("img[alt^='Item 1-']");
expect(cards).toHaveLength(8);
expect(document.body.textContent).toContain("Previous");
expect(document.body.textContent).toContain("Page 1 of 3");
expect(document.body.textContent).toContain("Next");
});
it("updates query args when clicking next and previous", async () => {
const responseByPage = new Map<number, {
items: ReturnType<typeof makeItems>;
page: number;
pageSize: number;
totalPages: number;
totalCount: number;
}>();
mocks.useAuthQuery.mockImplementation((_, args: { page: number; pageSize: number }) => {
if (!responseByPage.has(args.page)) {
responseByPage.set(args.page, {
items: makeItems(8, args.page),
page: args.page,
pageSize: args.pageSize,
totalPages: 3,
totalCount: 24,
});
}
return responseByPage.get(args.page);
});
await act(async () => {
root?.render(<MediaLibraryDialog open onOpenChange={() => undefined} />);
});
const nextButton = Array.from(document.querySelectorAll("button")).find(
(button) => button.textContent?.trim() === "Next",
);
if (!(nextButton instanceof HTMLButtonElement)) {
throw new Error("Next button not found");
}
await act(async () => {
nextButton.click();
});
const nextCallArgs = mocks.useAuthQuery.mock.calls.at(-1)?.[1];
expect(nextCallArgs).toEqual(expect.objectContaining({ page: 2, pageSize: 8 }));
const previousButton = Array.from(document.querySelectorAll("button")).find(
(button) => button.textContent?.trim() === "Previous",
);
if (!(previousButton instanceof HTMLButtonElement)) {
throw new Error("Previous button not found");
}
await act(async () => {
previousButton.click();
});
const previousCallArgs = mocks.useAuthQuery.mock.calls.at(-1)?.[1];
expect(previousCallArgs).toEqual(expect.objectContaining({ page: 1, pageSize: 8 }));
});
it("renders 8 loading skeleton cards", async () => {
mocks.useAuthQuery.mockReturnValue(undefined);
await act(async () => {
root?.render(<MediaLibraryDialog open onOpenChange={() => undefined} />);
});
expect(document.querySelectorAll(".aspect-square.animate-pulse.bg-muted")).toHaveLength(8);
});
});
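
The paging test above memoizes one response object per page in a `Map` so repeated renders of the mocked query receive a referentially stable value. The same idea as a reusable helper (a sketch under the assumption that per-page reference stability is all the mock needs; the helper name is not from the source):

```typescript
// Returns a getter that builds a response once per page and caches it,
// so a mocked query returns the identical object on every re-render.
function makePageCache<T>(build: (page: number) => T): (page: number) => T {
  const cache = new Map<number, T>();
  return (page) => {
    let hit = cache.get(page);
    if (hit === undefined) {
      hit = build(page);
      cache.set(page, hit);
    }
    return hit;
  };
}

const getPage = makePageCache((page) => ({ page, items: [`item-${page}-1`] }));
const a = getPage(2);
const b = getPage(2);
// a === b: the builder ran once for page 2, both calls share one object.
```

Without this caching, each render would produce a fresh object, which can defeat memoization in the component under test and mask or cause spurious re-render behavior.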

View File

@@ -20,9 +20,7 @@ import {
resolveMediaPreviewUrl,
} from "@/components/media/media-preview-utils";
const DEFAULT_LIMIT = 200;
const MIN_LIMIT = 1;
const MAX_LIMIT = 500;
const DEFAULT_PAGE_SIZE = 8;
export type MediaLibraryMetadataItem = {
kind: "image" | "video" | "asset";
@@ -54,18 +52,18 @@ export type MediaLibraryDialogProps = {
onPick?: (item: MediaLibraryItem) => void | Promise<void>;
title?: string;
description?: string;
limit?: number;
pageSize?: number;
kindFilter?: "image" | "video" | "asset";
pickCtaLabel?: string;
};
function normalizeLimit(limit: number | undefined): number {
if (typeof limit !== "number" || !Number.isFinite(limit)) {
return DEFAULT_LIMIT;
}
return Math.min(MAX_LIMIT, Math.max(MIN_LIMIT, Math.floor(limit)));
}
type MediaLibraryResponse = {
items: MediaLibraryMetadataItem[];
page: number;
pageSize: number;
totalPages: number;
totalCount: number;
};
function formatDimensions(width: number | undefined, height: number | undefined): string | null {
if (typeof width !== "number" || typeof height !== "number") {
@@ -128,20 +126,39 @@ export function MediaLibraryDialog({
onPick,
title = "Mediathek",
description,
limit,
pageSize = DEFAULT_PAGE_SIZE,
kindFilter,
pickCtaLabel = "Auswaehlen",
}: MediaLibraryDialogProps) {
const normalizedLimit = useMemo(() => normalizeLimit(limit), [limit]);
const [page, setPage] = useState(1);
const normalizedPageSize = useMemo(() => {
if (typeof pageSize !== "number" || !Number.isFinite(pageSize)) {
return DEFAULT_PAGE_SIZE;
}
return Math.max(1, Math.floor(pageSize));
}, [pageSize]);
useEffect(() => {
if (!open) {
setPage(1);
}
}, [open]);
useEffect(() => {
setPage(1);
}, [kindFilter]);
const metadata = useAuthQuery(
api.dashboard.listMediaLibrary,
open
? {
limit: normalizedLimit,
page,
pageSize: normalizedPageSize,
...(kindFilter ? { kindFilter } : {}),
}
: "skip",
);
) as MediaLibraryResponse | undefined;
const resolveUrls = useMutation(api.storage.batchGetUrlsForUserMedia);
const [urlMap, setUrlMap] = useState<Record<string, string | undefined>>({});
@@ -164,7 +181,7 @@ export function MediaLibraryDialog({
return;
}
const storageIds = collectMediaStorageIdsForResolution(metadata);
const storageIds = collectMediaStorageIdsForResolution(metadata.items);
if (storageIds.length === 0) {
setUrlMap({});
setUrlError(null);
@@ -206,12 +223,14 @@ export function MediaLibraryDialog({
return [];
}
return metadata.map((item) => ({
return metadata.items.map((item) => ({
...item,
url: resolveMediaPreviewUrl(item, urlMap),
}));
}, [metadata, urlMap]);
const visibleItems = useMemo(() => items.slice(0, normalizedPageSize), [items, normalizedPageSize]);
const isMetadataLoading = open && metadata === undefined;
const isInitialLoading = isMetadataLoading || (metadata !== undefined && isResolvingUrls);
const isPreviewMode = typeof onPick !== "function";
@@ -244,9 +263,9 @@ export function MediaLibraryDialog({
<div className="min-h-[320px] overflow-y-auto pr-1">
{isInitialLoading ? (
<div className="grid grid-cols-2 gap-3 sm:grid-cols-3 lg:grid-cols-4">
{Array.from({ length: 12 }).map((_, index) => (
<div key={index} className="overflow-hidden rounded-lg border">
<div className="grid grid-cols-2 gap-3 lg:grid-cols-4">
{Array.from({ length: DEFAULT_PAGE_SIZE }).map((_, index) => (
<div key={index} className="overflow-hidden rounded-lg border">
<div className="aspect-square animate-pulse bg-muted" />
<div className="space-y-1 p-2">
<div className="h-3 w-2/3 animate-pulse rounded bg-muted" />
@@ -270,8 +289,8 @@ export function MediaLibraryDialog({
</p>
</div>
) : (
<div className="grid grid-cols-2 gap-3 sm:grid-cols-3 lg:grid-cols-4">
{items.map((item) => {
<div className="grid grid-cols-2 gap-3 lg:grid-cols-4">
{visibleItems.map((item) => {
const itemKey = getItemKey(item);
const isPickingThis = pendingPickItemKey === itemKey;
const itemLabel = getItemLabel(item);
@@ -347,6 +366,30 @@ export function MediaLibraryDialog({
</div>
)}
</div>
{metadata && !isInitialLoading && !urlError && items.length > 0 ? (
<div className="flex shrink-0 items-center justify-center gap-2 border-t px-5 py-3" aria-live="polite">
<Button
variant="outline"
size="sm"
onClick={() => setPage((current) => Math.max(1, current - 1))}
disabled={page <= 1}
>
Previous
</Button>
<span className="text-xs text-muted-foreground">
Page {metadata.page} of {metadata.totalPages}
</span>
<Button
variant="outline"
size="sm"
onClick={() => setPage((current) => Math.min(metadata.totalPages, current + 1))}
disabled={page >= metadata.totalPages}
>
Next
</Button>
</div>
) : null}
</DialogContent>
</Dialog>
);
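
The dialog's paging rests on two small invariants: the requested page size is floored and kept at 1 or more (falling back to the default of 8 for non-finite input), and the current page is clamped to `[1, totalPages]` by the Previous/Next handlers. A standalone sketch of both (helper names are illustrative, not from the source):

```typescript
// Mirrors the normalizedPageSize memo: non-finite input falls back to
// the default, everything else is floored and clamped to >= 1.
function normalizePageSize(pageSize: number | undefined, fallback = 8): number {
  if (typeof pageSize !== "number" || !Number.isFinite(pageSize)) {
    return fallback;
  }
  return Math.max(1, Math.floor(pageSize));
}

// Mirrors the Previous/Next handlers: the page never leaves [1, totalPages].
function clampPage(page: number, totalPages: number): number {
  return Math.min(Math.max(1, totalPages), Math.max(1, page));
}
```

Clamping in the click handlers (rather than trusting the buttons' `disabled` state alone) keeps the state valid even if `totalPages` shrinks between renders, e.g. after a filter change.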