Advanced Case 2: Tech Teams Standardize Reporting Metrics

Overview

The basic guide shows simple query scenarios. But in real project management, different roles need very different views of the same data — product managers want "requirement distribution by project," developers want "status of my assigned requirements," and executives want "overall progress and delay risks." Having AI interpret the metric and generate SQL on the fly is inefficient and error-prone.

This case study demonstrates: when reporting logic is complex and metrics need to be consistent, how the tech team can wrap SQL into an endpoint, and how the AI loads a Skill to return standardized results with a single sentence.


Business Scenario: Why Standardize Metrics

The Cost of Not Standardizing

When AI generates SQL on the fly, the same question can produce different answers each time:

  • First time you ask "new requirements this week," AI interprets it as "creation time"
  • Second time you ask the same thing, AI might use "update time" to define "new"
  • Third month, business adjusts the metric definition, and AI's interpretation shifts again

The result: same metric, three different numbers. Management can't compare trends, and decisions lose their foundation.

The Value of Standardized Metrics

Consistency: No matter who asks, when, or how many times — the number is always the same. This is a fundamental requirement for management reporting.

Auditability: When leadership asks "where does this number come from?", you can trace it to a specific SQL file — not "what the AI generated at the time."

Clear ownership: SQL metrics are managed by the tech team, Skills are executed by AI, and metric changes go through a proper change process — no reliance on AI's improvisation.

Why It Matters for the Enterprise

Enterprises make decisions based on data. Inconsistent data is worse than no data — wrong numbers lead people in wrong directions more easily than no numbers at all.

Once metrics are standardized, they're no longer "one-off query results" — they become enterprise data assets:

  • New hires see the same numbers as everyone else
  • Monthly and quarterly reports have consistent baselines, making trend analysis meaningful
  • Cross-department collaboration doesn't produce conflicting numbers for the same metric

This isn't a technology issue — it's a management issue. Standardizing metrics is the prerequisite for data to truly serve as a basis for decisions.


Technical Implementation: Building the SQL Statistics Endpoint

The architecture has two layers with clear separation of concerns:

┌────────────────────────────────────┐
│ Layer 1: Custom SQL (registered)   │
│ SQL template with #{param}         │
└──────────────┬─────────────────────┘
               │ yields sqlCode
               ▼
┌────────────────────────────────────┐
│ Layer 2: Backend Function          │
│ Calls sql.execute(sqlCode, params) │
│ Adds pre-validation, post-format   │
└──────────────┬─────────────────────┘
               │ yields endpoint path
               ▼
        AI calls the endpoint

Step 1: Register a Custom SQL on the Platform

The tech team creates a custom SQL on the Lovrabet platform, which assigns a sqlCode:

-- sqlCode: get_requirement_stats
-- Params: start_date (window start, computed by the BF from time_range),
--         project_id (optional; NULL means all projects)

SELECT
  p.name AS project_name,
  COUNT(CASE WHEN r.status = 'new' THEN 1 END) AS new_count,
  COUNT(CASE WHEN r.status = 'in_progress' THEN 1 END) AS in_progress_count,
  COUNT(CASE WHEN r.status = 'completed' THEN 1 END) AS completed_count,
  COUNT(CASE WHEN r.status = 'closed' THEN 1 END) AS closed_count
FROM yt_requirements r
JOIN yt_projects p ON r.project_id = p.id
WHERE r.updated_at >= #{start_date}
  AND (#{project_id} IS NULL OR r.project_id = #{project_id})
GROUP BY p.name

Step 2: Build a Backend Function to Call the SQL

The BF calls the SQL and handles post-processing — it doesn't contain SQL logic itself:

/**
 * Requirement Statistics Endpoint
 * Script name: get_requirement_stats
 * App: app-173e8652
 */
export default async function getRequirementStats(params, context) {
  const { project_id, time_range } = params;

  // Calculate the window start date from the requested time range
  const now = new Date();
  let startDate;
  if (time_range === "this_month") {
    startDate = new Date(now.getFullYear(), now.getMonth(), 1).toISOString();
  } else if (time_range === "this_quarter") {
    const quarter = Math.floor(now.getMonth() / 3);
    startDate = new Date(now.getFullYear(), quarter * 3, 1).toISOString();
  } else {
    // Default: this week (Sunday-based, per Date#getDay)
    const dayOfWeek = now.getDay();
    startDate = new Date(
      now.getFullYear(), now.getMonth(), now.getDate() - dayOfWeek
    ).toISOString();
  }

  // Call the registered custom SQL
  const result = await context.client.sql.execute({
    sqlCode: "get_requirement_stats",
    params: {
      start_date: startDate,
      project_id: project_id || null,
    },
  });

  if (!result.execSuccess || !result.execResult) {
    throw new Error("SQL execution failed: " + (result.errorMsg || "unknown error"));
  }

  // Post-processing: compute a totals row across all projects
  const rows = result.execResult;
  const totals = rows.reduce(
    (acc, row) => ({
      new_count: acc.new_count + Number(row.new_count || 0),
      in_progress_count: acc.in_progress_count + Number(row.in_progress_count || 0),
      completed_count: acc.completed_count + Number(row.completed_count || 0),
      closed_count: acc.closed_count + Number(row.closed_count || 0),
    }),
    { new_count: 0, in_progress_count: 0, completed_count: 0, closed_count: 0 }
  );

  return {
    success: true,
    data: rows,
    totals: { project_name: "Total", ...totals },
  };
}

Registration Path

After registration, the BF is accessible via:

POST /api/endpoint/{appCode}/{scriptName}
// Full path: POST /api/endpoint/app-173e8652/get_requirement_stats

Prerequisites

Before development, the tech team should confirm:

  • App app-173e8652 exists
  • yt_requirements (requirements table) and yt_projects (projects table) are created and populated
  • Custom SQL get_requirement_stats is registered on the platform
  • The operator has read permissions for these tables

Skill Encapsulation: One Sentence to Trigger Statistics

Once the endpoint is ready, encapsulate the flow into a Skill. The AI loads it and automatically: identifies the reporting intent → calls the endpoint → formats the result.

Skill Template

---
name: get_requirement_stats
version: 0.1.0
description: "Requirement statistics query. Trigger words: show me this week's requirements, requirement status by project, requirement progress report."
---

# Requirement Statistics Query

## Step 1: Identify the metric scope

Based on user input, determine:
- Time range: this week / this month / this quarter (default: this week)
- Project scope: specific project or all
- Role context: if the user's project or module is available, auto-apply the filter

If the user doesn't specify, ask before proceeding.

## Step 2: Call the statistics endpoint

Call the get_requirement_stats endpoint (/api/endpoint/app-173e8652/get_requirement_stats) with:
- project_id (if any)
- time_range (this week / this month / this quarter)

If the endpoint returns failure, inform the user of the reason and stop.

## Step 3: Format the results

Format the endpoint response as a table or list:
- Show status distribution per project in columns
- Include a totals row
- Right-align numbers, left-align text

## Notes

- Don't explain the SQL logic — just present the final numbers
- If a project has no data, show 0 instead of blank
- Adjust emphasis based on the user's role (executive / product / engineering)

Usage Example

User Input

Show me this week's requirement completion status across all projects

AI Automatically

Loads Skill → Identifies time range (this week) + project scope (all) → Calls get_requirement_stats endpoint → Formats the response as a table

Example Output

✅ This Week's Requirement Statistics
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Project          New  In Progress  Completed  Closed
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Platform & Docs    3            2          5       1
Agent Runtime      7            4          3       0
Rabetbase-CLI      2            1          6       2
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Total             12            7         14       3
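A table like the one above can be produced mechanically from the endpoint response. A sketch of the alignment rule from the Skill's Step 3 (names left-aligned, counts right-aligned); `renderStatsTable`, the column widths, and the header labels are illustrative choices, not part of the Skill spec:

```javascript
// Render endpoint rows as a fixed-width text table: project names
// left-aligned, counts right-aligned under their headers. The column
// keys mirror the endpoint response fields.
function renderStatsTable(rows, totals) {
  const headers = ["Project", "New", "In Progress", "Completed", "Closed"];
  const cols = ["new_count", "in_progress_count", "completed_count", "closed_count"];
  const line = (name, r) =>
    [name.padEnd(16)]
      .concat(cols.map((c, i) => String(r[c]).padStart(headers[i + 1].length)))
      .join("  ");
  const out = [headers[0].padEnd(16) + "  " + headers.slice(1).join("  ")];
  for (const r of rows) out.push(line(r.project_name, r));
  out.push(line(totals.project_name, totals));
  return out.join("\n");
}
```

Given the BF's return value, `renderStatsTable(result.data, result.totals)` yields a printable string with every line the same width.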

FAQ

Q: What if the metric definition changes?

The tech team updates the custom SQL on the platform. The BF script and Skill don't need changes — results sync automatically.

Q: What if we need to add more reporting dimensions?

Add fields to the custom SQL — for example, "overdue requirement count" or "requirements in testing." The Skill just handles the display.

Q: Can we query at an individual level?

Yes. Add a WHERE r.assignee_id = #{user_id} condition to the custom SQL. The Skill can recognize context like "my requirements."

Q: Will the endpoint be slow with large datasets?

For reporting endpoints, consider adding caching on the platform (e.g., this week's metric doesn't change within the week). The tech team should evaluate the specific approach.
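One way to realize that suggestion, if platform-level caching is unavailable, is a TTL cache keyed by the query parameters, wrapped around whatever function calls the endpoint. A minimal in-memory sketch (`withTtlCache` is a hypothetical helper; it is single-process, assumes stable key order in the params object, and evicts only on expiry):

```javascript
// Wrap an async fetcher with an in-memory TTL cache keyed by the
// JSON-serialized params. Illustrative only; a real deployment would
// likely use shared or platform-level caching.
function withTtlCache(fetcher, ttlMs) {
  const cache = new Map();
  return async (params) => {
    const key = JSON.stringify(params);
    const hit = cache.get(key);
    if (hit && Date.now() - hit.at < ttlMs) return hit.value; // cache hit
    const value = await fetcher(params);
    cache.set(key, { at: Date.now(), value });
    return value;
  };
}
```

For example, wrapping the statistics call with a one-hour TTL means repeated "this week" questions within the hour reuse one SQL execution, which also keeps the number everyone sees identical during that window.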


Next Steps

When business users frequently request a particular report, the product/tech team can:

  1. Align with stakeholders on the metric definition
  2. Register a custom SQL on the platform, build a BF, and register it in Lovrabet
  3. Load the corresponding Skill on the AI side — one sentence to trigger