
Using Rabetbase in Claude Code: From a Sentence to a Complete Feature

This article answers a practical question: after opening Claude Code, how do you get the AI to help you complete real development tasks with Rabetbase? You will see the entire process, from a single sentence to a complete feature -- including how the AI automatically selects commands, looks up fields, and ensures correctness.


Before You Start: 30-Second Checklist

Make sure all of the following conditions are met; otherwise, the AI will not work properly:

  • CLI installed: rabetbase --version outputs a version number (>= 2.0)
  • Skill installed: npx skills add lovrabet/rabetbase --global (one-time setup)
  • Authenticated: rabetbase auth login completes OAuth in the browser
  • Project initialized: the project directory contains .rabetbase.json (generated by rabetbase init)

Not sure if the environment is healthy? Run rabetbase doctor for a one-click diagnostic.

For detailed installation steps, see: Rabetbase CLI 2.0 Quick Start Guide.


First Conversation: Exploring Data

Start with the simplest scenario: say one sentence in Claude Code and see what happens.

In Claude Code, type: Show me the datasets in this application

What the AI Does Automatically

Claude Code reads the Skill's command manual and knows that "list datasets" should use rabetbase dataset list. Then it automatically executes:

rabetbase dataset list --format compress

The AI automatically adds --format compress (single-line compact JSON). You didn't ask for this -- the Skill taught it to do so because the compressed format saves about 60% of tokens, giving the AI more room in its context window.
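The token saving comes purely from formatting, not content. A minimal TypeScript sketch (using a hypothetical dataset payload, not actual rabetbase output) shows the difference between the two formats:

```typescript
// Illustrative only: why single-line JSON is smaller than indented JSON.
// The dataset shape below is a hypothetical example.
const sampleDatasets = [
  { code: "customers", name: "Customer", id: "dataset_abc123" },
  { code: "orders", name: "Order", id: "dataset_def456" },
];

const compact = JSON.stringify(sampleDatasets);         // --format compress style
const pretty = JSON.stringify(sampleDatasets, null, 2); // --format json style
```

Both strings parse to identical objects; only the whitespace differs, and that whitespace is exactly where the token saving comes from.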

After getting the results, the AI presents the dataset list to you:

Found 5 datasets:
1. customers (Customer) - dataset_abc123
2. orders (Order) - dataset_def456
3. order_items (Order Items) - dataset_ghi789
4. products (Product) - dataset_jkl012
5. users (User) - dataset_mno345

Follow-up Question

You ask: What fields does the customers dataset have?

The AI automatically executes:

rabetbase dataset detail --code <code> --format compress

It returns the complete field structure -- field names, types, required status, and enum values, all at a glance.

How Is This Different from Manual Operation?

The manual approach requires opening a browser, logging into the admin console, navigating to the dataset management page, searching the list for the customers dataset, and clicking through to view field details. This takes 3-5 minutes.

With Claude Code + Skill, you say one sentence, the AI automatically executes the correct command, and the result appears directly in the conversation. This takes about 10 seconds.


Intermediate: Creating a SQL Query

This time, let the AI complete a multi-step task -- writing a custom SQL query.

In Claude Code, type: Write a SQL query that summarizes order amounts by month, with year filtering support

The AI's Complete Workflow

The AI doesn't just start writing SQL from the requirement. The Skill teaches it a strict SOP:

Step 1: Check dataset structure (never guess field names)

rabetbase dataset detail --code <code> --format compress

The AI gets the real fields of the orders table: order_date (not date), total_amount (not amount), status (enum: pending/paid/completed/cancelled). Field names are real, not guessed.

Without the Skill, the AI would guess field names -- roughly 3 out of 10 SQL statements would have correct field names. With the Skill, the AI is required to check dataset detail first and get real field names from the response. Accuracy jumps from 30% to nearly 100%.

Step 2: Check if similar SQL already exists

rabetbase sql list --name "order"

See if there's anything reusable. If not, create a new one.

Step 3: Write the SQL and validate

After writing the SQL, the AI uses rabetbase sql validate to check the syntax. If validation fails (misspelled field names, syntax errors, dangerous statements), the AI automatically corrects and resubmits.
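The validate-and-correct loop in this step can be sketched in TypeScript. Here `validate` and `fix` are hypothetical stand-ins for the `rabetbase sql validate` round trip, not real CLI bindings:

```typescript
// Sketch of the step-3 loop: write SQL, validate, correct, resubmit.
// `validate` returns error messages; `fix` produces a corrected draft.
function writeUntilValid(
  draft: string,
  validate: (sql: string) => string[],
  fix: (sql: string, errors: string[]) => string,
  maxAttempts = 3,
): string {
  let sql = draft;
  for (let i = 0; i < maxAttempts; i++) {
    const errors = validate(sql);
    if (errors.length === 0) return sql; // validation passed
    sql = fix(sql, errors);              // AI corrects and resubmits
  }
  throw new Error("SQL still invalid after retries");
}
```

The point of the loop is that errors never reach the platform: the AI keeps the draft local until validation passes.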

Step 4: Save to the platform

rabetbase sql save --file ./monthly-order-summary.sql

sql save has built-in validation that cannot be skipped -- DELETE, DROP, and other dangerous statements are automatically blocked.
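A naive sketch of this kind of gate follows. The real validation lives inside the CLI and is certainly more thorough; the statement list and regex here are illustrative assumptions:

```typescript
// Hypothetical sketch of blocking dangerous statements before save.
// Naive check: flags any line that starts with a destructive keyword.
const DANGEROUS = /^\s*(DELETE|DROP|TRUNCATE|ALTER)\b/im;

function isAllowed(sql: string): boolean {
  return !DANGEROUS.test(sql);
}
```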

Step 5: Test execution

rabetbase sql exec --sqlcode <sqlcode> --params '{"year":"2025"}'

The AI verifies the query results are correct, then tells you the SQL is ready to be called from your frontend code.

What Did You Do During This Process?

You said one sentence. The AI automatically executed the five-step workflow -- check fields, check existing SQL, write and validate SQL, save, and test. The entire process follows the platform's SQL development standards.

This is the core value of the Skill: it encodes "best practices" into an SOP that the AI can follow. The AI isn't working because it's "smart" -- it follows the process defined by the Skill step by step, so the results are stable and reliable.


Full Practical Example: From Requirement to Feature Launch

This time, an end-to-end scenario -- from understanding the requirement to page generation to build and deployment.

In Claude Code, type: Use the Rabetbase CLI to create a customer list page with search and pagination support

The AI's Full Process

  • Data exploration (dataset list, then dataset detail): first find the customer dataset, then query the full field structure (field names, types, enum values, relationships)
  • API pull (rabetbase api pull): generate TypeScript SDK client code, ready to import and use in the project
  • Code generation (AI writes code based on SDK specs from the Skill): uses real field names and API calling conventions -- obtained in the data exploration step, not guessed
  • Local preview (rabetbase run start): start the local dev server for live preview
  • Build (rabetbase run build): build the production bundle; the CLI automatically handles micro-frontend integration config
  • Menu sync (rabetbase menu sync): automatically scan pages and sync them to the main app menu, so users can click to access them

What Does the Generated Code Look Like?

The AI-generated customer list page has core code similar to:

// Real fields from dataset detail:
// name (string), phone (string), company (string),
// status (enum: potential | active | lost)

const result = await client.models.customers.filter({
  where: {
    name: { $contain: searchKeyword },  // use $contain for fuzzy search
    status: { $eq: selectedStatus },    // use $eq for exact match
  },
  select: ["id", "name", "phone", "company", "status"],
  orderBy: [{ createTime: "desc" }],
  currentPage: page,
  pageSize: 20,
});

Note several details. None of these are arbitrary choices by the AI; each is a standard convention defined by the Skill:

  • Parameter names use select (not fields), orderBy (not sort), currentPage/pageSize (not page/limit)
  • Fuzzy search uses $contain, exact match uses $eq -- not LIKE '%xxx%'
  • Field names are name, phone, status -- obtained from the dataset detail response, not guessed
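Assuming `$contain` means substring match and `$eq` means strict equality (a plausible reading of the conventions above; the Skill's SDK guide is the authoritative definition), the operator semantics can be sketched as:

```typescript
// Illustrative semantics for the two where-operators, under the
// assumption that $contain = substring match and $eq = strict equality.
type Condition = { $contain?: string } | { $eq?: string };

function matches(value: string, cond: Condition): boolean {
  if ("$contain" in cond && cond.$contain !== undefined) {
    return value.includes(cond.$contain); // fuzzy: substring match
  }
  if ("$eq" in cond && cond.$eq !== undefined) {
    return value === cond.$eq;            // exact match
  }
  return true;                            // no constraint
}
```

This is also why the conventions matter: a `LIKE '%xxx%'` string would push the match logic into hand-written SQL, while `$contain` keeps it declarative and portable.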

Final Result

Users see "Customer List" in the main app menu. Clicking it loads your sub-application page. Search, filtering, and pagination all work correctly -- data comes from real datasets, not mock data.


Why the Skill Makes the AI "Smart"

You might wonder: it's the same Claude Code, so what's the difference with and without the Skill?

Without the Skill:

  • The AI guesses field names -- customer_name or name or customerName? About 30% chance of guessing correctly
  • The AI guesses parameter formats -- page=1&size=20 or currentPage=1&pageSize=20? When unsure, it tries both
  • The AI doesn't know SQL needs validation first -- it writes the SQL and calls the API directly, getting a bunch of errors when field names are wrong
  • The AI isn't sure when to use filter and when to write SQL -- it relies on "intuition"
  • The AI tries to call HTTP APIs directly -- assembling URLs, handling auth, handling errors, each step can go wrong

With the Skill:

  • The AI first runs dataset detail to get real field names -- accuracy is nearly 100%
  • The AI uses the parameter format defined by the Skill: select, orderBy, currentPage, pageSize -- no ambiguity
  • The AI follows the process strictly: first sql validate, then sql save -- syntax errors are automatically caught
  • The AI selects solutions by priority: filter -> aggregate -> SQL -> BFF -- no random guessing
  • The AI operates through CLI commands -- auth, error handling, and risk control are all built in
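The priority order in the fourth point can be sketched as a decision function. The capability flags below are hypothetical illustrations, not the Skill's actual criteria:

```typescript
// Sketch of the filter -> aggregate -> SQL -> BFF priority order.
// The Need flags are invented for illustration; the real decision
// procedure is defined by the Skill's SOP rules.
interface Need {
  simpleConditions: boolean; // expressible as plain where-conditions
  groupedStats: boolean;     // needs grouping / aggregation
  crossTableJoin: boolean;   // needs multi-table SQL
}

function pickSolution(need: Need): "filter" | "aggregate" | "sql" | "bff" {
  if (need.simpleConditions && !need.groupedStats && !need.crossTableJoin) return "filter";
  if (need.groupedStats && !need.crossTableJoin) return "aggregate";
  if (need.crossTableJoin) return "sql";
  return "bff"; // custom server-side logic as the last resort
}
```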

What's Inside the Skill?

Rabetbase Skill 2.0 contains three layers:

  • Command Manual: complete parameters, risk levels, output formats, and use cases for 35+ commands. The AI reads the relevant manual before executing any command, ensuring correct parameters and controllable risk.
  • Cross-Domain Guides: 10 topic guides covering complete SDK usage, SQL MyBatis syntax, BFF script conventions, frontend page development constraints, data-interface best practices, conflict resolution, troubleshooting, and more. The AI references these conventions when generating code.
  • SOP Rules: no guessing field names, no skipping validation, no manually assembling URLs, and solution selection by priority. These "prohibitions" and "requirements" move the AI's behavior from "probably correct" to "almost never wrong".

Security guardrails: The Skill also defines risk levels -- read-only commands can be executed directly by the AI, write operations get a --dry-run preview first, and high-risk operations require your confirmation. The AI cannot escalate its own permission level (riskLevel can only be manually changed in the config file by a human).
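The three-tier gate can be sketched as follows. The function and its exact gating logic are assumptions for illustration; the real behavior is driven by the riskLevel configuration in the CLI:

```typescript
// Hypothetical sketch of the three-tier risk guardrail:
// read runs directly, write previews first, high-risk waits for a human.
type RiskLevel = "read" | "write" | "high-risk-write";

function planExecution(risk: RiskLevel, userConfirmed: boolean): string[] {
  switch (risk) {
    case "read":
      return ["execute"];                // safe to run directly
    case "write":
      return ["dry-run", "execute"];     // preview before writing
    case "high-risk-write":
      return userConfirmed ? ["dry-run", "execute"] : ["ask-user"];
  }
}
```

Note that nothing in the sketch lets the caller change `risk` itself -- mirroring the rule that riskLevel can only be edited by a human in the config file.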


AI-Friendly Parameter Quick Reference

The following parameters are what the Skill teaches the AI to use. You generally don't need to worry about them during manual operations -- but understanding them helps explain the AI's behavior:

  • --format compress: single-line compact JSON, semantically identical to json. Saves roughly 60% of tokens, giving the AI more context-window room.
  • --format json: indented JSON that is easy for humans to read. Used during debugging, when a clear output structure matters.
  • --jq '.data[]': extracts a subset of results using a jq expression. When the returned data is large, the AI takes only the needed portion to avoid flooding its context.
  • --dry-run: previews changes without actually executing them. A safety net before write operations -- look before you leap.
  • --yes: skips interactive confirmation. For CI/CD pipelines or other trusted scenarios.

About rabetbase schema: If the AI is unsure what subcommands or parameters a command has, it will first run rabetbase schema (which exports machine-readable contract data sourced from --help). This command doesn't require login and is a cheap way to obtain the full command metadata.


More Prompt Templates

Here are some prompts you can copy directly into Claude Code, covering common development scenarios:

Data Exploration:

  • "Show me the datasets in this application and their relationships"
  • "What fields does the customers dataset have? Which ones are enums? What are the enum values?"
  • "Pull the latest API client code for me"

SQL Development:

  • "Write a SQL query that counts total orders and total amount per customer, sorted by amount descending"
  • "Is there any existing SQL related to order statistics? If so, show me the details"
  • "This SQL is throwing an error when executed, help me find the problem"

BFF Development:

  • "Create a BFF standalone endpoint that accepts an order ID parameter and returns order details (including customer info and item details)"
  • "Create a pre-validation function for a filter operation that checks the phone number format must be 11 digits"

Page Development:

  • "Create an order list page with status filtering and date range filtering, with pagination"
  • "Create a new customer form with phone number format validation, returning to the list after submission"
  • "Create an order submission page with a main form for customer info and a detail table where product rows can be dynamically added or removed, submitted all at once"

Build and Deploy:

  • "Build the project and sync the menu to the main application"
  • "Check the menu sync status and see if any pages are missing"

Related Documentation

  • Rabetbase CLI 2.0 Quick Start Guide: install the CLI, install the Skill, authenticate, and initialize a project
  • Command Reference: grouped command overview and parameter details
  • Risk Level Complete Guide: detailed explanation of the three-tier read / write / high-risk-write risk control
  • Configuration Reference: complete field reference for .rabetbase.json