Overview

Riffer is a Ruby framework for building AI-powered applications and agents. It provides a complete toolkit for integrating Large Language Models (LLMs) into your Ruby projects.

Core Concepts

Agent

The Agent is the central orchestrator for AI interactions. It manages messages, calls the LLM provider, and handles tool execution.

class MyAgent < Riffer::Agent
  model 'openai/gpt-4o'
  instructions 'You are a helpful assistant.'
end
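
The model and instructions lines are class-level macros that store configuration on the agent class. As a rough illustration of how such a DSL works in plain Ruby (a simplified sketch, not Riffer's actual implementation; BaseAgent and GreeterAgent are made-up names):

```ruby
class BaseAgent
  class << self
    # Called with an argument, stores the value; called with none, reads it back.
    def model(name = nil)
      name ? @model = name : @model
    end

    def instructions(text = nil)
      text ? @instructions = text : @instructions
    end
  end
end

class GreeterAgent < BaseAgent
  model "openai/gpt-4o"
  instructions "You are a helpful assistant."
end

GreeterAgent.model  # => "openai/gpt-4o"
```

Because the values live on each subclass, every agent class carries its own model and instructions.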

See Agents for details.

Tool

Tools are callable functions that agents can invoke to interact with external systems. They have structured parameter definitions and automatic validation.

class WeatherTool < Riffer::Tool
  description "Gets the weather for a city"

  params do
    required :city, String, description: "The city name"
  end

  def call(context:, city:)
    WeatherAPI.fetch(city)
  end
end
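
Outside an agent run, a tool's call method can also be exercised directly, which is handy in unit tests. A standalone sketch (WeatherAPI is stubbed here and the Riffer::Tool base class is omitted so it runs without the gem; passing nil for context: is an assumption for illustration):

```ruby
# Stub standing in for a real weather client, so this sketch runs standalone.
module WeatherAPI
  def self.fetch(city)
    { city: city, conditions: "sunny" }
  end
end

# Mirrors the tool above, minus the Riffer::Tool base class and params DSL.
class WeatherTool
  def call(context:, city:)
    WeatherAPI.fetch(city)
  end
end

result = WeatherTool.new.call(context: nil, city: "Paris")
result[:conditions]  # => "sunny"
```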

See Tools for details.

Structured Output

Agents can return structured JSON responses that conform to a schema. The response is automatically parsed and validated. Schemas support nested objects (Hash), typed arrays (Array, of:), and arrays of objects (Array with block):

class SentimentAgent < Riffer::Agent
  model 'openai/gpt-4o'
  structured_output do
    required :sentiment, String
    required :score, Float
    required :entities, Array, description: "Named entities" do
      required :name, String
      required :type, String, enum: ["person", "place", "org"]
    end
  end
end

response = SentimentAgent.generate('Analyze: "I love this!"')
response.structured_output  # => {sentiment: "positive", score: 0.95, entities: [...]}

See the structured output section in Agents for details.

Provider

Providers are adapters that connect agents to LLM services, translating requests into each vendor's API format.

See Providers for details.

Messages

Messages represent the conversation between the user and the assistant. Riffer uses strongly-typed message objects.
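
To illustrate what typed messages enable (the class names below are illustrative stand-ins, not Riffer's actual message types), code can dispatch on a message's class instead of inspecting a role string:

```ruby
# Illustrative stand-ins; see Messages for Riffer's real types.
UserMessage      = Struct.new(:content)
AssistantMessage = Struct.new(:content)

def render(message)
  case message
  when UserMessage      then "user: #{message.content}"
  when AssistantMessage then "assistant: #{message.content}"
  else raise ArgumentError, "unknown message type: #{message.class}"
  end
end

render(UserMessage.new("Hello"))  # => "user: Hello"
```

An unrecognized object raises immediately instead of silently passing through as a malformed hash.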

See Messages for details.

Stream Events

When streaming responses, Riffer emits typed events as output arrives.
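
As an illustration of consuming typed events (the event classes below are stand-ins, not Riffer's actual event types), a handler can accumulate text deltas while reacting differently to other kinds of events:

```ruby
# Stand-in event types; see Stream Events for the real ones.
TextDelta    = Struct.new(:text)
StreamFinish = Struct.new(:reason)

events = [TextDelta.new("Hel"), TextDelta.new("lo"), StreamFinish.new("stop")]

buffer = +""
events.each do |event|
  case event
  when TextDelta    then buffer << event.text  # append partial text
  when StreamFinish then nil                   # stream is complete
  end
end

buffer  # => "Hello"
```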

See Stream Events for details.

Architecture

User Request
     |
     v
+------------+
|   Agent    |  <-- Manages conversation flow
+------------+
     |
     v
+------------+
|  Provider  |  <-- Calls LLM API
+------------+
     |
     v
+------------+
|    LLM     |  <-- Returns response
+------------+
     |
     v
+------------+
|   Tool?    |  <-- Execute if tool call present
+------------+
     |
     v
Response
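
In most agent frameworks, a tool's result is fed back to the provider so the LLM can produce the final answer; the flow above can be sketched as a loop in plain Ruby (a simplified model with a fake provider and tool, not Riffer's actual implementation):

```ruby
# Fake provider: the first reply requests a tool, the second is the final answer.
class FakeProvider
  def initialize
    @replies = [
      { tool_call: { name: "weather", args: { city: "Paris" } } },
      { text: "It is sunny in Paris." }
    ]
  end

  def complete(_messages)
    @replies.shift
  end
end

# The agent loop: call the provider, execute tools, repeat until a text reply.
def run_agent(provider, tools, user_text)
  messages = [{ role: "user", content: user_text }]
  loop do
    reply = provider.complete(messages)
    if (tc = reply[:tool_call])
      result = tools.fetch(tc[:name]).call(**tc[:args])
      messages << { role: "tool", content: result.to_s }
    else
      return reply[:text]
    end
  end
end

weather_tool = ->(city:) { "sunny" }
run_agent(FakeProvider.new, { "weather" => weather_tool }, "Weather in Paris?")
# => "It is sunny in Paris."
```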

Next Steps