LlamaIndex + OrcaRouter
LlamaIndex's OpenAI LLM class accepts api_base and api_key overrides. Route indexing, query, and synthesis calls through OrcaRouter for zero markup and automatic failover across providers.
Setup
Five steps.
1. Install: pip install llama-index-llms-openai
2. Import OpenAI from llama_index.llms.openai
3. Construct with api_base='https://api.orcarouter.ai/v1' and api_key='sk-orca-…' (or set the standard environment variables; see the sketch after this list)
4. Assign to Settings.llm so every query engine picks it up.
5. Build indices and query as usual — synthesis routes through OrcaRouter.
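If you prefer not to hard-code credentials, LlamaIndex's OpenAI class can also resolve the base URL and key from the standard OpenAI environment variables. A minimal sketch, assuming your installed llama-index version honors OPENAI_API_BASE and OPENAI_API_KEY (the key below is a placeholder):

import os

# Assumption: llama-index's OpenAI class falls back to these standard
# environment variables when api_base / api_key are not passed explicitly.
os.environ["OPENAI_API_BASE"] = "https://api.orcarouter.ai/v1"
os.environ["OPENAI_API_KEY"] = "sk-orca-..."  # placeholder, not a real key

from llama_index.llms.openai import OpenAI
from llama_index.core import Settings

Settings.llm = OpenAI(model="gpt-4o")  # base URL and key come from the env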
Configuration
from llama_index.llms.openai import OpenAI
from llama_index.core import Settings

Settings.llm = OpenAI(
    api_base="https://api.orcarouter.ai/v1",
    api_key="sk-orca-...",
    model="gpt-4o",
)
# Now every query engine, agent, and chat engine uses OrcaRouter.
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader
docs = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(docs)
response = index.as_query_engine().query("Summarize the key points.")

Why route LlamaIndex through OrcaRouter?
RAG pipelines make many small calls per query (retrieve → rerank → synthesize). OrcaRouter's per-request routing means each of those calls independently picks the cheapest healthy backend, and you see the full breakdown in one dashboard.
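You can watch that call pattern from the client side with LlamaIndex's TokenCountingHandler. A minimal sketch, assuming the Settings.llm configuration above; tree_summarize is used here only to make the multiple synthesis calls visible:

from llama_index.core import Settings, VectorStoreIndex, SimpleDirectoryReader
from llama_index.core.callbacks import CallbackManager, TokenCountingHandler

# Record every LLM call the pipeline makes, mirroring the per-request
# breakdown OrcaRouter reports in its dashboard.
token_counter = TokenCountingHandler()
Settings.callback_manager = CallbackManager([token_counter])

docs = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(docs)

# tree_summarize issues several synthesis calls over large contexts;
# each one is routed (and billed) independently.
engine = index.as_query_engine(response_mode="tree_summarize", similarity_top_k=5)
response = engine.query("Summarize the key points.")

print(f"LLM calls this query: {len(token_counter.llm_token_counts)}")
print(f"LLM tokens this query: {token_counter.total_llm_token_count}")

Each entry in llm_token_counts corresponds to one routed request.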
Route LlamaIndex through OrcaRouter today.
Sign up in under a minute, grab an sk-orca-… key, and paste it into LlamaIndex. Zero markup on tokens. Automatic failover across every provider.
