Integration · Connect in 60 seconds · No added markup

LlamaIndex + OrcaRouter

LlamaIndex's OpenAI LLM class accepts api_base and api_key overrides. Route indexing, query, and synthesis calls through OrcaRouter for zero markup and automatic failover across providers.

Connect

Done in five steps.

  1. Install: pip install llama-index-llms-openai
  2. Import OpenAI from llama_index.llms.openai
  3. Construct with api_base='https://api.orcarouter.ai/v1' and api_key='sk-orca-…'
  4. Assign to Settings.llm so every query engine picks it up.
  5. Build indices and query as usual; synthesis routes through OrcaRouter.
Setup
from llama_index.llms.openai import OpenAI
from llama_index.core import Settings

Settings.llm = OpenAI(
    api_base="https://api.orcarouter.ai/v1",
    api_key="sk-orca-...",
    model="gpt-4o",
)

# Now every query engine, agent, and chat engine uses OrcaRouter.
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader
docs = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(docs)
response = index.as_query_engine().query("Summarize the key points.")
Why route LlamaIndex through OrcaRouter?

RAG pipelines make many small calls per query (retrieve → rerank → synthesize). OrcaRouter's per-request routing means each of those calls independently picks the cheapest healthy backend, and you see the full breakdown in one dashboard.
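The per-request selection described above can be sketched as a toy loop. This is illustrative only: the backend names, prices, and health flags are made up for the example and are not OrcaRouter internals.

```python
# Toy sketch of per-request routing: every call independently picks the
# cheapest backend that is currently healthy. Backends here are hypothetical.
backends = [
    {"name": "provider-a", "price_per_1k": 0.50, "healthy": True},
    {"name": "provider-b", "price_per_1k": 0.30, "healthy": False},
    {"name": "provider-c", "price_per_1k": 0.40, "healthy": True},
]

def pick_backend(backends):
    """Return the cheapest healthy backend for a single request."""
    healthy = [b for b in backends if b["healthy"]]
    return min(healthy, key=lambda b: b["price_per_1k"])

# One RAG query issues several LLM calls (rerank, synthesize, ...).
# Selection re-runs for each call, so a backend that degrades mid-query
# is simply skipped on the next call.
for step in ("rerank", "synthesize"):
    choice = pick_backend(backends)
    print(step, "->", choice["name"])
```

Because provider-b is marked unhealthy, each call above routes to provider-c, the cheapest remaining backend, rather than failing over statically once per session.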


Route LlamaIndex through OrcaRouter now.

Sign up in one minute, get your sk-orca-… key, and paste it into LlamaIndex. You get automatic failover across all providers with no added per-token markup.

Get your API key →
© 2026 OrcaRouter