MiniMax M2.7 high-speed — the same model and 200k context window as M2.7, with faster output (~100 tps vs ~60 tps…
```python
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://orcarouter.ai/v1",
    # Read the key from the environment rather than hardcoding it.
    api_key=os.environ["ORCAROUTER_API_KEY"],
)

response = client.chat.completions.create(
    model="minimax/minimax-m2.7-highspeed",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```

| Pricing | Value |
| --- | --- |
| Input / 1M tokens | $0.600 |
| Output / 1M tokens | $2.40 |
| Cache read / 1M tokens | $0.060 |
| Cache write / 1M tokens | $0.375 |
| Currency | USD |
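To see how the per-1M-token prices above translate into a per-request cost, here is a minimal sketch. The `estimate_cost` helper and its key names are illustrative, not part of any API; the prices are taken from the table.

```python
# Per-1M-token prices (USD) from the pricing table above.
PRICES = {
    "input": 0.600,
    "output": 2.40,
    "cache_read": 0.060,
    "cache_write": 0.375,
}

def estimate_cost(usage: dict) -> float:
    """Return the USD cost for a request.

    `usage` maps a price category (e.g. "input", "output") to a token count.
    """
    return sum(PRICES[kind] * tokens / 1_000_000 for kind, tokens in usage.items())

# Example: a request with 10k input tokens and 2k output tokens.
cost = estimate_cost({"input": 10_000, "output": 2_000})
print(f"${cost:.6f}")  # → $0.010800
```

Cache reads are billed at a tenth of the input rate, so prompt caching dominates savings for long, repeated system prompts.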