Anyscale
Fast, cost-efficient, serverless APIs for LLM Serving and Fine-Tuning
Anyscale Endpoints offers fast, cost-efficient, serverless APIs for serving and fine-tuning Large Language Models (LLMs) with a focus on production-readiness. Users can start with common LLMs, including the Llama-2 family and Mistral 7B, and fine-tune them for specific applications.
Pricing: Pay-as-you-go
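Since Anyscale Endpoints exposes its hosted models through an OpenAI-compatible API, a client call can be sketched with the standard `openai` Python package. The base URL, environment-variable name, and model identifier below are illustrative assumptions, not confirmed values from this listing:

```python
# Hypothetical sketch of calling an OpenAI-compatible LLM serving API.
# Base URL, env var name, and model id are assumptions for illustration.
import os


def build_chat_request(model: str, prompt: str, temperature: float = 0.7) -> dict:
    """Assemble a request body for an OpenAI-compatible /chat/completions call."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
    }


request = build_chat_request(
    "meta-llama/Llama-2-7b-chat-hf",  # assumed model name from the Llama-2 family
    "Summarize LLM serving in one sentence.",
)

# Actually sending the request needs an API key, so the network call is guarded
# and the sketch still runs (building the payload only) without credentials.
if os.environ.get("ANYSCALE_API_KEY"):
    from openai import OpenAI

    client = OpenAI(
        base_url="https://api.endpoints.anyscale.com/v1",  # assumed endpoint
        api_key=os.environ["ANYSCALE_API_KEY"],
    )
    reply = client.chat.completions.create(**request)
    print(reply.choices[0].message.content)
```

Pay-as-you-go pricing means each such call is billed per token processed rather than for reserved capacity.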
Anyscale Alternatives
Explore 55 products in the Inference APIs category.
deepinfra
Run the top AI models using a simple API, pay per use. Low-cost, scalable, and production-ready infrastructure.
Cerebras
Ultra-fast inference on custom wafer-scale hardware with OpenAI-compatible API
AiQu
Swedish GPU infrastructure and LLM hosting platform with API-first deployment, no Kubernetes required