AiProductsHunt
OpenAI API Proxy

Provides the same OpenAI-compatible proxy API for different LLM models

About Details
Name: OpenAI API Proxy
Submitted By: Rocio Cormier
Release Date: 10 months ago
Website: Visit Website
Category: API, Open Source, GitHub

Provides the same proxy OpenAI API interface for different LLM models (OpenAI, Anthropic, Vertex AI, Gemini), and supports deployment to any Edge Runtime environment.
✅ Compatible with OpenAI API
✅ Support for multiple models
✅ Suitable for Edge environments
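Because the proxy exposes one OpenAI-compatible interface for every backend, a client only swaps the model name to target a different provider. A minimal sketch of that idea, assuming a hypothetical deployment URL (`PROXY_BASE_URL` is an illustration, not from the project docs):

```python
import json

# Hypothetical proxy endpoint -- the real URL depends on where you
# deploy the proxy (this base URL is an assumption for illustration).
PROXY_BASE_URL = "https://my-proxy.example.com/v1"

def build_chat_request(model: str, user_message: str) -> dict:
    """Build an OpenAI-style chat completion payload.

    The proxy presents the same interface for OpenAI, Anthropic,
    Vertex AI, and Gemini, so only the model name changes.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }

# The same payload shape works regardless of the underlying provider:
openai_req = build_chat_request("gpt-4o-mini", "Hello!")
claude_req = build_chat_request("claude-3-5-sonnet", "Hello!")

# A POST to f"{PROXY_BASE_URL}/chat/completions" with this JSON body
# would be handled the same way for either model (sketch only; no
# network call is made here).
print(json.dumps(openai_req))
```

The model identifiers above are examples; whichever models your proxy configuration enables, the request shape stays identical.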


Curt Nienow

If you're into AI, this proxy is a game changer! The fact that it's open-source and can be deployed on Edge Runtime environments is a big plus. No need for a third party. Good luck with the launch!

10 months ago


Burley Collins

Update 2024-08-27

Several new models are supported, and Groq's speed is indeed amazingly fast. Why is it so quick?

- [x] Groq
- [x] DeepSeek
- [x] Moonshot

What's the next model you want to support?

11 months ago


Esteban Kilback

Congrats on launching this API proxy! Super useful for folks who wanna avoid routing through third parties. Can't wait to see how this evolves. Good luck!

1 year ago


Korey Hammes

What are the costs associated with using this API proxy beyond the free requests provided? For me, understanding the pricing model helps in evaluating the overall value of the tool.

1 year ago


Esteban Hilpert

Update 2024-08-29

- Supports the Cerebras model, which seems to be extremely fast (1700 tok/s); see the blog post: https://cerebras.ai/blog/introducing-cerebras-inference-ai-at-instant-speed
- Supports Azure OpenAI models. Because Deployment and Model names may not match, you need to configure aliases mapping each model to its deployment name, e.g. `gpt-4o-mini:gpt-4o-mini-dev,gpt-35-turbo:gpt-35-dev`; other configuration is the same as the official one.
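The alias string mentioned above maps model names to Azure deployment names. A small sketch of how such a spec could be parsed (`parse_model_aliases` is a hypothetical helper for illustration, not part of the project):

```python
def parse_model_aliases(spec: str) -> dict:
    """Parse a comma-separated "model:deployment" alias string,
    e.g. "gpt-4o-mini:gpt-4o-mini-dev,gpt-35-turbo:gpt-35-dev",
    into a {model: deployment_name} mapping.

    (Hypothetical helper -- shown only to clarify the alias format.)
    """
    aliases = {}
    for pair in spec.split(","):
        model, deployment = pair.split(":", 1)
        aliases[model.strip()] = deployment.strip()
    return aliases

aliases = parse_model_aliases(
    "gpt-4o-mini:gpt-4o-mini-dev,gpt-35-turbo:gpt-35-dev"
)
print(aliases["gpt-4o-mini"])  # gpt-4o-mini-dev
```

With such a mapping in hand, a request for `gpt-4o-mini` can be routed to the `gpt-4o-mini-dev` deployment even though the two names differ.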

1 year ago


Trenton Spencer

While the idea is promising, it would be helpful to have detailed documentation on how to handle the different API endpoints and features of each LLM model.

1 year ago