
Shark Puddle
Pitch your business/startup idea to a panel of Puddle Sharks



| About | Details |
| --- | --- |
| Name | Shark Puddle |
| Submitted By | Nils Auer |
| Release Date | 10 months ago |
| Website | Visit Website |
| Category | SaaS, GitHub, Startup Lessons |
Ever had a great business idea, but no idea how to turn it into reality, or whether it's even viable? Bring your ideas - good, bad, and ridiculous - to the Shark Puddle to have them tested against our AI-generated Puddle Sharks.
A great idea for demonstrating how AI can easily be added to any product... and a fun and useful app in its own right :)
10 months ago
Hello, and thank you for stopping by! I'm one of the technical team behind Shark-Puddle and LLMAsAService.io, and I'd like to share some details and requests in addition to what @indomitabelehef commented on earlier. If you have any questions about the implementation, just reply to me here.

1. Shark-Puddle is Open Source

You can find the source code here: https://github.com/PredictabilityAtScale/shark-puddle

2. Built with Next.js on AWS Amplify (Gen 2)

The application is a Next.js app hosted on AWS Amplify's second generation.

3. All Streaming LLM Services via https://LLMAsAService.io

In the source code, you'll notice there are no direct calls to OpenAI, Anthropic, or Google Gemini. Instead, all streaming LLM services are centrally managed through LLMAsAService.io, which handles:

a) Failover Management: provides failover to one of the eight defined vendors and models.

b) Customer Token Tracking and Allowances: we monitor token usage and allowances (we appreciate your support, but it's currently on my credit card).

c) Safety Guardrails for PII and Toxic Requests: feel free to test this by attempting to input "bad" things and see how the system responds.

d) Prompt Complexity Routing: we analyze your prompts and route them to either "simple/fast" or "slow/high-power" models. Tip: if you click "Try Again," we use a stronger model.

4. Streaming Responses and Backend Testing

You might notice streaming responses, sometimes several at once. We're aiming to push our backend to its limits, so please give it a good workout! Our NPM package, `llmasaservice-client`, includes our `useLLM` hook, which supports all of these features and has a callback for when a response is finally complete.

Calling LLM Implementation:

The only code used to call LLMs is the following.

Step 1: Create the hook instance (this configures the LLM service with a customer so we can keep track of token usage):

```tsx
import { useLLM } from "llmasaservice-client";

const { response, idle, send } = useLLM({
  project_id: process.env.NEXT_PUBLIC_PROJECT_ID,
  customer: {
    customer_id: idea?.email ?? "",
    customer_name: idea?.email ?? "",
  },
});
```

Step 2: Make a streaming call. Use `send` to send the prompt; `response` is what gets displayed:

```tsx
const handleSubmit = () => {
  const prompt = `Summarize the following idea in one or two sentences. Idea: "${ideaText}."`;
  send(prompt);
};
```

And that's it! We manage the keys, services, monitoring, security, and customer onboarding, all from a control panel. Nothing in the code needs to change, even when OpenAI adds a new model, like the o1 model a few days before launch :) Adding it was easy for us (it's in the premium model group!).

So, while you're having fun and getting solid feedback on business ideas, please take a look at how we built it and share any suggestions on how we can improve.

Best regards,
Troy
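
For readers who want to see how the two steps above fit together, here is a minimal sketch of a Next.js client component built around them. Only `useLLM`, `response`, `idle`, `send`, and the hook configuration come from the comment above; the component name `IdeaSummary`, the form markup, the placeholder customer email, and the assumption that `idle` is true when no request is streaming are illustrative, not taken from the Shark Puddle source.

```tsx
"use client";

// Minimal sketch: a client component that collects an idea, sends it through
// the useLLM hook, and renders the streamed response as it arrives.
// Assumptions (not from the comment above): component name, form markup,
// placeholder customer email, and that `idle` is true when nothing is streaming.
import { useState } from "react";
import { useLLM } from "llmasaservice-client";

export default function IdeaSummary() {
  const [ideaText, setIdeaText] = useState("");

  // Step 1: create the hook instance. The customer fields let the
  // LLMAsAService.io control panel track token usage per customer.
  const { response, idle, send } = useLLM({
    project_id: process.env.NEXT_PUBLIC_PROJECT_ID ?? "", // ?? "" keeps strict TypeScript happy if the env var is undefined
    customer: {
      customer_id: "demo@example.com", // placeholder; Shark Puddle uses the idea submitter's email
      customer_name: "demo@example.com",
    },
  });

  // Step 2: build the prompt and send it. The streamed reply accumulates in
  // `response`, so the component re-renders as the text grows.
  const handleSubmit = () => {
    const prompt = `Summarize the following idea in one or two sentences. Idea: "${ideaText}."`;
    send(prompt);
  };

  return (
    <div>
      <textarea
        value={ideaText}
        onChange={(e) => setIdeaText(e.target.value)}
        placeholder="Describe your business idea"
      />
      <button onClick={handleSubmit} disabled={!idle}>
        Summarize
      </button>
      <p>{response}</p>
    </div>
  );
}
```

Because failover, guardrails, and model routing are handled server-side by LLMAsAService.io, a component like this stays vendor-agnostic: adding or swapping models in the control panel requires no code change here.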
10 months ago