📊 Your 2024 Portkey Wrapped is here! Dive into your AI journey: requests processed, response times, reliability stats, and the innovative ways your team used LLMs. Plus, see how much you saved through cache optimization! Check your inbox for the full story ✨
Portkey
Technology, Information and Internet
San Francisco, California 4,722 followers
Production Stack for Gen AI
About us
AI Gateway, Guardrails, and Governance. Processing 14 Billion+ LLM tokens every day. Backed by Lightspeed.
- Website
- https://portkey.ai
- Industry
- Technology, Information and Internet
- Company size
- 51-200 employees
- Headquarters
- San Francisco, California
- Type
- Privately Held
- Founded
- 2023
Locations
- San Francisco, California, US (Primary)
- Bengaluru, Karnataka, IN
Updates
-
Everyone's asking if they should use OpenAI or Azure OpenAI in 2025. They're asking the wrong question. Here's what happened when we analyzed billions of production requests across both platforms this year, and why the answer might surprise you... Swipe ➡️ #AIEngineering #OpenAI #AzureOpenAI #LLMsInProd
-
If you're building any AI app that uses RAG, you need to check out the Perplexity API — they offer some of the most advanced search & retrieval features, all available for relatively low cost. Let's dive into what's possible:
Feature 1️⃣: Search Domain Filter
Want your LLM to only cite scientific papers? Or avoid certain websites? Just send search_domain_filter=["arxiv.org", "nature.com"]. Perfect for research tools, corporate compliance, and filtering out noise.
Feature 2️⃣: Search Recency Filter
Need real-time data? Filter search results by time: search_recency_filter="hour". Game-changing for market analysis, news monitoring, and trending topics.
Feature 3️⃣: Related Questions Generation
Auto-generate intelligent follow-up questions: return_related_questions=True. Great for research assistants, learning tools, and guided exploration of complex topics.
Feature 4️⃣: Include Image Results
Get relevant images, charts, and diagrams right in your LLM responses: return_images=True. Essential for technical documentation, visual research, and richer user experiences.
All these features are available over the Portkey API! Docs: https://lnkd.in/gkAwZkBv
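Putting the four parameters above together, here's a minimal sketch of an OpenAI-style chat request body. Only the four search fields come from the post; the model id "sonar" and how you send the payload (endpoint, auth) are assumptions — check the Perplexity docs for the current values.

```python
import json

# Hedged sketch: a chat request body combining the four Perplexity search
# parameters described above. The model name is a hypothetical placeholder.
payload = {
    "model": "sonar",  # hypothetical Perplexity model id
    "messages": [
        {"role": "user", "content": "Summarize this week's transformer papers."}
    ],
    "search_domain_filter": ["arxiv.org", "nature.com"],  # only cite these domains
    "search_recency_filter": "hour",                      # results from the last hour
    "return_related_questions": True,                     # suggest follow-up questions
    "return_images": True,                                # include relevant images
}

print(json.dumps(payload, indent=2))
```

Because the body is plain OpenAI-compatible JSON, the same dict works whether you POST it to Perplexity directly or route it through the Portkey API.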
-
🔥 Last week's Portkey Office Hour was packed with production insights:
- Gemini outperforming GPT-4o for Hinglish translation
- How Springworks & Haptik cut latency in HALF with managed gateways
- Real talk on RAG bottlenecks & scaling challenges
The best part? Engineers from Springworks and Haptik shared their actual implementation patterns:
- Virtual key mapping for faster prototyping
- Pre-scale resource monitoring tricks
- Smart caching strategies that worked
And yes, we sent everyone Theobroma cookies because great conversations deserve great treats 🍪
Join this week's discussion with other AI builders: https://lnkd.in/gP3559WP
-
This is not all — we've also mapped the entire MCP universe: all MCP servers and their implementations, in one place. It's the most comprehensive directory of what you can build with MCP. The ecosystem spans:
- Data & Storage
- Cloud & Infrastructure
- Development Tools
- Content & Search
- AI & Memory
- Productivity
- System Utilities
Browse here: https://lnkd.in/gEtrMJKe
-
🧵 We gave AI an impossible request, then watched it come alive. Today, we're announcing MCP (Model Context Protocol) Client by Portkey — the world's first truly magical agent platform that turns "write me an app" into "here's your deployed app." No complex setup. No integration hell. Just pure possibility. See it in action 👇 and sign up for the waitlist → https://portkey.ai/mcp
-
After evaluating 17 different platforms, this AI team replaced 2+ years of homegrown tooling with Portkey Prompts. Why?
1. Prompt partials + Mustache templates: easily build modular, reusable prompts
2. Robust versioning & publishing: confidently update and roll out changes
3. Simple SDKs & OpenAI-compatible APIs: integrate seamlessly, no refactoring needed
Bonus: built-in monitoring for instant insights!
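To make "prompt partials + Mustache templates" concrete, here's a minimal sketch of the idea in plain Python — a toy {{variable}} renderer and a reusable fragment composed into a larger prompt. This is an illustration of the pattern, not the Portkey SDK; the function and template names are made up.

```python
import re

def render(template: str, context: dict) -> str:
    """Toy Mustache-style renderer: replaces {{name}} with context values."""
    return re.sub(
        r"\{\{\s*(\w+)\s*\}\}",
        lambda m: str(context.get(m.group(1), "")),
        template,
    )

# A "partial" is just a reusable prompt fragment composed into larger prompts.
tone_partial = "Respond in a {{tone}} tone."
prompt = "You are a support agent. " + tone_partial + " Question: {{question}}"

print(render(prompt, {"tone": "friendly", "question": "How do I reset my password?"}))
# → You are a support agent. Respond in a friendly tone. Question: How do I reset my password?
```

In a managed prompt system, the partial and the composed template are versioned server-side, so updating the tone fragment once propagates to every prompt that includes it.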
-
🔥 MASSIVE UPDATE to Portkey's AI Gateway! Introducing the Gateway Console - your new supercharged debugging companion that just works. Zero config needed. Zero extra tools required.
What's new?
- Built-in request logging & monitoring
- Crystal-clear response tracking
- Latency & status code monitoring
- Quick-start guides that actually help
- Copy-paste ready code samples for 250+ LLMs
Getting started is dead simple: just run npx @portkey-ai/gateway in your terminal and visit localhost:8787/public/ to see the console in action!
We built this because our amazing community asked for it - and we're just getting started! Take it for a spin and tell us what you think 🚀
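Once the gateway is running locally, requests go to its OpenAI-compatible endpoint with the target provider named in a header. A minimal sketch of that wiring, under stated assumptions: the "/v1/chat/completions" route and the "x-portkey-provider" header name are taken from the gateway's OpenAI-compatible design, but verify both against the console's quick-start guide before relying on them.

```python
# Hedged sketch: request wiring for a locally running gateway started with
# `npx @portkey-ai/gateway` (listening on port 8787, as in the post above).
GATEWAY_URL = "http://localhost:8787/v1/chat/completions"  # assumed route

def gateway_headers(provider: str, api_key: str) -> dict:
    """Build the headers the gateway needs to route a request (assumed names)."""
    return {
        "Content-Type": "application/json",
        "x-portkey-provider": provider,        # e.g. "openai", "anthropic"
        "Authorization": f"Bearer {api_key}",  # forwarded to the provider
    }

print(gateway_headers("openai", "sk-..."))
```

With this in place, every request sent through GATEWAY_URL shows up in the console at localhost:8787/public/ with its latency and status code.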