Augment Code

Software Development

Palo Alto, California 3,687 followers

The Developer AI that deeply understands your codebase and how your team builds software.

About us

Augment puts your team’s collective knowledge—codebase, documentation, and dependencies—at your fingertips via chat, code completions, and suggested edits. Get up to speed, stay in the flow and get more done. Lightning fast and highly secure, Augment works in your favorite IDEs and Slack. We proudly augment developers at Webflow, Kong, Pigment, and more. We are alumni of great AI and cloud companies, including Google, Meta, NVIDIA, Snowflake, and Databricks. If, like us, you believe in augmenting and not replacing software developers, join us on our mission to improve software development at scale using AI.

Industry
Software Development
Company size
51-200 employees
Headquarters
Palo Alto, California
Type
Privately Held
Founded
2022
Specialties
AI, Software Engineering, Developer Tools, Platform Engineering, and Developer Productivity


Updates

  • Hello, world! We’re Augment Code, the first developer AI purpose-built for teams. What makes Augment different? Context. Every Augment feature is context-aware. Read more: https://bit.ly/4fdJ1UU

    View profile for Scott Dietzen

    Reimagining Software Engineering with AI

    AI coding tools are everywhere. Yet they all fall short when scaling to 100s of developers and complex codebases. We’re solving this problem at Augment Code. Today, we’re sharing the first look at our platform: the first developer AI for teams.

    What makes Augment different? Context. Every Augment feature is context-aware. This means every suggestion, completion, and interaction reflects the components, APIs, and coding patterns in your codebase. For software engineers on professional teams, context really matters. When AI deeply understands your codebase, incredible things happen…

    🤝 Onboarding new developers? Done. Augment brings instant answers to every engineer, so new teammates can quickly get up to speed. Use chat to ask questions like, “where do we bootstrap this app?” or “what’s the cadence of releases?” to get your team ramped, fast.

    🎛 Constant context switching? Solved. Get instant answers and code completions, right in your IDE.

    🏭 Institutional knowledge silos? Eliminated. No more searching through out-of-date or missing documentation, or worse, fielding constant questions.

    💾 Legacy code? Handled. Augment deeply understands your entire codebase, regardless of when it was built, who wrote it, or what languages and dependencies it uses.

    💻 Augment works as an IDE extension, starting with VSCode or JetBrains.

    💪 Teams at Webflow, Kong Inc., Pigment, and more are already building with Augment.

    The best way to see if Augment works for your team is to try it, for free. Give us your largest repos and most complex code to see what we can do: https://bit.ly/4fjUFxL

    Thank you to our earliest customers: your feedback, support, and ideas inspire our team daily. 💚 And thank you to our investors Sutter Hill Ventures, Index Ventures, Innovation Endeavors, Lightspeed, and Meritech Capital 🚀

  • We've achieved a significant breakthrough in AI code completion by pioneering a new approach that learns directly from natural coding workflows. Here's why this matters for the future of developer tools.

    The Challenge: Traditional reinforcement learning approaches rely heavily on human annotation and explicit feedback, which becomes impractical when dealing with large codebases. Consider this: it might take a human annotator an entire day just to understand enough context to label a single example in a million-line codebase. Additionally, public repositories rarely contain the incomplete, work-in-progress code states that developers regularly work with in their IDEs.

    Our solution, RLDB, takes a fundamentally different approach:

    🛠️ Data Infrastructure: We've built a system that captures full repository content and IDE states at regular intervals, enabling context-aware reward model (RM) and reinforcement learning (RL) training.

    💻 Algorithm Design: Rather than relying on traditional ranking-based approaches, we developed a custom RL algorithm that optimizes towards the distribution of real-world coding tasks. This allows our reward model to train on 100x more data than traditional comparison pairs.

    The improvements we've seen are substantial:
    1️⃣ Performance boost equivalent to doubling model size or using 10x more training data
    2️⃣ 8%+ perplexity reduction over traditional RL methods
    3️⃣ Better handling of incomplete code states
    4️⃣ Reduced hallucinations and repetitions in suggestions

    Most importantly, these improvements came without requiring developers to change their workflow or provide explicit feedback. The system learns from natural coding patterns, making it scalable and non-intrusive.

    #AI #AIforCode #DeveloperTools #AIResearch
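
    Since the post describes RLDB only at a high level, here is a hedged toy sketch (not Augment's implementation) of the contrast it draws: a pointwise reward model trained on labels harvested from periodic repository/IDE snapshots, versus the pairwise comparison data that ranking-based training requires. The names below (SnapshotExample, the "survived in the next snapshot" label, featurize) are hypothetical stand-ins; the real RLDB reward signal, features, and model architecture are not public.

    ```python
    # Illustrative toy only; assumes a "did the suggestion survive to the next
    # snapshot" label as the reward signal, which the post does not specify.
    from dataclasses import dataclass
    import torch
    import torch.nn as nn

    @dataclass
    class SnapshotExample:
        context: str       # repository + IDE state captured at time t (flattened here)
        completion: str    # the completion that was shown to the developer
        survived: float    # 1.0 if the text persisted in the next snapshot, else 0.0

    def featurize(text: str, dim: int = 512) -> torch.Tensor:
        """Hashed bag-of-tokens; a real system would use a code-aware encoder."""
        vec = torch.zeros(dim)
        for tok in text.split():
            vec[hash(tok) % dim] += 1.0
        return vec

    class RewardModel(nn.Module):
        def __init__(self, dim: int = 512):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(2 * dim, 128), nn.ReLU(), nn.Linear(128, 1))

        def forward(self, ctx: torch.Tensor, comp: torch.Tensor) -> torch.Tensor:
            return self.net(torch.cat([ctx, comp], dim=-1)).squeeze(-1)

    def train_step(model: RewardModel, opt: torch.optim.Optimizer, batch: list) -> float:
        ctx = torch.stack([featurize(ex.context) for ex in batch])
        comp = torch.stack([featurize(ex.completion) for ex in batch])
        target = torch.tensor([ex.survived for ex in batch])
        # Pointwise objective: every captured snapshot yields a training example,
        # unlike pairwise ranking, which needs two competing completions per example.
        loss = nn.functional.binary_cross_entropy_with_logits(model(ctx, comp), target)
        opt.zero_grad()
        loss.backward()
        opt.step()
        return loss.item()

    if __name__ == "__main__":
        model = RewardModel()
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        batch = [SnapshotExample("def add(a, b):", "return a + b", 1.0),
                 SnapshotExample("def add(a, b):", "return a - b", 0.0)]
        print(train_step(model, opt, batch))
    ```

    The design point the sketch tries to capture is that every snapshot can yield a labeled example, whereas ranking-based training needs curated pairs of competing completions, which is one plausible reading of the "100x more data" claim above.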

  • Augment Code reposted this

    View profile for Scott Dietzen

    Reimagining Software Engineering with AI

    Thrilled to announce a major step forward in our mission to advance open source development: Augment Code is now free for open source contributors and maintainers.

    What excites me most about this initiative is how it addresses two fundamental challenges in open source:
    1. The steep learning curve for new contributors trying to understand large codebases
    2. The time-intensive back-and-forth between maintainers and contributors on PRs

    We've seen remarkable results already. Jonathan Ellis, Apache Cassandra committer and DataStax CTO, reports a 50% hit rate in understanding Cassandra's complex codebase using Augment - a breakthrough for developer productivity in large-scale open source projects.

    What makes this different? Full access to Augment's capabilities - no rate limits, no outdated models. Contributors get comprehensive codebase understanding and intelligent suggestions that align with project patterns and practices. This means faster onboarding, more meaningful contributions, and less review friction.

    Sign up at augmentcode.com/opensource. Happy Holidays!

    Get up to speed, stay in the flow, get more done

    augmentcode.com

  • The secret to efficient LLM inference: rethinking how we batch requests

    We discovered something fascinating while optimizing our inference stack: traditional request batching is leaving massive GPU potential untapped. Even when batching 10 parallel decoding requests together, you're typically using just 2% of your GPU's FLOPS. We knew there had to be a better way.

    Our solution? Let decode tokens "piggyback" on context processing. Instead of traditional request batching, we:
    - Mix tokens from multiple requests in the same batch
    - Stay FLOPS-bound whenever possible
    - Optimize for real-world developer workflows

    The results speak for themselves:
    ✨ Higher GPU utilization
    ⚡ Lower latency
    📈 Better cost efficiency
    🎯 Faster response times

    The academic world calls this approach "chunked prefill." We call it the key to achieving deep context with low latency. Here's how we did it: https://lnkd.in/gVT5kQRg

    #MachineLearning #GPUOptimization #AI #Engineering #Innovation

    Rethinking LLM Inference: Why Developer AI Needs a Different Approach

    augmentcode.com
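
    The post above names the underlying technique ("chunked prefill") but doesn't publish scheduler details, so the following is a minimal, hedged sketch of the batching idea it describes: each model step packs a chunk of one pending request's context (prefill) tokens together with one decode token from every in-flight request, rather than batching whole requests. TOKEN_BUDGET, Request, and schedule_step are illustrative names, not Augment's API.

    ```python
    # Toy scheduler sketch; request completion, eviction, and KV-cache handling are omitted.
    from dataclasses import dataclass
    from collections import deque

    TOKEN_BUDGET = 512  # max tokens processed per model step (assumed knob)

    @dataclass
    class Request:
        rid: int
        prompt_len: int  # context tokens still waiting to be prefilled

    def schedule_step(prefill_queue: deque, decoding: list) -> list:
        """Build one mixed batch as (rid, kind, n_tokens) entries."""
        batch = []
        # Every in-flight decode request contributes exactly one new token this step.
        for req in decoding:
            batch.append((req.rid, "decode", 1))
        budget = TOKEN_BUDGET - len(batch)
        # Decode tokens "piggyback" on context processing: the remaining budget is
        # filled with a chunk of the next pending prefill, keeping the batch large.
        if prefill_queue and budget > 0:
            req = prefill_queue[0]
            chunk = min(req.prompt_len, budget)
            batch.append((req.rid, "prefill", chunk))
            req.prompt_len -= chunk
            if req.prompt_len == 0:  # context fully processed; start decoding it
                decoding.append(prefill_queue.popleft())
        return batch

    if __name__ == "__main__":
        prefills = deque([Request(rid=1, prompt_len=1200), Request(rid=2, prompt_len=300)])
        decoders = []
        for step in range(4):
            print(f"step {step}:", schedule_step(prefills, decoders))
    ```

    Because the prefill chunk keeps each batch near the token budget, the matrix multiplications stay FLOPS-bound even though each decoding request produces only one token per step, which is the utilization gain the post points to.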

  • LMNT co-founder Zach Johnson explains how context-aware code completions helped their team maintain a consistent style while building speech models. See how they use Augment to speed up development without sacrificing code quality.
