Linkt

Linkt

Software Development

Austin, Texas 406 followers

Applied AI Lab | Partnering with companies to develop and deploy real-world capabilities.

About us

Linkt is at the forefront of democratizing AI technology. Our goal is to transform how industries leverage artificial intelligence, making it not just accessible but profoundly beneficial. We excel in creating tailor-made AI solutions that revolutionize customer engagement and significantly elevate team productivity. Our expertise lies in deploying cutting-edge algorithms and user-friendly interfaces to deliver seamless experiences. Join us as we redefine the future of AI in business, fostering innovation and driving measurable success across sectors.

Website
https://www.linkt.ai/
Industry
Software Development
Company size
2-10 employees
Headquarters
Austin, Texas
Type
Privately Held
Founded
2023

Locations

Employees at Linkt

Updates

  • Linkt reposted this

    View profile for Reid McCrabb

    Co-founder and CEO of Linkt AI

    This year our dev team delivered 10x the number of features while adding only two new engineers. Here's how:

    AI in our workflow: We made Cursor our company's required IDE, greatly increasing our team's efficiency.

    Lean team structure: We operate without traditional PM or design roles, relying on direct customer feedback and using call transcripts plus AI for instant Jira ticket creation.

    AI problem solving: Our team is full of AI-native problem solvers; bottlenecks tend to be just a few ChatGPT questions away from resolution.

    Curious to see the impact? Take a look at two of our developers' GitHub stats below.

  • A recent Microsoft paper reveals that OpenAI's GPT-4o-mini is an approximately 8 billion parameter model, demonstrating effective distillation techniques. The paper outlines other notable model sizes, including Claude 3.5 Sonnet at 175 billion, GPT-4 at 1.76 trillion, GPT-4o at 200 billion, o1-preview at 300 billion, and o1-mini also at 200 billion. These metrics highlight the advancements in AI model development, particularly in optimizing performance while reducing size. The successful distillation of GPT-4o-mini suggests that smaller models can retain significant reasoning capabilities, making them more accessible for various applications and devices like smartphones. The findings are vital as they indicate that smaller models like GPT-4o-mini can provide competitive performance compared to larger counterparts, potentially reducing computational costs and improving deployment efficiency. This is particularly relevant for developers and organizations looking to integrate AI solutions without the extensive resource requirements of larger models. For more in-depth insights, you can access the full paper here: https://lnkd.in/ezZcEDzq
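    For a sense of why an ~8 billion parameter model is smartphone-friendly, here is a back-of-the-envelope sketch (our own illustration; the precision choices and the weights-only assumption are ours, not from the paper):

    ```python
    # Rough memory footprint of the weights of an ~8B-parameter model at
    # common precisions. Assumption (ours, not from the paper): weight
    # storage dominates; KV cache and runtime overhead are ignored.
    PARAMS = 8e9  # parameter count suggested for GPT-4o-mini

    def weight_gib(params: float, bytes_per_param: float) -> float:
        """Approximate weight memory in GiB."""
        return params * bytes_per_param / 2**30

    for name, bpp in [("fp16", 2.0), ("int8", 1.0), ("int4", 0.5)]:
        print(f"{name}: ~{weight_gib(PARAMS, bpp):.1f} GiB")
    ```

    At 4-bit quantization the weights come to under 4 GiB, which is why a model of this size is plausible on high-end phones.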

  • Sonus AI has launched its new family of large language models (LLMs) with the Sonus-1 series, designed to cater to various needs in AI applications. The series includes four models: Sonus-1 Mini, optimized for speed and cost-effectiveness; Sonus-1 Air, which balances performance and resource usage; Sonus-1 Pro, the top-tier model for complex tasks; and Sonus-1 Pro (with Reasoning), featuring advanced chain-of-thought reasoning capabilities. The Sonus-1 Pro model has demonstrated impressive performance across several benchmarks. It achieved an 87.5% accuracy on the MMLU benchmark, which increased to 90.15% when utilizing its reasoning capabilities. On the MMLU-Pro benchmark, Sonus-1 Pro scored 71.8%, showcasing its versatility in handling diverse tasks. The model excels in mathematical reasoning as well, achieving a remarkable 91.8% on the MATH-500 benchmark, indicating its strong capabilities in complex calculations. Additionally, it scored 88.9% on the DROP reasoning benchmark and 67.3% on GPQA-Diamond, further solidifying its position as a competitive player in the LLM space. These advancements highlight Sonus AI's commitment to delivering high-performance models tailored for various applications, from rapid responses to intricate reasoning tasks. For more details, visit https://lnkd.in/eH-YeZBm

  • DeepSeek-AI has launched DeepSeek-V3, a powerful Mixture-of-Experts (MoE) language model boasting 671 billion parameters, with 37 billion activated per token. This model aims to optimize inference and training costs through innovative architectures like Multi-head Latent Attention (MLA) and an auxiliary-loss-free strategy for load balancing. DeepSeek-V3 was pre-trained on an impressive 14.8 trillion diverse tokens, followed by supervised fine-tuning and reinforcement learning stages. The model's performance was rigorously evaluated across multiple benchmarks, demonstrating superior accuracy in various tasks. For instance, it achieved 88.5 on the MMLU benchmark and 75.9 on MMLU-Pro, outperforming all other open-source models and rivaling leading closed-source models like GPT-4o. Training DeepSeek-V3 required only 2.788 million H800 GPU hours, translating to approximately $5.576 million in costs, making it economically efficient for the scale of its capabilities. The training process was notably stable, with no irrecoverable loss spikes observed. DeepSeek-V3 excels in code generation and mathematical reasoning, achieving state-of-the-art performance on benchmarks like MATH-500 and LiveCodeBench. This positions it as a leading choice for developers and researchers seeking robust AI solutions. 🔗 Read more about DeepSeek-V3 here: https://lnkd.in/gkUTJRd8
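    The quoted dollar figure follows directly from the GPU-hour count; a quick sanity check (the $2-per-H800-GPU-hour rental rate is the assumption stated in the DeepSeek-V3 report):

    ```python
    # Sanity check of the quoted training cost. Assumption (stated in the
    # DeepSeek-V3 report): a rental price of $2 per H800 GPU hour.
    GPU_HOURS = 2.788e6   # reported H800 GPU hours for the full training run
    RATE_USD = 2.0        # assumed $/GPU-hour

    cost_usd = GPU_HOURS * RATE_USD
    print(f"${cost_usd / 1e6:.3f}M")  # → $5.576M
    ```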

  • A recent study published on arXiv reveals that OpenAI's o1-preview model significantly outperforms human doctors in medical reasoning tasks. The findings show that while doctors achieved only a 30% correct diagnosis rate, the AI model reached an impressive 80% accuracy. This research, conducted by teams from Harvard Medical School and Stanford University, evaluated o1-preview’s performance through five rigorous experiments, including differential diagnosis and management reasoning. Traditional evaluations of large language models (LLMs) often rely on multiple-choice questions, which do not effectively simulate real clinical scenarios. In contrast, the o1-preview model was assessed based on its ability to synthesize clinical data and provide accurate diagnoses, showcasing its potential in real-world applications. The model excelled particularly in generating differential diagnoses and demonstrated high-quality diagnostic reasoning. Despite its strengths, the study noted that o1-preview did not show significant improvements in probabilistic reasoning tasks. Nevertheless, this advancement is crucial as it opens up new avenues for accessible medical care. With AI providing high-quality diagnostic advice that is always available, it could alleviate some challenges faced by individuals seeking medical attention. Overall, this study highlights the potential of AI in enhancing clinical decision-making and improving healthcare accessibility while emphasizing the need for ongoing research to refine these technologies further. 🔗 Read more about the study here: https://lnkd.in/e9HD2j3y 

  • Google DeepMind has introduced Veo 2, a revolutionary AI video generation model that sets new standards in video quality and realism. Veo 2 can generate videos in up to 4K resolution and lengths exceeding two minutes, significantly outperforming competitors like OpenAI's Sora. The model understands cinematographic language, allowing users to specify genres, lens types, and cinematic effects, accurately replicating complex shots like low-angle tracking shots and shallow depth of field. Veo 2 also enhances realism by simulating real-world physics, fluid dynamics, and human movement with greater accuracy, reducing common issues like hallucinated details. To ensure safety, Veo 2 includes Google's proprietary SynthID watermark to identify AI-generated content and mitigate risks of deepfakes and misinformation. Currently available through Google's VideoFX tool, Veo 2 is set to expand to YouTube Shorts and other Google products in 2025. At Linkt.ai, we see Veo 2 as a significant advancement in AI video generation. How do you think Veo 2 will impact the future of video content creation in your industry? 🔗 Read more about Veo 2 here: https://lnkd.in/gu9RRaQ7

  • Google DeepMind has introduced FACTS Grounding, a benchmark to evaluate the factuality and grounding of large language models (LLMs). Here's what you need to know:
    - Comprehensive benchmark: 1,719 examples testing LLMs' ability to generate accurate responses from documents up to 32,000 tokens.
    - Evaluation method: Gemini 1.5 Pro, GPT-4o, and Claude 3.5 Sonnet serve as judges.
    - Criteria: Responses are judged on query relevance and factual accuracy.
    - Applications: Summaries, question answering, and rephrasing across various fields.
    The analysis reveals some intriguing insights:
    - Top models struggle with factuality: Gemini 1.5 Pro (64.1%), GPT-4o (61.4%), and Claude 3.5 Sonnet (58.3%).
    - Models often generate unsupported responses, highlighting the need for better grounding.
    This benchmark addresses hallucinations in LLMs, aiming to enhance trust and reliability in AI systems. Do you think FACTS Grounding will impact the development and deployment of AI models? Let us know! 🔗 Read the FACTS Grounding paper here: https://lnkd.in/eiU8jfgV

  • We are already halfway through the 12 Days of OpenAI: 12 days of launches, demos, and more from the OpenAI team as a Christmas gift for everyone. One major announcement is the introduction of ChatGPT Pro, which aims to broaden access to cutting-edge AI capabilities. This new offering is set to empower developers, startups, and businesses by providing enhanced features, improved performance, and greater flexibility in how they utilize AI. In addition, OpenAI has released an updated o1 System Card, emphasizing safety and robustness. This version incorporates insights from red teaming and evaluations, showcasing OpenAI's commitment to improving the reliability and security of its models. The latest big news is the release of Sora: after weeks of leaks, OpenAI announced its new AI video generation tool that allows users to create and edit videos using text, images, and existing video clips. Available through the ChatGPT Plus and Pro plans, Sora lets Plus users generate up to 50 priority videos at 720p resolution, while the Pro plan offers unlimited generations at 1080p resolution. As we follow these developments, it's clear that OpenAI is focused on making AI more accessible while prioritizing ethical considerations and responsible usage. What features or improvements are you most looking forward to from OpenAI's latest announcements?

  • Cerebras Systems has launched Llama 3.3 70B on its inference platform, achieving remarkable performance metrics that set a new standard in AI. The model delivers accuracy comparable to much larger 405B-class models and now runs at a staggering 2,100 tokens per second, making it the fastest inference solution available. In comparison, Llama 3.1 405B runs at only 969 tokens per second on the same platform, so the smaller model more than doubles throughput. This performance positions Cerebras as the world's fastest inference provider, outperforming competitors like Groq and Fireworks by over 70x. The Llama 3.3 model excels in various applications, including code generation, summarization, and agentic tasks. With its ability to deliver instant responses, it is poised to enhance productivity across numerous AI projects. At Linkt.ai, we see significant potential in leveraging Llama 3.3 for real-time applications that require high throughput and low latency. Its capabilities can transform how developers approach AI tasks, enabling faster and more efficient workflows. As we explore the implications of this advancement, we invite you to consider: How do you plan to integrate high-speed inference models like Llama 3.3 into your projects?
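    To put the throughput numbers in perspective, here is a small sketch (our own; the 500-token response length is an illustrative assumption) converting tokens per second into per-response latency:

    ```python
    # Converting decode throughput into end-to-end generation time for a
    # single response. RESPONSE_TOKENS is our own illustrative assumption.
    def latency_seconds(tokens: int, tokens_per_second: float) -> float:
        """Time to generate `tokens` at a steady decode rate."""
        return tokens / tokens_per_second

    RESPONSE_TOKENS = 500  # assumed typical response length
    for name, tps in [("Llama 3.3 70B @ 2,100 tok/s", 2100),
                      ("Llama 3.1 405B @ 969 tok/s", 969)]:
        print(f"{name}: {latency_seconds(RESPONSE_TOKENS, tps):.2f} s")
    ```

    At these rates a full response comes back in well under a second, which is what makes real-time, interactive use cases practical.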

  • IBM has launched Granite 3.0, the latest generation of its AI language models designed for enterprise applications. These models are open-sourced and optimized for a wide range of tasks, including cybersecurity, text summarization, and content generation. Granite 3.0 includes base and instruction-tuned models that excel in agentic workflows and retrieval-augmented generation (RAG). The new models support 12 languages and 116 programming languages, making them versatile for various applications. With parameters ranging from sub-billion to 34 billion, Granite outperforms comparable models across numerous enterprise tasks. One standout feature is Granite Guardian, which ensures data security by mitigating risks across user prompts and model responses. This model has achieved top performance in over 15 safety benchmarks, making it a robust choice for organizations prioritizing responsible AI. As we continue to explore the capabilities of Granite 3.0, we see it as a significant asset for startups and enterprises aiming to leverage AI responsibly while driving innovation. What specific applications do you envision for Granite 3.0 in your projects? 🔗 Read more about Granite 3.0 here: https://lnkd.in/dCTkag3Y


Similar pages

Browse jobs