Sakana AI reposted
Sakana AI is hiring! If you want to help drive the next advances in evolutionary computation and foundation models, see our job openings: sakana.ai/careers
We are building a world class AI R&D company in Tokyo. We want to develop AI solutions for Japan’s needs, and democratize AI in Japan.
Tokyo, JP
There is no single winner in AI. A new current is emerging in the race to develop AI models. The arrival of DeepSeek challenged the primacy of ever-larger models and highlighted the importance of efficiency in AI development. How will small, efficient AI models change our daily lives? David Ha, CEO of Sakana AI, discusses the outlook for the development race, the collaboration with NVIDIA, and the possibilities for Japan. (TBS x Bloomberg, streamed February 3, 2025) https://lnkd.in/gNDv6Zgx
Our paper on TAID, a new knowledge distillation method, was selected as a Spotlight (top 5%) at #ICLR2025. ✨ https://lnkd.in/g9p3wTcd
“TAID: Temporally Adaptive Interpolated Distillation for Efficient Knowledge Transfer in Language Models” has been accepted at #ICLR2025 as a Spotlight Paper ✨

Blog post (English): https://sakana.ai/taid/
Paper (OpenReview): https://lnkd.in/g9p3wTcd

A key problem with the standard distillation approach is the mismatch between student-teacher distributions. TAID is a new knowledge distillation approach that dynamically interpolates student and teacher distributions through an adaptive intermediate distribution, gradually shifting from the student’s initial distribution towards the teacher’s distribution. We show that TAID can distill LLMs and VLMs into state-of-the-art small models that run locally on mobile devices.
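The interpolation idea described in the post can be sketched as follows. This is an illustrative simplification, not the paper's exact objective: here the intermediate distribution is assumed to be a simple convex combination of the student and teacher output distributions, with the interpolation parameter `t` ramped from 0 to 1 over training; TAID's actual adaptive schedule and formulation are defined in the paper, and the function name `taid_style_loss` is hypothetical.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kl(p, q, eps=1e-12):
    # KL divergence KL(p || q) between two probability vectors.
    return float(np.sum(p * (np.log(p + eps) - np.log(q + eps))))

def taid_style_loss(student_logits, teacher_logits, t):
    """Illustrative interpolated-distillation loss.

    t = 0: target is the student's own distribution (loss is zero).
    t = 1: target is the full teacher distribution (standard distillation).
    In between, the target is an intermediate mixture, so the student is
    pulled toward the teacher gradually rather than all at once.
    """
    q_student = softmax(student_logits)
    p_teacher = softmax(teacher_logits)
    p_intermediate = (1.0 - t) * q_student + t * p_teacher
    return kl(p_intermediate, q_student)

# Toy example: one token position with a 3-word vocabulary.
student = np.array([2.0, 0.5, -1.0])
teacher = np.array([0.1, 1.5, 0.3])
for t in (0.0, 0.5, 1.0):
    print(f"t={t}: loss={taid_style_loss(student, teacher, t):.4f}")
```

In a real training loop the student distribution inside the mixture target would be detached from the gradient, and `t` would increase with the training step, so early updates chase a nearby target and later updates chase the teacher.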
Sakana AI has become a supporting member of a new partnership to promote public-interest AI, announced at the AI Action Summit in Paris. We will continue to pursue open-source-based development. PR: https://lnkd.in/eNwUxkPU #AIActionSummit
Ren Ito, COO of Sakana AI, will be speaking with LinkedIn co-founder Reid Hoffman and others in Paris at the #AIActionSummit. Tune in live on Feb 10 at 2 PM CET (10 PM JST on Monday, Feb 10) using the link: https://lnkd.in/djQMDFX8
Sakana AI reposted
David Ha, CEO of Sakana AI, appeared on Bloomberg TV. "Today's LLMs are like mainframe computers." The LLMs of the future will be a world of diverse open-source models built by many players, including DeepSeek, Meta's Llama, and Sakana AI. From mainframes to PCs to smartphones.
“Today’s LLMs are the ‘Mainframe Computers’ of our generation”

I was on Bloomberg News today discussing Sakana AI, and to share my view that today’s LLMs are our generation’s “Mainframe Computers”. We are still in the very early stages of AI, and it is inevitable, due to market competition and global innovation (especially from those innovating under resource constraints), that this technology will become a million times more efficient.

There is a narrative established in Silicon Valley that AI is a “Winner Takes All” technology, and that scaling up existing models and consuming ever greater resources will require (and even justify) the largest investments of our generation, in order to “win” the AI race. In contrast, I believe that AI is not a “winner takes all” technology. LLMs will be commoditized, become vastly more efficient, and be made widely available in all countries. Ultimately, there will be thousands, if not millions, of AI models used by everyone.

Just as early clunky mainframe computers evolved into modern computing, how we use AI in a few years (or even a year from now) will look very different from today’s ‘clunky’ LLMs.
Sakana AI on AI Development Outlook: David Ha, CEO and Co-Founder of Sakana AI, discusses his outlook for the global AI landscape, saying that the emergence of models like China's DeepSeek only fuels more demand for GPUs from Nvidia, and that eventually more alternative AI models and tools will be produced. (Source: Bloomberg News) https://lnkd.in/giujiwaJ