AI-Powered news roundup: Edition 18

We’ve surpassed 5,000 subscribers! We appreciate your ongoing support!

Our bi-weekly AI news roundup is designed to keep you informed on the latest (and most important) developments, all in under 5 minutes.


In this edition:

  1. OpenAI’s 12 Days of AI: A holiday sprint of tech announcements
  2. Google introduces Android XR
  3. Gemini introduces Deep Research and experimental 2.0 Flash model
  4. GitHub makes Copilot free for casual users
  5. Grammarly acquires Coda to build an AI productivity platform
  6. Google enhances Gemini Code Assist with third-party tools
  7. YouTube introduces creator controls for AI model training
  8. Siili releases AI-powered productivity whitepaper


1. OpenAI’s 12 Days of AI: A holiday sprint of tech announcements

Source: OpenAI

In a series of announcements dubbed "12 Days of OpenAI," the company unveiled new products and features each weekday from December 5 to December 20. From advanced multimodal capabilities to new developer tools, OpenAI revealed its plans for 2025.

This aggressive release schedule reflects a strategic response to Google's competitive pace, signaling a heated rivalry in the AI space. It also underscores OpenAI’s focus on building versatile multimodal systems and empowering developers with customizable tools. The sprint positions OpenAI for a promising 2025, where generative AI is expected to play a pivotal role across industries.

Here's a recap of the announcements:

Day 1: December 5 OpenAI launched the full version of the o1 model, making it available for ChatGPT Plus and Team users. The o1 model introduced improved vision analysis, API support for structured outputs, and reduced error rates in complex queries. A new $200/month ChatGPT Pro tier was also unveiled, offering enhanced access to o1 and GPT-4o for power users.


Day 2: December 6 Reinforcement Fine-Tuning (RFT) was introduced as a novel approach to model customization. It allows developers to refine "o-series" models using iterative feedback, with applications already tested in healthcare and legal fields. Limited access to RFT is available for select research programs, with a broader rollout planned for 2025.


Day 3: December 9 The Sora text-to-video model moved from research to production, now available through sora.com for Plus and Pro subscribers. The release marks OpenAI’s official entry into video synthesis, with faster processing and expanded subscription tiers.


Day 4: December 10 OpenAI made Canvas, a tool for extended writing and coding projects, widely available. Now integrated with GPT-4o, Canvas supports Python code execution, custom GPTs, and a change-tracking feature. It’s accessible on web and desktop platforms, with future updates planned.


Day 5: December 11 ChatGPT gained integration with Apple Intelligence, allowing users to access its features across iOS, iPadOS, and macOS. The feature works on newer Apple devices and respects Apple’s privacy frameworks.


Day 6: December 12 OpenAI enhanced ChatGPT’s voice capabilities, adding support for video calling with screen sharing and introducing a seasonal Santa Claus voice mode. These updates are available for Plus and Pro users, with broader rollout planned for January 2025.


Day 7: December 13 The Projects feature debuted, enabling users to group conversations, files, and custom instructions into centralized workspaces. Initially available to paid subscribers, Projects will expand in 2025 to include cloud storage integrations and more file types.


Day 8: December 16 Search capabilities in ChatGPT were expanded to free users, with improved speed and mobile optimization. A new maps interface and voice-search integration were also introduced.


Day 9: December 17 The o1 model became available via API, featuring vision processing, function calling, and developer messages. Pricing adjustments and new SDKs for Go and Java were also announced, broadening OpenAI’s developer toolkit.
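For developers, the headline change is the "developer" message role, which steers o-series models in place of the classic "system" role, alongside function calling. As a minimal sketch of what such a request payload might look like (the tool schema here is hypothetical, and model names and fields should be checked against OpenAI's current API reference), the payload is only constructed, not sent:

```python
# Sketch of a chat request payload for the o1 API, illustrating the
# "developer" message role and a function-calling tool definition.
# Sending it would require the official OpenAI SDK and an API key;
# here we only build the dict.

def build_o1_request(user_prompt: str) -> dict:
    return {
        "model": "o1",  # o1 became API-accessible on Day 9
        "messages": [
            # Developer messages steer o-series models, taking the
            # place of the classic "system" role.
            {"role": "developer", "content": "Answer concisely."},
            {"role": "user", "content": user_prompt},
        ],
        "tools": [
            {
                "type": "function",
                "function": {
                    # Hypothetical tool, for illustration only.
                    "name": "get_weather",
                    "description": "Look up current weather for a city.",
                    "parameters": {
                        "type": "object",
                        "properties": {"city": {"type": "string"}},
                        "required": ["city"],
                    },
                },
            }
        ],
    }

request = build_o1_request("What's the weather in Helsinki?")
print(request["messages"][0]["role"])  # developer
```

With the official SDK, a payload like this would be passed to the chat completions endpoint; the model can then either answer directly or return a call to the declared tool.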


Day 10: December 18 ChatGPT launched voice and messaging access through a toll-free number and WhatsApp, targeting users with limited internet access. The features provide experimental interfaces for engaging with AI.


Day 11: December 19 Desktop app integrations expanded, adding compatibility with popular coding environments like JetBrains IDEs, VS Code, and productivity tools like Apple Notes and Notion. Advanced Voice Mode was also extended to desktop applications.


Day 12: December 20 OpenAI previewed o3 and o3-mini, its next-generation simulated reasoning models, showcasing record-setting performance in mathematical and programming benchmarks. Applications for testing the models are now open to researchers.


2. Google introduces Android XR

Source: Google Blog

Google has unveiled Android XR, a next-generation operating system designed for extended reality (XR) devices such as headsets and AR glasses. Developed in collaboration with Samsung, Android XR integrates AI, AR, and VR technologies to create natural and immersive user experiences.

Video source: Google Blog

The platform launches with Project Moohan, Samsung's first Android XR-powered headset set for release next year. Users will enjoy features like Gemini, Google’s AI assistant, enabling seamless interactions and contextual assistance. Android XR supports immersive apps like Google Maps in 3D, Google Photos, and multitasking with virtual screens on Chrome. Popular mobile apps from Google Play will function out of the box, with more XR-specific apps arriving soon.

Future glasses will offer stylish, always-on assistance, delivering instant translations, directions, and message summaries within the user’s line of sight. Google plans real-world testing of prototype glasses to refine the experience while prioritizing privacy.

This initiative builds on Android's legacy of scalability, aiming to create a robust XR ecosystem for developers and users alike.


3. Gemini introduces Deep Research and experimental 2.0 Flash model

Source: Google Blog

Google has unveiled new updates for its Gemini AI assistant, including Deep Research, a feature that streamlines complex research tasks, and an experimental preview of the Gemini 2.0 Flash model.

Deep Research leverages advanced AI to generate comprehensive reports on complex topics by autonomously analyzing relevant web information. Users can guide the process by approving a multi-step research plan, and the results are organized with links to sources for further exploration. This tool is ideal for students, entrepreneurs, and marketers needing detailed, time-saving insights. Deep Research is now available for Gemini Advanced users on desktop and mobile web, with mobile app support coming in 2025.

The Gemini 2.0 Flash Experimental model offers improved performance and faster responses, optimized for chat interactions. While still in an early preview phase, it showcases Gemini's evolving capabilities. Both updates mark significant steps toward making Gemini a more agentic and efficient AI assistant.


4. GitHub makes Copilot free for casual users

Source: GitHub Blog

GitHub has announced a free version of its AI-powered code assistant, Copilot, now available by default in Microsoft’s VS Code editor. Previously, Copilot was only free for students, teachers, and open source maintainers, while others paid a minimum of $10/month.

The free version targets occasional users, offering up to 2,000 code completions per month and 50 chat messages, with access to AI models like Anthropic’s Claude 3.5 Sonnet and OpenAI’s GPT-4o. Paid plans, by contrast, include additional models such as OpenAI’s o1 and Google’s Gemini 1.5 Pro. Despite these limitations, free-tier users gain full access to Copilot extensions and skills across editors like VS Code, Visual Studio, JetBrains, and GitHub.com.

GitHub CEO Thomas Dohmke highlighted this move as part of the company’s mission to lower barriers to entry for aspiring developers, particularly in countries where subscription costs are prohibitive. By adopting a freemium model, GitHub aims to expand Copilot’s reach and further its goal of enabling one billion developers worldwide.

The decision also comes amid increasing competition in the AI coding space, with rivals like Tabnine and AWS offering free tiers. By integrating Copilot with VS Code and offering free access, GitHub strengthens its position as the leading AI tool for developers while staying true to its roots of accessibility and innovation.


5. Grammarly acquires Coda to build an AI productivity platform

Source: Grammarly

Grammarly announced its acquisition of productivity startup Coda, with Coda’s CEO Shishir Mehrotra set to become Grammarly’s new CEO. Current CEO Rahul Roy-Chowdhury will step down and take on an advisory role. Financial terms of the deal were not disclosed.

The acquisition aims to transform Grammarly’s AI assistant into a comprehensive AI productivity platform. By integrating Coda’s tools, Grammarly users will gain access to generative AI chat, enhanced writing features, and a robust productivity suite designed to streamline workflows. Additionally, Coda’s flagship product, Coda Docs, will integrate Grammarly Assistant to offer smarter writing and productivity solutions.

Mehrotra envisions an AI assistant capable of connecting seamlessly to tools like email, project trackers, and CRMs, leveraging company knowledge to redefine productivity. The merger will unite both companies’ strengths, offering features like generative AI chat, company knowledge integration, and task-specific agents.

With 40 million active users and a $13 billion valuation, Grammarly is poised to compete in the growing AI productivity market, combining its expertise in writing tools with Coda’s innovative approaches to collaborative work.



6. Google enhances Gemini Code Assist with third-party tools

Source: TechCrunch

Google has introduced third-party tool support for its enterprise-focused AI coding service, Gemini Code Assist. This update, currently in private preview, enables developers to integrate real-time data and external application insights directly into their coding environment.


Image credits: Google

Code Assist, launched in April as a rebrand of Duet AI, leverages Google’s Gemini models to reason over and modify extensive codebases. With the new tools, developers can retrieve information or take actions beyond the coding environment. For instance, they can summarize Jira comments, track changes in Git, or monitor live site issues via Sentry.

At launch, integrations include tools from GitLab, GitHub, Sentry.io, Atlassian Rovo, Snyk, and Google Docs. For now, only Google Cloud partners can create tools for the platform, with more options potentially opening up in the future.

This feature aims to reduce “context switching” for developers by consolidating productivity, observability, and security solutions within the coding workspace. While Gemini Code Assist competes with GitHub’s Copilot Enterprise, Google highlights its edge in supporting on-premises codebases and providing customized suggestions based on private repositories.

AI coding tools continue to gain traction, with developers increasingly adopting solutions like Code Assist and Copilot despite ongoing concerns about security and copyright implications.



7. YouTube introduces creator controls for AI model training

Source: TechCrunch

YouTube has launched a new feature allowing creators to control how third parties use their content to train AI models. Accessible via the YouTube Studio dashboard, creators can now opt in to permit specific companies, such as OpenAI, Meta, Nvidia, or Adobe, to train their generative AI models on their videos.

Key features:

  • Creators can choose from a list of 18 approved companies or grant access to "all third-party companies."
  • By default, third-party AI training is not permitted, so any unauthorized use is clearly against creators’ stated wishes.
  • Eligible users include creators with administrator roles in YouTube Studio Content Manager.

The feature aims to provide transparency and control, addressing creators’ concerns over the unconsented use of their content for AI training. While the setting doesn’t impact Google’s ability to use YouTube content under existing agreements, it marks a step toward fairer AI practices. YouTube also plans to explore options for allowing direct downloads for authorized AI partners in the future.

This update follows YouTube’s earlier announcement of AI detection tools to safeguard creators’ likenesses and copyrights, broadening its approach to ethical AI practices.

Notifications about the new feature will roll out globally over the next few days. Meanwhile, Google’s AI research lab DeepMind has debuted Veo 2, a video-generating AI model poised to compete with OpenAI’s Sora.


8. Siili releases AI-powered productivity whitepaper

Source: Siili Research

Okay, we’ll admit it—this isn’t exactly breaking news, but it’s an exciting development for us! Siili’s internal research on developer productivity has confirmed what many in the industry suspect: AI-powered tools are revolutionizing how we work.

While not every task benefits from automation, the data shows that AI tools help our developers work faster and smarter, freeing up time to focus on creative problem-solving and innovation.


Curious to learn more? We compiled the findings into a comprehensive whitepaper. Download it for FREE and see how AI tools might transform your team too!


Are you as passionate about AI as we are? Explore open positions at Siili and work on exciting, cutting-edge projects. Join us and take the next step in your career!


Such a packed edition of AI updates—perfect for wrapping up the year!
