VCI Global, through its subsidiary V-Gallant Sdn. Bhd., partners with Hexatoff Group for a USD 24M data center project. With NVIDIA H200 Tensor Core GPUs at its core, this project positions Malaysia as a leading hub for AI innovation in Southeast Asia! Read more on VCI Global's website: https://lnkd.in/gHnZRhHg
VCI Global Limited (VCIG)’s Post
-
Top articles in July on eeNews Europe ... https://lnkd.in/dXvB8wiA AMD bought Europe’s largest private AI lab, Silo.ai ... while the acquisition of UK AI chip designer @Graphcore was confirmed by SoftBank. An Arm microcontroller was also at the heart of a system-in-package for #edge #AI alongside a sparse AI accelerator from Femtosense ... while MikroElektronika provided a snappy path for its Click boards ... ‘No Chips, No Glory’ aims to boost the supply chain with a funding bid under the EU Chips Act ... the news that Microchip Technology Inc. has expanded into the 64-bit market is a key article this month ... and much more
-
Exciting news from #CES2025! Alif Semiconductor has unveiled the second generation of Ensemble microcontrollers: E4, E6, and E8. These are the world's first microcontrollers to introduce Ethos-U85 NPUs from Arm! Check it out here: https://okt.to/r7AQwz
-
UCIe 64Gbps will not only power the next generation of chiplets, but also custom HBM4 implementations, to unleash unprecedented bandwidth that is paramount for AI hardware systems. #UCIe #customHBM #AI
Alphawave Semi proudly introduces the industry’s first 64 Gbps Universal Chiplet Interconnect Express (UCIe™) Die-to-Die (D2D) IP Subsystem, delivering unprecedented chiplet interconnect data rates and setting a new standard for ultra-high-performance D2D connectivity solutions in the industry. The third-generation, 64 Gbps IP subsystem builds on the success of the recent Gen2 36 Gbps IP subsystem and the silicon-proven Gen1 24 Gbps subsystem, which is available in TSMC’s 3nm technology for both standard and advanced packaging. These silicon-proven successes and tapeout milestones pave the way for Alphawave Semi’s Gen3 UCIe™ IP subsystem offering. To read the full announcement, visit: https://lnkd.in/ehry28y6 #AlphawaveSemi #ConnectivityIP #ConnectivitySolutions #Chiplets #AI #Connectivity #CustomSilicon #Silicon #HighSpeedAIConnections #UCIe #D2D #IPSubsystem
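For a sense of what 64 Gbps per lane implies, here is a rough back-of-the-envelope sketch. The lane counts are the per-module widths defined by the UCIe specification (x16 for standard packaging, x64 for advanced packaging), not figures from Alphawave's announcement, and the result is raw lane rate, ignoring protocol and encoding overhead.

```python
# Rough per-module UCIe bandwidth estimate (illustrative only).

LANE_RATE_GBPS = 64  # Gen3 per-lane rate from the announcement


def module_bandwidth_gbs(lanes, lane_rate_gbps=LANE_RATE_GBPS):
    """Raw one-direction bandwidth of one UCIe module, in GB/s."""
    return lanes * lane_rate_gbps / 8  # convert bits/s to bytes/s


standard = module_bandwidth_gbs(16)  # standard-package module (x16): 128 GB/s
advanced = module_bandwidth_gbs(64)  # advanced-package module (x64): 512 GB/s
print(standard, advanced)
```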
-
TechInsights’ latest analysis uncovers the first commercial use of Samsung Semiconductor’s HBM3 memory, featured in AMD’s MI300X AI accelerator. This breakthrough highlights advancements in AI processors, memory technology, and advanced packaging. As the demand for high-performance AI solutions grows, Samsung’s HBM3 delivers the speed, efficiency, and scalability needed for next-gen workloads. Learn more about this milestone discovery and its implications for the semiconductor industry: https://bit.ly/4jpHmi8 #TechInsights #AIAccelerators #HBM3 #SemiconductorIndustry #AdvancedPackaging
-
Practical Use Cases: DGX SuperPOD and B200 Systems
The DGX SuperPOD and DGX B200 systems fully leverage these features to deliver world-class AI performance. Each system is built with eight Blackwell GPUs connected by fifth-generation NVIDIA NVLink and fourth-generation NVSwitch for intra-system communication, along with Quantum-X800 InfiniBand for inter-system communication. These connections allow the system to deliver 144 petaflops of AI performance with 64 TB/s of memory bandwidth, enabling efficient training and inference of trillion-parameter models. NVIDIA's SHARPv4 technology takes this even further by enabling In-Network Computing: performing collective operations such as reductions (e.g., summing gradients in distributed training) directly on the network switches, reducing the time spent transferring intermediate data between nodes.
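As a minimal illustration of the reduction SHARP offloads to the switch (a plain-Python sketch, not NVIDIA's API): each node contributes a gradient vector, the network sums them element-wise, and every node receives the same total, so no node has to gather all its peers' data itself.

```python
# Sketch of an allreduce-style collective: element-wise sum across nodes,
# with the result broadcast back to every node. In SHARP, this summation
# happens inside the network switches rather than on the endpoints.


def sharp_style_allreduce(gradients_per_node):
    """Sum gradient vectors element-wise; each node gets the total."""
    total = [sum(vals) for vals in zip(*gradients_per_node)]
    return [list(total) for _ in gradients_per_node]


# Three nodes, each holding a 2-element gradient:
grads = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
reduced = sharp_style_allreduce(grads)
print(reduced)  # every node now holds [9.0, 12.0]
```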
-
Faraday's Wenew Wang, Director of Business Development, presented "System-Based and 3D-IC Design Flow Achieve AI Chiplet ASIC Development" at the Samsung Foundry Forum & SAFE Forum (Samsung Advanced Foundry Ecosystem). You can use the message button on this page to request a copy of the presentation. #ai #chiplet #aiasic #3dic #multichiplet
-
SK hynix prepares 16-layer #HBM3e #DRAM for 2025
SK Hynix has announced that its 16-layer HBM3e #DRAM will be available in 2025, with samples expected in 1H25. The company’s CEO, Kwak Noh-Jung, announced the development of the 16-layer HBM3e memory with a capacity of 48 Gbytes at the SK Group’s AI Summit in Seoul, Korea. “We stacked 16 #DRAM chips to realize 48 Gbyte capacity and applied advanced MR-MUF [mass reflow-molded underfill] #technology proven for mass production. In addition, we are developing hybrid bonding #technology as a backup process,” said Kwak in a keynote. “The 16-layer HBM3E can improve #AI learning performance and inference performance by up to 18 and 32 percent, respectively, compared to the 12-layer HBM3E,” he added. “In the long term, we will commercialize custom #HBM and #CXL optimized for AI to become a full-stack AI #memory provider,” he concluded. Kwak added that SK Hynix is working with the world’s leading foundry, TSMC, to improve the performance of the base die for next-generation standard HBM4. This is the logic die upon which the stack of DRAMs is mounted; the goal is to optimize the die to reduce power consumption. “With our ‘one-team’ partnership, we will deliver the most competitive products and further solidify our position as the HBM leader,” Kwak said. https://lnkd.in/ehq784Ys #embeddedsystem #embedded #IIOT #SSD #storage #memory #NAND #flashmemory #electronics #semiconductor #technology #DRAM #DDR #datacenter #data
-
Kwak No-jeong, CEO of SK Hynix, unveils world’s first 16-layer HBM3E
Kwak No-jeong, CEO and President of SK Hynix, unveiled the development of the 16-layer HBM3E, a world first. It is the highest-end product yet, surpassing the 12-layer HBM3E that currently leads HBM (High Bandwidth Memory) performance. At the SK AI Summit 2024, held at COEX in Samseong-dong, Seoul on the 4th, Kwak gave a keynote on the topic 'A new journey of next-generation AI memory: beyond hardware to everyday life'. There he officially announced the development of the 16-layer HBM3E, which at 48 GB (gigabytes) offers the world's first and largest HBM capacity to date; the existing 12-layer HBM3E held 36 GB, from 12 stacked 3 GB DRAM chips. Kwak also traced how the concept of memory has changed over time and introduced the SK Hynix technologies and products leading the AI era. "We will work closely with customers, partners, and stakeholders to grow into a 'Full Stack AI Memory Provider'," he said, presenting his vision for the future. A full-stack AI memory provider means that SK Hynix will have an AI memory product lineup across all areas of DRAM and NAND. Kwak announced that SK Hynix is preparing various 'World First' products that it is developing and mass-producing ahead of the industry, is planning 'Beyond Best' products with the highest competitiveness, and will introduce 'Optimal Innovation' products for system optimization in the AI era. Regarding the World First products, he explained that the 16-layer market will open in earnest starting with HBM4; in preparation, SK Hynix is developing the 48 GB 16-layer HBM3E to secure technological stability and plans to provide samples to customers early next year.
To produce the 16-layer HBM3E, SK Hynix plans to utilize the Advanced MR-MUF process, which has proven its mass-production competitiveness in 12-layer products, and is developing hybrid bonding technology as a backup process. According to SK Hynix's internal analysis, the 16-layer HBM3E improves performance by 18% in training and 32% in inference compared to the 12-layer product. https://lnkd.in/gpGGDGPh #SKHynix #NVIDIA #HBM3E #HBM #AISummit
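The capacity figures quoted in both SK Hynix posts follow directly from the per-die capacity and layer count; a quick sanity check of that arithmetic:

```python
# Stack capacity = number of stacked DRAM dies x per-die capacity.
DIE_GB = 3  # each stacked DRAM die is 3 GB (24 Gbit), per the posts


def stack_capacity_gb(layers, die_gb=DIE_GB):
    """Total HBM stack capacity in GB for a given layer count."""
    return layers * die_gb


print(stack_capacity_gb(12))  # 36 GB: the existing 12-layer HBM3E
print(stack_capacity_gb(16))  # 48 GB: the newly announced 16-layer part
```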
-
🔍 SEMICON Taiwan 2024: Shifting Focus to Memory & Network Chipmakers 💾📡 At #SEMICON #Taiwan 2024, the spotlight is on high bandwidth memory (HBM) 🧠 and high-speed transmission technologies 🚀, reflecting a growing industry focus on overcoming bottlenecks in AI computing. Unlike previous years dominated by major chipmakers like Nvidia and AMD, this year's event highlights the critical role of memory and high-speed interfaces in advancing cloud-based generative AI. With companies like Samsung, SK Hynix, and Micron leading the charge in HBM production, and firms like Nvidia, Intel, and MediaTek developing cutting-edge high-speed interfaces, the future of AI is all about connectivity and memory. Even emerging technologies like silicon photonics are set to play a crucial role 🔦. #SEMICONTaiwan2024 #AI 🤖 #TechInnovation 🌟 #Memory 💾 #HighSpeedTransmission 🚀 #Semiconductors 🧩 #CloudAI ☁️ #FutureOfTech 🔮 #HBM #SiliconPhotonics #TechTrends https://lnkd.in/dsxQ6dg8
At SEMICON Taiwan 2024, the AI focus turns to memory and network chipmakers
-
The 2024 Taiwan Edge AI Day aimed to promote localized cross-industry cooperation between Taiwan's and Japan's Edge AI companies and to welcome the Edge AI era together with the two countries' ICT and semiconductor industries. The event's invited speakers included Albert Liu, Founder & CEO of Kneron; KS Pua, Founder & CEO of PHISON Electronics (群聯電子); and Kenji Tsuda, Editor-in-chief of News & Chips, who shared their insights on the latest trends in semiconductors and Edge AI. Read the full press release here: >>> https://lnkd.in/gq7GRyrX