The field of #graphene technology has witnessed a significant breakthrough: a remarkable paper demonstrates the ability to distinguish between different types of #milk (including pristine versus adulterated milk), different cola drinks, types of #coffee, and fresh versus stale #food, using an off-the-shelf #chip together with Machine Learning (#ML) and Artificial Intelligence (#AI). Strikingly, these feats were achieved with devices that are not functionalized, which opens the door to broad #applications and accessibility. https://lnkd.in/dkU5gkdr
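The paper's own pipeline isn't reproduced here, but the general recipe of feeding raw, non-functionalized sensor responses to a standard classifier can be sketched as follows. Everything below (feature count, class offsets, noise level, the nearest-centroid baseline) is an illustrative assumption, not the authors' setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for chip readouts: each liquid class shifts the
# sensor response by a characteristic offset (purely illustrative numbers).
n_per_class, n_features = 100, 16
offsets = [0.0, 0.5, 1.0]  # e.g. three milk types / adulteration levels
X = np.vstack([rng.normal(off, 0.3, (n_per_class, n_features)) for off in offsets])
y = np.repeat(np.arange(len(offsets)), n_per_class)

# Shuffle and split 75/25 into train and test sets.
idx = rng.permutation(len(y))
split = int(0.75 * len(y))
tr, te = idx[:split], idx[split:]

# Nearest-centroid classifier: the simplest supervised baseline.
centroids = np.stack([X[tr][y[tr] == c].mean(axis=0) for c in range(len(offsets))])
dists = np.linalg.norm(X[te][:, None, :] - centroids[None, :, :], axis=2)
pred = dists.argmin(axis=1)
accuracy = (pred == y[te]).mean()
```

The point of the sketch is that once the chip produces a response vector per sample, the ML side is routine supervised classification.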
Graphenea’s Post
-
#ArtificialIntelligence is a game changer not only for the good, but also in areas where hype isn't welcome. I am referring to #sustainability, a topic we should care about, especially considering the latest research published in #Nature, "E-waste Challenges of Generative Artificial Intelligence" by Peng Wang, Ling-Yu Zhang, and Wei-Qiang Chen. This is the final version of a preprint released in March via #ResearchGate (https://lnkd.in/dspdGFVA): the final release of August 2024 can be downloaded after login, while the first release is directly accessible (I have linked both). 🌐 Source: https://lnkd.in/d3TThKgv The authors focus on the growing problem of e-waste generated by the development and use of #ArtificialIntelligence, especially #GenerativeAI models. Even though it is a small fraction of total global e-waste, #AI could contribute up to 5 million metric tons by 2030. That amount includes discarded hardware such as servers, GPUs, and CPUs used in data centers to train and run AI models, which contain #metals like gold and copper, but also #hazardous materials like lead and mercury, posing risks to human health and the environment if not disposed of properly. The short #lifespan of computing equipment (2-5 years) is one of the drivers. 🔴 Of course, AI is not the main or primary source of e-waste; it is only one more electronic waste contributor, and there is still a lot more to talk about.
These estimates, as well as the model developed to quantify the e-waste coming from data centers powering generative AI, might even be #underestimating the actual e-waste generated, for several reasons: ➡️ The AI field is evolving quickly, with new models and technologies emerging constantly, leading to faster hardware #obsolescence; ➡️ #Restrictions on semiconductor imports can force companies to replace hardware more frequently, further contributing to e-waste; ➡️ The study evaluates #LargeLanguageModels only, while other types of AI also contribute to e-waste. 🌏 Possible mitigations/solutions? 1️⃣ First, keep #people aware of what's happening and how each of us can help on a daily basis with small actions; 2️⃣ Use #equipment for longer periods... can we survive without the fashion of changing our devices every year? 3️⃣ Choose #refurbished or second-hand products, or create hardware that is easier to #recycle and upgrade; 4️⃣ Implement #policies that promote proper e-waste recycling and disposal, and hold companies responsible for the environmental impact of their AI products. 💡 We have only one life, but also only one Earth. #SustainableAI #ResponsibleAI
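As a back-of-envelope illustration of why the 2-5 year lifespan matters so much: in steady state, the mass of hardware retired per year scales inversely with lifespan. All numbers below are hypothetical placeholders, NOT the Nature paper's model or estimates:

```python
# Toy steady-state model: hardware retired per year = installed base / lifespan.
# Every figure here is a hypothetical placeholder for illustration only.

def annual_ewaste_tonnes(installed_units: float,
                         unit_mass_kg: float,
                         lifespan_years: float) -> float:
    """Mass of hardware retired per year, assuming uniform replacement."""
    return installed_units * unit_mass_kg / lifespan_years / 1000.0

installed = 5_000_000   # hypothetical AI-server installed base
mass_kg = 30.0          # hypothetical mass per server, in kg

short_life = annual_ewaste_tonnes(installed, mass_kg, 2.0)  # 2-year refresh
long_life = annual_ewaste_tonnes(installed, mass_kg, 5.0)   # 5-year refresh
# Extending lifespan from 2 to 5 years cuts annual e-waste by a factor of 2.5.
```

Even this toy model shows why mitigation 2️⃣ (using equipment longer) has leverage: the retirement rate drops linearly with every extra year of service.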
-
SandboxAQ and NVIDIA Forge Partnership to Transform AI-Driven Chemistry SandboxAQ has announced a groundbreaking advancement in the field of computational chemistry, leveraging NVIDIA’s cutting-edge technology to push the boundaries of what is possible in various sectors, including biopharma, chemicals, materials science, and beyond. By integrating Large Quantitative Models (LQMs) with NVIDIA’s CUDA-accelerated Density Matrix Renormalization Group (DMRG) algorithm, SandboxAQ is setting a new standard for accuracy in Quantitative AI simulations, far exceeding the capabilities of current Large Language Models (LLMs) and other AI frameworks. https://is.gd/sAGC32 #ai #artificialintelligence #llm #machinelearning #sandboxaq #science
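The post doesn't show SandboxAQ's CUDA code, but the numerical kernel that dominates DMRG-style algorithms (and is what GPU acceleration targets) is a truncated SVD of a reshaped wavefunction tensor. A generic NumPy sketch of that compression step, with an illustrative random low-rank tensor standing in for a real two-site block:

```python
import numpy as np

def truncated_split(theta: np.ndarray, max_bond: int):
    """Split a two-site block into left/right factors, keeping at most
    `max_bond` singular values -- the core truncation step in DMRG."""
    u, s, vh = np.linalg.svd(theta, full_matrices=False)
    k = min(max_bond, len(s))
    # Discarded weight: relative norm of the truncated singular values.
    discarded = float((s[k:] ** 2).sum() / (s ** 2).sum())
    return u[:, :k], s[:k], vh[:k, :], discarded

rng = np.random.default_rng(1)
# A random effectively-rank-8 "two-site" block of size 64x64.
theta = rng.normal(size=(64, 8)) @ rng.normal(size=(8, 64))

left, sv, right, err = truncated_split(theta, max_bond=8)
# Keeping 8 singular values of a rank-8 block reconstructs it (near-)exactly.
approx = left @ np.diag(sv) @ right
```

DMRG performs this split millions of times while sweeping across a system, which is why batching these SVDs and contractions on GPUs pays off.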
-
Research Update: Introducing ModernBERT, a Leap in Transformer Efficiency and Performance! Excited to share groundbreaking work on ModernBERT, a state-of-the-art bidirectional encoder that redefines the performance-size tradeoff for retrieval and classification tasks. 🌐 Why ModernBERT? While encoder-only transformers like BERT have been essential for production pipelines, advancements have been limited—until now. ModernBERT integrates modern model optimizations, delivering major Pareto improvements over legacy encoders. Key Highlights: 🔹 Trained on 2 trillion tokens with a native sequence length of 8192 🔹 State-of-the-art results across diverse classification tasks and single/multi-vector retrieval, including domains like code 🔹 Optimized for speed and memory efficiency, making it ideal for inference on common GPUs This collaborative effort across leading organizations like Answer.AI, LightOn, NVIDIA, Johns Hopkins University, and HuggingFace showcases the power of teamwork in advancing AI research. #AI #MachineLearning #ModernBERT #Transformers #DeepLearning #Innovation #Research
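Encoder-only models like ModernBERT power retrieval by embedding queries and documents, then ranking by similarity. A dependency-light sketch of the single-vector retrieval step, with random vectors standing in for real embeddings (no model loading shown; the 768-dimensional embedding size is an illustrative assumption):

```python
import numpy as np

def cosine_rank(query_vec: np.ndarray, doc_vecs: np.ndarray) -> np.ndarray:
    """Return document indices sorted by descending cosine similarity."""
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    return np.argsort(d @ q)[::-1]

rng = np.random.default_rng(0)
docs = rng.normal(size=(5, 768))               # stand-ins for 5 document embeddings
query = docs[3] + 0.01 * rng.normal(size=768)  # a query almost identical to doc 3
ranking = cosine_rank(query, docs)             # doc 3 should rank first
```

In a real pipeline the `docs` and `query` vectors would come from the encoder's pooled outputs; the ranking logic itself stays this simple.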
-
In the field of artificial intelligence, recent research shows a new machine learning algorithm significantly improving computing efficiency. The algorithm, called "QuantumBoost", was developed by NextGen AI, a technology company based in Silicon Valley. According to the research team, QuantumBoost can reduce computing time by up to 50% when processing large-scale data sets, while also significantly improving accuracy. Emily Zhang, chief technology officer of NextGen AI, said: "We introduced ideas from quantum computing into the algorithm's design, and by optimizing the data processing path, the algorithm can extract useful information from massive data more efficiently. This breakthrough will have a profound impact on many fields, such as financial analysis, medical diagnosis, and autonomous driving." The company has begun working with several large technology companies to test QuantumBoost's performance in practical applications. The technology is expected to be officially put into use in the next few months, bringing revolutionary changes to machine learning and artificial intelligence applications. #QuantumBoost: How a new algorithm improves computational efficiency #The Impact of NextGen AI’s QuantumBoost on Big Data #Quantum Computing Breakthroughs in Machine Learning: The Case of QuantumBoost
-
🌍 Exciting Advances in Green AI with Tensor Networks! 🌍 An independent opinion is always valuable for validating the value proposition of any business. Esteemed researchers from TU Delft, in their very recent paper, highlight 𝐭𝐡𝐞 𝐩𝐨𝐭𝐞𝐧𝐭𝐢𝐚𝐥 𝐨𝐟 𝐭𝐞𝐧𝐬𝐨𝐫 𝐧𝐞𝐭𝐰𝐨𝐫𝐤𝐬 (𝐓𝐍𝐬) 𝐭𝐨 𝐫𝐞𝐯𝐨𝐥𝐮𝐭𝐢𝐨𝐧𝐢𝐳𝐞 𝐀𝐈 𝐞𝐟𝐟𝐢𝐜𝐢𝐞𝐧𝐜𝐲 without compromising accuracy (Position: Tensor Networks are a Valuable Asset for Green AI https://lnkd.in/er6-tC_T). At Multiverse Computing, as a leader in the TN field, we're at the forefront of sustainable AI innovation. By drastically reducing computational needs, TNs align perfectly with our mission to deliver state-of-the-art, green AI solutions. In this vein, we're especially proud of 𝐂𝐨𝐦𝐩𝐚𝐜𝐭𝐢𝐟𝐀𝐈, our proprietary tensor network-based LLM compression solution, which maximizes efficiency and sustainability by significantly reducing energy consumption while maintaining top performance. And it's not just us saying this: independent experts agree on its groundbreaking potential. Discover more about CompactifAI in our arXiv paper 📝 https://lnkd.in/ex7_UXJ4 and watch our video 🎥 https://lnkd.in/ePDmeZUX #CompactifAI #GreenAI #SustainableTech #TensorNetworks #AIInnovation #quantum #LLM
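The post doesn't disclose CompactifAI's internals, but the simplest tensor-network-style compression of a weight matrix is a truncated low-rank factorization (a rank-r matrix factorization is the two-node special case of a tensor network). A generic NumPy sketch of the idea — not Multiverse's method, and all sizes are illustrative:

```python
import numpy as np

def low_rank_compress(W: np.ndarray, rank: int):
    """Factor W ~= A @ B with inner dimension `rank` via truncated SVD.
    A rank-r factorization is the simplest (two-node) tensor network."""
    u, s, vh = np.linalg.svd(W, full_matrices=False)
    A = u[:, :rank] * s[:rank]   # absorb singular values into the left factor
    B = vh[:rank, :]
    return A, B

rng = np.random.default_rng(0)
# A 256x256 "weight matrix" with effective rank 64.
W = rng.normal(size=(256, 64)) @ rng.normal(size=(64, 256))

A, B = low_rank_compress(W, rank=64)
original_params = W.size             # 65,536 parameters
compressed_params = A.size + B.size  # 32,768 parameters: 2x fewer
reconstruction_ok = np.allclose(A @ B, W)
```

Real LLM weights are not exactly low-rank, so practical schemes trade a small accuracy loss for much larger compression ratios; the parameter-count arithmetic above is the mechanism behind the energy savings.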
-
🌟 Excited to Share My Research Journey! 🌟 I'm thrilled to announce the publication of my research paper, "A Surrogate Approach to Explainable AI for Predictive Maintenance: Techniques and Applications", in the IEEE 2024 5th International Conference on Circuits, Control, Communication, and Computing (I4C). In this work, we explore how Explainable AI (XAI) techniques such as SHAP, LIME, and visualization methods can enhance the transparency and interpretability of machine learning models in critical applications like manufacturing and aviation. These methods not only improve trust and compliance but also provide actionable insights for predictive maintenance decisions. Here are the links to access the paper: #GoogleScholar: https://lnkd.in/gq5euXJz #IEEEXplore: https://lnkd.in/g2-ePWcH This milestone wouldn't have been possible without the guidance and support of my co-authors: 📌 J R Shruti, Indraneel T, Nithya BL, Dheeraj V. Special thanks to Ramaiah Institute Of Technology for fostering a great research environment. Looking forward to collaborating with industry and academic experts on advancing research in XAI and predictive maintenance! #Research #ExplainableAI #PredictiveMaintenance #MachineLearning #Innovation
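The surrogate approach named in the title can be illustrated generically: fit an interpretable model to a black box's predictions, then read the explanation off the surrogate. A minimal NumPy sketch (a global linear surrogate standing in for, e.g., a decision tree; the black-box function and feature names are made up for illustration, this is not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a complex black-box model: only features 0 and 2 actually
# drive the (hypothetical) remaining-useful-life prediction.
def black_box(X: np.ndarray) -> np.ndarray:
    return 3.0 * X[:, 0] - 2.0 * X[:, 2] + 0.1 * np.tanh(X[:, 1])

X = rng.normal(size=(500, 4))  # 4 hypothetical sensor features
y_bb = black_box(X)            # surrogate is fit to the black box's OUTPUTS

# Global linear surrogate via least squares (design matrix with intercept);
# its coefficients approximate what the black box has learned.
design = np.column_stack([X, np.ones(len(X))])
coefs, *_ = np.linalg.lstsq(design, y_bb, rcond=None)

importance = np.abs(coefs[:4])       # feature-importance proxy
top_feature = int(importance.argmax())  # should recover feature 0
```

SHAP and LIME refine this idea (LIME fits such surrogates locally, per prediction), but the core move — explain the mimic, not the black box — is the same.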
-
Recently, I read a paper that made me wonder about the future of computation, especially in the realm of large language models (LLMs). The paper, titled 'The Era of 1.58-bit LLMs,' introduced a concept that could change how we think about efficiency and scalability in AI. 🔍 Meet BitNet b1.58 – a new approach where every model weight is constrained to just three values: {-1, 0, 1}. This reduces each weight to 1.58 bits, enabling impressive gains in memory and energy efficiency while preserving the performance of traditional full-precision (FP16) models. Imagine this: ⚡ Improved speed and lower resource demands: With significantly reduced latency, memory, and energy consumption, BitNet b1.58 could help scale LLMs to perform better, even on mobile and edge devices. 💡 Adaptability for current and future hardware: This model could support larger batch sizes, and its low-bit structure invites the possibility of custom hardware designed specifically for ultra-efficient AI. 🌍 A step toward sustainable AI: By lowering the computational footprint of LLMs, BitNet b1.58 opens the door to responsible scaling, making powerful models more accessible and environmentally conscious. With developments like these, the future of LLMs looks both powerful and sustainable. Exciting times ahead for AI! Link to the paper: https://lnkd.in/e5i7NBDr #AI #MachineLearning #LLMs #Innovation #FutureOfComputation
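The quantization behind those three values can be sketched in a few lines: scale each weight matrix by its mean absolute value, round, and clip to {-1, 0, 1}. This is a minimal NumPy sketch of the absmean idea described in the paper, not the released implementation, and the toy weight matrix is illustrative:

```python
import numpy as np

def absmean_ternary(W: np.ndarray):
    """Quantize weights to {-1, 0, 1} using a per-tensor absmean scale."""
    scale = np.abs(W).mean() + 1e-8          # epsilon guards the all-zero case
    Wq = np.clip(np.round(W / scale), -1, 1)
    return Wq, scale

rng = np.random.default_rng(0)
W = rng.normal(scale=0.02, size=(256, 256))  # toy full-precision weight matrix
Wq, scale = absmean_ternary(W)

# Every quantized weight is one of exactly three values ...
values = np.unique(Wq)
# ... so each weight carries only log2(3) ~= 1.58 bits of information.
bits_per_weight = np.log2(3)
```

A nice side effect the post alludes to: with weights in {-1, 0, 1}, matrix multiplication reduces to additions, subtractions, and skips, which is where much of the latency and energy saving comes from.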
-
Meta has been sending recipes to a Dutch scaleup called VSParticle for the past few months. These are not food recipes—they’re AI-generated instructions for making new nanoporous materials that could potentially supercharge the green transition. #AI https://lnkd.in/gr9vM5vi
Nanoprinter turns Meta’s AI predictions into potentially game-changing materials
thenextweb.com
-
As my firm and team continue to build Data Science and AI into new products, this political cartoon, seen just before the Fourth of July, truly resonates with the decade's advances in AI: human-like capacities, machine learning, and comprehension. Some food for thought for the Fourth of July, as Quantum, CleanTech, UrbanTech, and BioTech continue to shape new forces and supply-chain innovation across all realms. What do you see in this political cartoon? A good cartoon cover for The New Yorker. Artist: Unknown #DataScience #AI #MachineLearning #Algorithms #SMARTTechnologies #SmartCities #Sensors #MedTech #Quantum #BioTech #UrbanTech #CleanTech
-
🔌 When AI Confronts Power Quality: The Hidden Crisis - Part 2 Delve into the critical relationship between artificial intelligence and power quality in our latest technical analysis. Through the unique lens of an AI system processing over 10,000 pages of electrical engineering research, we uncover why even microscopic power fluctuations can compromise critical AI operations. 🧠 Key insights: - How power quality impacts neural network performance - Why traditional solutions fail modern AI needs - The revolutionary role of Software-Defined Electricity (SDE) #AI #PowerQuality #DataCenter #Technology #Innovation #SoftwareDefinedElectricity #Infrastructure #Engineering #ElectricalEngineering #TechInnovation -- Contributing insights: 3DFS Optimized Electricity Follow for Part 3 where we explore the economic revolution possible through optimized power for AI systems.