INESC-ID’s Post
🏆🔎 On November 19, INESC-ID PhD student André Duarte was honoured with the SPARK Award for the best PhD student article, during the Center for Responsible AI Forum 2024 (CRAI 2024), in Porto. The SPARK Awards, hosted by CRAI, were the first Responsible AI Student Awards and recognised projects on topics ranging from machine learning to computer vision. Congratulations, André! 👏 #CenterforResponsibleAI #SPARKAwards #SPARK #AI #INESCID
More Relevant Posts
-
I'm excited to share our latest blog post that delves into the AI Mathematical Olympiad Progress Prize 2. This competition presents a unique opportunity for AI enthusiasts and scholars to showcase their skills in tackling complex mathematical challenges. Join us in exploring the intricacies of the competition, its objectives, and the profound impact it could have on the field of artificial intelligence. To read the full post, please visit the link below: [AI Mathematical Olympiad Progress Prize 2](https://ift.tt/3UiBVDy)
-
John J. Hopfield and Geoffrey Hinton's 🏆 Nobel Prize win in Physics for neural networks is a defining moment for AI and machine learning. Their work continues to inspire the next generation in the data science domain, including myself! 🚀 #NobelPrize #AI #MachineLearning #DataScience
-
"Delighted to share the successful completion of the 'Bank Loan Approval Prediction With Artificial Neural Nets' project, part of the Woxsen University program on Coursera Project Network! Delving into the realm of data science, we've crafted predictive models using artificial neural networks to forecast loan approval outcomes. Grateful for the guidance from Woxsen University and Coursera, empowering me to harness the power of AI for real-world applications. #DataScience #MachineLearning #ArtificialNeuralNetworks #CourseraProjectNetwork #WoxsenUniversity"
-
🚀 Dive into the world of Generative AI with the NVIDIA DLI course on Diffusion Models! Learn how to harness the power of denoising diffusion models for creative content generation, data augmentation, and much more. Enroll now and take your AI skills to the next level! 💡 More info 👉https://lnkd.in/dsCjCcpe
🚀 Exciting Learning Opportunity: Generative AI with Diffusion Models 🚀 Are you ready to dive into the world of Generative AI? Join this AIDA short course, led by Prof. Dr. Andras Hajdu and Gergo Bogacsovics, and explore the transformative potential of generative AI across various industries! Hosted by Nvidia Deep Learning Institute, Faculty of Informatics, University of Debrecen, Hungary. Don't miss this chance to advance your AI skills! 👉https://lnkd.in/dsCjCcpe
-
🌟 Day 10: Tracing the Origin of VGG16 in the ImageNet Challenge 🌟

Hello everyone! 👋 Today, let's trace the remarkable history of VGG16, which goes back to its outstanding performance in the ImageNet Large-Scale Visual Recognition Challenge (ILSVRC) in 2014. 🚀

💡 A Landmark Event: The ImageNet Challenge: The ILSVRC is an annual competition that pushes the boundaries of computer vision by challenging participants to develop models capable of classifying and detecting objects within images. In the 2014 challenge, the stage was set for a breakthrough by the Visual Geometry Group (VGG) at the University of Oxford.

🏆 The Triumph of VGG16: Karen Simonyan and Andrew Zisserman, researchers in the Visual Geometry Group, made waves with their submission "Very Deep Convolutional Networks for Large-Scale Image Recognition." The paper showcased the VGG16 architecture, which secured first place in the localisation task and second place in the classification task of ILSVRC 2014.

📜 The Original Paper: Simonyan and Zisserman's paper unveiled the architecture and methodology behind VGG16, providing invaluable insights into the design principles and training strategies of deep convolutional neural networks. You can access the original paper here.

🚀 Impact and Recognition: VGG16's performance in the ImageNet Challenge catapulted it to the forefront of deep learning research and cemented its status as a landmark in computer vision, demonstrating the effectiveness of deep convolutional neural networks in large-scale image recognition tasks.

🔍 Legacy and Beyond: Since its debut, VGG16 has served as a cornerstone for further advances in deep learning and computer vision.
Its legacy continues to inspire researchers and practitioners worldwide, shaping the development of state-of-the-art models and applications. By retracing the origins of VGG16 in the prestigious ImageNet Challenge, we gain a deeper appreciation for its groundbreaking achievements and lasting impact on the field of deep learning. I'm excited to delve further into the world of VGG16 and explore its potential in my projects! 💪✨ #Day10 #VGG16 #ImageNetChallenge #DeepLearning #ComputerVision #AI
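One of the paper's design principles is simple enough to check with arithmetic: stacking two 3x3 convolutions covers the same 5x5 receptive field as a single 5x5 convolution, but with fewer weights. A minimal sketch in plain Python (the channel width of 256 is just an illustrative choice, not a figure from the paper):

```python
# Compare the weight counts of two stacked 3x3 conv layers vs one 5x5
# layer at equal channel width (biases omitted for simplicity).

def conv_params(kernel, channels_in, channels_out):
    """Number of weights in one conv layer with a square kernel."""
    return kernel * kernel * channels_in * channels_out

channels = 256  # an illustrative channel width

stacked_3x3 = 2 * conv_params(3, channels, channels)  # two 3x3 layers
single_5x5 = conv_params(5, channels, channels)       # one 5x5 layer

print(stacked_3x3)  # 2 * 9 * 256 * 256 = 1179648
print(single_5x5)   # 25 * 256 * 256   = 1638400
print(stacked_3x3 < single_5x5)  # True: roughly 28% fewer weights
```

The same arithmetic extends to three stacked 3x3 layers versus one 7x7 layer (27C² vs 49C² weights), which is how the VGG networks grow deep without a proportional parameter blow-up.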
-
https://lnkd.in/eXftjcKr More than deserved for these guys who have built the foundations of most modern AI systems and algorithms. Let's not forget Yann LeCun as well
-
🎓 Efficient ML (MIT) Efficiency is one of the most important topics in AI today, given the large amounts of computational resources modern ML systems require. This course provides a solid overview of techniques that enable efficient ML systems. Includes lectures on:
- Compression
- Pruning
- Quantization
- Neural Architecture Search
- Distributed Training
- Data/Model Parallelism
- On-Device Fine-tuning
... and a whole lot more. https://lnkd.in/eBqF5Nyd ↓ For more, follow my weekly summary of the top AI and LLM papers. Read by 65K+ AI researchers and developers: https://lnkd.in/e6ajg945
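As a taste of one technique on that list, here is a toy sketch of symmetric 8-bit quantization in plain Python. This is my own illustration, not course material; production schemes add per-channel scales, asymmetric ranges, calibration, and more.

```python
# Minimal symmetric int8 quantization: map floats to integers in
# [-127, 127] using a single scale factor, then map them back.

def quantize_int8(weights):
    """Quantize a list of floats with one shared scale."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate floats from int8 values."""
    return [v * scale for v in q]

weights = [0.42, -1.27, 0.05, 0.9]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)

print(q)  # small integers, e.g. [42, -127, 5, 90]
# rounding error is bounded by half a quantization step
print(max(abs(w - r) for w, r in zip(weights, restored)) < scale)  # True
```

Each value now needs one byte instead of four (for float32), at the cost of a rounding error of at most half a quantization step.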
-
I believe that it's time for #AI and #MachineLearning to take center stage in all awards!

--- BREAKING NEWS - #NobelPrize to AI & machine learning scientists https://lnkd.in/ggRfWzdM

The Royal Swedish Academy of Sciences has decided to award the 2024 #NobelPrize in Physics to John J. Hopfield and Geoffrey E. Hinton “for foundational discoveries and inventions that enable machine learning with artificial neural networks.”

This year’s physics laureates John Hopfield and Geoffrey Hinton used tools from physics to construct methods that helped lay the foundation for today’s powerful machine learning. Hopfield created a structure that can store and reconstruct information. Hinton invented a method that can independently discover properties in data and which has become important for the large artificial neural networks now in use.

Although computers cannot think, machines can now mimic functions such as memory and learning. The 2024 Nobel Prize laureates in physics have helped make this possible. Using fundamental concepts and methods from physics, they have developed technologies that use structures in networks to process information.

Press release: https://bit.ly/4diXSfz Popular information: https://bit.ly/4gK57jl Advanced information: https://bit.ly/4egLrly
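To make "a structure that can store and reconstruct information" concrete, here is a toy Hopfield-style associative memory in plain Python. This is my own illustration, not from the press release: one pattern is stored in symmetric Hebbian weights and then reconstructed from a corrupted input by repeated sign updates.

```python
# Toy Hopfield network: Hebbian storage and sign-update recall.

def store(pattern):
    """Hebbian outer-product weights with a zero diagonal."""
    n = len(pattern)
    return [[0 if i == j else pattern[i] * pattern[j] for j in range(n)]
            for i in range(n)]

def recall(weights, state, steps=5):
    """Synchronous sign updates; converges to a stored fixed point here."""
    for _ in range(steps):
        state = [1 if sum(w * s for w, s in zip(row, state)) >= 0 else -1
                 for row in weights]
    return state

pattern = [1, -1, 1, 1, -1, 1]
weights = store(pattern)

corrupted = [-1, -1, 1, 1, -1, 1]  # first bit flipped
print(recall(weights, corrupted) == pattern)  # True
```

With one stored pattern, flipping a single bit leaves every neuron's input field pointing toward the stored value, so recall snaps back in one update; larger networks can store several patterns before such memories interfere.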
-
Today is a monumental day for data scientists and AI enthusiasts worldwide. 🎉 Congratulations to John Hopfield and Geoffrey Hinton on winning the Nobel Prize in Physics for their groundbreaking work in machine learning.

When I completed my first Deep Learning course on the very last day of 2022, I was deeply impressed by the principles behind Boltzmann Machines. I spent countless hours studying to truly grasp the concept. I even had to learn PyTorch 😅 Ludwig Boltzmann's physics principles from the 1800s paved the way for today's neural networks, and Geoffrey Hinton's profound understanding of these principles has been instrumental in advancing the field.

Another marvel of neural networks is the backpropagation method, loosely inspired by learning in the brain. Geoffrey Hinton, together with David Rumelhart and Ronald Williams, laid its foundation in their landmark 1986 paper: "We describe a new learning procedure, back-propagation, for networks of neurone-like units." (You can read the full article here: https://lnkd.in/e4F5wBVF .) To understand the concept, I once spent three hours with just a notebook and pen during the summer of 2022. 🤐 ...A notebook and a pen, good old days...

These two fascinating subjects, Boltzmann Machines and Backpropagation, are not only Nobel-worthy but have also found their way into my latest science fiction novel, The Big Data: If One Day AI Learns Quantum Physics. 📖

Photo: Chapter titles of the science fiction novel, The Big Data
👉 Check out the story here: https://lnkd.in/eTkqVCfU

There's one more principle I hope to see recognized next time. Maybe in 2025: the Cost Function! #DeepLearning #NobelPrize #AI #BoltzmannMachines #Backpropagation #TheBigData #MachineLearning #QuantumPhysics
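For anyone who also wants to work through backpropagation with a notebook and pen, here is the chain rule written out for a single sigmoid neuron with squared error. This is a toy sketch of the principle only, not the general multi-layer formulation from the 1986 paper; the learning rate and targets are arbitrary choices.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_step(w, b, x, target, lr=0.5):
    """One gradient-descent step, with the chain rule written out."""
    z = w * x + b          # forward pass
    y = sigmoid(z)
    # L = (y - target)^2 / 2, so dL/dw = dL/dy * dy/dz * dz/dw
    dl_dy = y - target
    dy_dz = y * (1.0 - y)  # derivative of the sigmoid
    grad_w = dl_dy * dy_dz * x
    grad_b = dl_dy * dy_dz
    return w - lr * grad_w, b - lr * grad_b

w, b = 0.1, 0.0
for _ in range(200):
    w, b = train_step(w, b, x=1.0, target=0.9)

# after training, the neuron's output approaches the target
print(abs(sigmoid(w * 1.0 + b) - 0.9) < 0.05)  # True
```

Repeating the same two derivative factors layer by layer, from the output back to the input, is exactly what "propagating the error backwards" means in deeper networks.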
-
AI may seem like a relatively new term, but its roots date back several decades to the early days of modern computing in the 1950s and even further in mathematical theory. With recent advancements, AI appears to be expanding at an unprecedented speed. To predict where it is headed, it is essential to understand where it started. Discover the fascinating History of AI in this blog: https://lnkd.in/e8mWzvEX