Vizuara

Education

Our AI experts from MIT and Purdue host the most comprehensive AI program for high school and middle school students.

About us

We are team Vizuara, a fast-growing Indian startup backed by MIT that is revolutionizing AI education (www.vizuara.ai). Vizuara was founded by alumni from IIT Madras, MIT, and Purdue University. For questions, please email hello@vizuara.com.

Website
https://www.vizuara.ai
Industry
Education
Company size
11-50 employees
Headquarters
Pune
Type
Privately Held
Founded
2023
Specialties
AI courses, Virtual Laboratory, AR/VR/MR, and Machine Learning

Updates

  • Vizuara reposted this

    Pritam Kudale

    AI Research Specialist | AI Educator | Data Science | Data Analyst | Oracle Generative AI Certified Professional | Content Creator | 1.5 Million Impressions in 90 Days

    𝗧𝗵𝗲 𝗡𝗲𝗲𝗱 𝗳𝗼𝗿 𝗦𝘁𝗮𝗻𝗱𝗮𝗿𝗱𝗶𝘇𝗮𝘁𝗶𝗼𝗻 𝗶𝗻 𝘁𝗵𝗲 𝗥𝗮𝗽𝗶𝗱𝗹𝘆 𝗘𝘃𝗼𝗹𝘃𝗶𝗻𝗴 𝗔𝗜 𝗟𝗮𝗻𝗱𝘀𝗰𝗮𝗽𝗲

    With the recent release of xAI's 𝗚𝗿𝗼𝗸 𝟯, which has surpassed all previous benchmarks, and the introduction of the 𝗚𝗿𝗼𝗸 𝟯 𝗿𝗲𝗮𝘀𝗼𝗻𝗶𝗻𝗴 𝗺𝗼𝗱𝗲𝗹, we are witnessing an era of unprecedented advancements in AI. Similarly, models like 𝗗𝗲𝗲𝗽𝗦𝗲𝗲𝗸 𝗥𝟭 have demonstrated superior performance, exceeding the benchmarks set by 𝗢𝗽𝗲𝗻𝗔𝗜'𝘀 𝗚𝗣𝗧 models. The pace at which new models are emerging highlights the intense competition and rapid innovation in the field of artificial intelligence.

    For companies looking to build professional AI solutions, selecting a base model and fine-tuning it for specific use cases is a crucial step. However, with new models being introduced frequently, the 𝗹𝗮𝗰𝗸 𝗼𝗳 𝘀𝘁𝗮𝗻𝗱𝗮𝗿𝗱𝗶𝘇𝗮𝘁𝗶𝗼𝗻 creates significant challenges in interoperability and integration. While middleware solutions like 𝗟𝗮𝗻𝗴𝗖𝗵𝗮𝗶𝗻 offer some level of compatibility, the industry still lacks a 𝘂𝗻𝗶𝘃𝗲𝗿𝘀𝗮𝗹 𝘀𝘁𝗮𝗻𝗱𝗮𝗿𝗱 that can streamline model selection, fine-tuning, and deployment.

    Establishing a 𝗰𝗼𝗺𝗺𝗼𝗻 𝗳𝗿𝗮𝗺𝗲𝘄𝗼𝗿𝗸 𝗳𝗼𝗿 𝗔𝗜 models would enhance efficiency, reduce complexity, and promote a more 𝗰𝗼𝗵𝗲𝘀𝗶𝘃𝗲 𝗔𝗜 𝗲𝗰𝗼𝘀𝘆𝘀𝘁𝗲𝗺. It would enable organizations to adopt and integrate new models as they emerge without being constrained by compatibility issues. While healthy competition is driving innovation, a standardized approach to model development and deployment would 𝗳𝗼𝘀𝘁𝗲𝗿 𝗰𝗼𝗹𝗹𝗮𝗯𝗼𝗿𝗮𝘁𝗶𝗼𝗻, 𝗶𝗺𝗽𝗿𝗼𝘃𝗲 𝗮𝗰𝗰𝗲𝘀𝘀𝗶𝗯𝗶𝗹𝗶𝘁𝘆, 𝗮𝗻𝗱 𝗮𝗰𝗰𝗲𝗹𝗲𝗿𝗮𝘁𝗲 𝗔𝗜 𝗮𝗱𝗼𝗽𝘁𝗶𝗼𝗻 𝗮𝗰𝗿𝗼𝘀𝘀 𝗶𝗻𝗱𝘂𝘀𝘁𝗿𝗶𝗲𝘀.

    As the AI landscape continues to expand, the need for 𝗶𝗻𝗱𝘂𝘀𝘁𝗿𝘆-𝘄𝗶𝗱𝗲 𝘀𝘁𝗮𝗻𝗱𝗮𝗿𝗱𝗶𝘇𝗮𝘁𝗶𝗼𝗻 becomes increasingly urgent. By implementing universal guidelines for interoperability, companies can focus on leveraging AI's full potential rather than navigating the complexities of integration.

    For more AI and machine learning insights, explore Vizuara's AI Newsletter: https://lnkd.in/dk9sZC4a
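
    To make the "common framework" idea concrete, here is a rough Python sketch (my own illustration, not an existing standard and not LangChain's API): if every backend satisfies one small shared interface, application code does not change when a new model is swapped in. All names below (ChatModel, EchoModel, summarize) are hypothetical.

    # Hypothetical sketch of a provider-agnostic chat interface. Each backend
    # only has to implement generate(), so swapping Grok, DeepSeek R1, or a GPT
    # model means writing one adapter, not rewriting the application.
    from typing import Protocol

    class ChatModel(Protocol):
        def generate(self, prompt: str) -> str:
            ...

    class EchoModel:
        """Stand-in backend so the sketch runs without any API keys."""
        def generate(self, prompt: str) -> str:
            return f"[echo] {prompt}"

    def summarize(model: ChatModel, text: str) -> str:
        # Application code depends only on the shared interface.
        return model.generate(f"Summarize in one sentence: {text}")

    print(summarize(EchoModel(), "Standardization would ease model swaps."))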

  • Last year, exactly at this time, we had 87 subscribers on YouTube. Now Vizuara's channel has become a fast-growing, highly respected platform for serious AI/ML learners. Not a big deal, because we are not yet making any dent in the universe with a small channel. But it feels good to be part of something that many people value.

    Our 5 core principles while running this channel:

    1) We are running this channel for a selfish purpose - our own learning. As Feynman said, when you teach, you learn better.
    2) We will not operate like professional YouTubers. We will have multiple playlists running at the same time because we are teaching different courses simultaneously.
    3) We will not look for what is trending on YouTube. We will only look for what is interesting to us in the AI/ML space.
    4) Focus on depth over fluff. No fancy editing, music, or visual effects. Just pure, good old-school teaching.
    5) No short-form, capsule teaching. We spend no time making shorts. Teaching and learning take time, so the videos will be lengthy.

    If you want to learn AI/ML from our channel, you can follow the playlists in this order:

    1) Foundations for ML: https://lnkd.in/gKz-eybU
    2) ML Teach by Doing: https://lnkd.in/gn2dEcE2
    3) Decision Trees from Scratch: https://lnkd.in/g3cmj2BR
    4) Neural Networks from Scratch: https://lnkd.in/gj8kHe2T
    5) LLMs from scratch: https://lnkd.in/gjcyfCcE
    6) Hands-on LLMs: https://lnkd.in/gJQ7ryE4
    7) DeepSeek from scratch: https://lnkd.in/gvHPGeu7

  • Vizuara reposted this

    Pritam Kudale

    AI Research Specialist | AI Educator | Data Science | Data Analyst | Oracle Generative AI Certified Professional | Content Creator | 1.5 Million Impressions in 90 Days

    𝗙𝗿𝗼𝗺 𝗣𝗲𝗿𝗰𝗲𝗽𝘁𝗿𝗼𝗻 𝘁𝗼 𝗠𝗟𝗣: 𝗔𝗱𝘃𝗮𝗻𝗰𝗶𝗻𝗴 𝗕𝗲𝘆𝗼𝗻𝗱 𝗟𝗼𝗴𝗶𝘀𝘁𝗶𝗰 𝗥𝗲𝗴𝗿𝗲𝘀𝘀𝗶𝗼𝗻

    In one of my previous animations, I demonstrated how the 𝗹𝗼𝗴𝗶𝘀𝘁𝗶𝗰 𝗿𝗲𝗴𝗿𝗲𝘀𝘀𝗶𝗼𝗻 𝗮𝗹𝗴𝗼𝗿𝗶𝘁𝗵𝗺 can outperform the 𝗽𝗲𝗿𝗰𝗲𝗽𝘁𝗿𝗼𝗻 𝗮𝗹𝗴𝗼𝗿𝗶𝘁𝗵𝗺 by leveraging the logistic (sigmoid) function and maximum-likelihood estimation. In contrast, the perceptron relies on a simple 𝘀𝘁𝗲𝗽 𝗳𝘂𝗻𝗰𝘁𝗶𝗼𝗻 as its activation function.

    However, modifying the perceptron algorithm unlocks vast possibilities—paving the way for neural networks. This evolved version, known as the 𝗠𝘂𝗹𝘁𝗶𝗹𝗮𝘆𝗲𝗿 𝗣𝗲𝗿𝗰𝗲𝗽𝘁𝗿𝗼𝗻 (𝗠𝗟𝗣) 𝗖𝗹𝗮𝘀𝘀𝗶𝗳𝗶𝗲𝗿, supports hidden layers and multiple activation functions, allowing it to classify 𝗻𝗼𝗻-𝗹𝗶𝗻𝗲𝗮𝗿𝗹𝘆 𝘀𝗲𝗽𝗮𝗿𝗮𝗯𝗹𝗲 𝗱𝗮𝘁𝗮—a key limitation of logistic regression.

    To deepen your understanding, I highly recommend these video explanations:

    𝗟𝗼𝗴𝗶𝘀𝘁𝗶𝗰 𝗥𝗲𝗴𝗿𝗲𝘀𝘀𝗶𝗼𝗻 (by Pritam Kudale):
    ▶️ 𝗟𝗼𝗴𝗶𝘀𝘁𝗶𝗰 𝗥𝗲𝗴𝗿𝗲𝘀𝘀𝗶𝗼𝗻 𝗦𝗶𝗺𝗽𝗹𝗶𝗳𝗶𝗲𝗱: https://lnkd.in/dfarzwkG
    ▶️ 𝗟𝗼𝘀𝘀 𝗙𝘂𝗻𝗰𝘁𝗶𝗼𝗻 & 𝗡𝗲𝗴𝗮𝘁𝗶𝘃𝗲 𝗟𝗼𝗴 𝗟𝗶𝗸𝗲𝗹𝗶𝗵𝗼𝗼𝗱: https://lnkd.in/dT-vSvqs
    ▶️ 𝗚𝗿𝗮𝗱𝗶𝗲𝗻𝘁 𝗗𝗲𝘀𝗰𝗲𝗻𝘁 & 𝗖𝗼𝗺𝗽𝗹𝗲𝘁𝗲 𝗗𝗲𝗿𝗶𝘃𝗮𝘁𝗶𝗼𝗻: https://lnkd.in/dH8cc-Du

    𝗣𝗲𝗿𝗰𝗲𝗽𝘁𝗿𝗼𝗻 𝗔𝗹𝗴𝗼𝗿𝗶𝘁𝗵𝗺:
    ▶️ 𝗣𝗲𝗿𝗰𝗲𝗽𝘁𝗿𝗼𝗻 𝗔𝗹𝗴𝗼𝗿𝗶𝘁𝗵𝗺: The First Step Towards Logistic Regression: https://lnkd.in/dVkQqVYe

    For more AI and machine learning insights, explore Vizuara's AI Newsletter: https://lnkd.in/dk9sZC4a

    #MachineLearning #AI #DeepLearning #LogisticRegression #Perceptron #MLP #NeuralNetworks #DataScience
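
    As a quick supplement to the post (my own sketch, not the animation above, and assuming scikit-learn is installed), the snippet below compares a perceptron, logistic regression, and a small MLP on a non-linearly separable dataset; exact accuracies will vary with the random seed.

    # Rough comparison on non-linearly separable data: the two linear models
    # plateau, while the MLP (hidden layer + nonlinear activation) fits the curve.
    from sklearn.datasets import make_moons
    from sklearn.linear_model import LogisticRegression, Perceptron
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    X, y = make_moons(n_samples=1000, noise=0.2, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    models = {
        "Perceptron (step activation)": Perceptron(random_state=0),
        "Logistic regression (sigmoid)": LogisticRegression(),
        "MLP (one hidden layer, ReLU)": MLPClassifier(hidden_layer_sizes=(16,),
                                                      max_iter=2000, random_state=0),
    }
    for name, model in models.items():
        model.fit(X_tr, y_tr)
        print(f"{name}: test accuracy = {model.score(X_te, y_te):.2f}")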

  • "We claim to be a cutting-edge AI company to our customers. But they have no idea what our algorithm is. We provide great results, and they are happy. But at the end of the day, most of our models are simple- either regression or XGBoost." A few years ago, I heard the above statement from a friend who was the AI head at a highly-funded startup. At that time, I knew regression very well but wasn’t entirely sure about XGBoost, though I had come across it in various contexts. Following this conversation, I kept hearing from industry experts about how extensively XGBoost is used. So, I decided to take a deep dive. It was a fascinating journey that I will share someday. However, this post is not about XGBoost. It is about a random forest - something that preceded XGBoost. Let us start with a bit of history: Condorcet's jury theorem Marquis de Condorcet, a French philosopher, and mathematician, proposed a fascinating theorem in 1785: "if each person is more than 50% correct, then adding more people to vote increases the probability that the majority is correct." This idea directly relates to random forest. But first, a quick recap on decision trees - the basic building blocks of random forest. Imagine you have a dataset with the height and weight of three animals - cow, dog, and pig. You build a decision tree classifier to predict the animal type. With the right decision tree, you can achieve 100% accuracy. The problem however is that decision trees are highly sensitive to the dataset. Slight changes in data can lead to a completely different tree structure. Even perturbing just 5% of the training dataset with Gaussian noise can drastically alter the decision tree. The solution? Ensemble models Decision trees are interpretable, but they lack robustness. This is where random forest comes in. Instead of a single model, what if multiple ML models vote to take a decision? Random forest is an ensemble learning method where multiple decision trees work together to make predictions. The idea is simple: if individual models are at least 50% accurate, their collective decision will be better than any single model- just like Condorcet’s jury theorem. How random forest works 1️⃣ Create random subsets of data. 2️⃣ Train each tree on a random subset of features. 3️⃣ Each tree makes a prediction, and the majority vote determines the final output. The result? A more robust and accurate model. I have just released a detailed video on Vizuara's YouTube channel on the random forest that covers the theory, a walkthrough of an awesome interactive blog from MLU-explain, and implementation in code: https://lnkd.in/gdnjU-rS I hope you will enjoy this lecture as much as I enjoyed making it :) You will see that in the example dataset we use, individual decision trees can go up to 92% accuracy, but with random forest, for trees of the same depth, you can reach up to 97% accuracy. -Dr Sreedath Panat

  • Vizuara reposted this

    Pritam Kudale

    AI Research Specialist | AI Educator | Data Science | Data Analyst | Oracle Generative AI Certified Professional | Content Creator | 1.5 Million Impressions in 90 Days

    𝗗𝗲𝗲𝗽𝗦𝗲𝗲𝗸 𝗥𝟭 has taken the world by storm, positioning China as a 𝗳𝗼𝗿𝗺𝗶𝗱𝗮𝗯𝗹𝗲 𝗰𝗼𝗻𝘁𝗲𝗻𝗱𝗲𝗿 in an AI landscape traditionally dominated by the US. What's truly astonishing is that DeepSeek R1 was developed at a fraction of the cost of models from OpenAI, Meta, or Google—yet it not only competes with them but surpasses them in various aspects.

    The real question isn't just about using DeepSeek R1 for applications or AI agents, but rather understanding how it was built to achieve such a groundbreaking impact. To drive the next wave of innovation, we must deeply grasp the 𝗰𝗼𝗿𝗲 𝗽𝗿𝗶𝗻𝗰𝗶𝗽𝗹𝗲𝘀 𝗯𝗲𝗵𝗶𝗻𝗱 𝗗𝗲𝗲𝗽𝗦𝗲𝗲𝗸 𝗥𝟭 and explore how to develop similar or even superior models, independent of any modifications made elsewhere.

    One of the most insightful resources to start this journey is "𝗕𝘂𝗶𝗹𝗱 𝗗𝗲𝗲𝗽𝗦𝗲𝗲𝗸 𝗳𝗿𝗼𝗺 𝗦𝗰𝗿𝗮𝘁𝗰𝗵" by Raj Abhijit Dandekar: https://lnkd.in/gFh_2Sdp. This deep dive into the fundamentals of DeepSeek R1 can serve as a foundation for developing cutting-edge AI models by leveraging high-performance GPUs and optimized architectures.

    𝘋𝘰𝘯'𝘵 𝘫𝘶𝘴𝘵 𝘣𝘦 𝘢𝘮𝘢𝘻𝘦𝘥 𝘣𝘺 𝘵𝘩𝘦 𝘥𝘪𝘴𝘳𝘶𝘱𝘵𝘪𝘰𝘯—𝘵𝘢𝘬𝘦 𝘵𝘩𝘦 𝘭𝘦𝘢𝘱 𝘵𝘰 𝘶𝘯𝘥𝘦𝘳𝘴𝘵𝘢𝘯𝘥, 𝘭𝘦𝘢𝘳𝘯, 𝘢𝘯𝘥 𝘣𝘶𝘪𝘭𝘥 𝘵𝘩𝘦 𝘯𝘦𝘹𝘵 𝘣𝘪𝘨 𝘣𝘳𝘦𝘢𝘬𝘵𝘩𝘳𝘰𝘶𝘨𝘩.

    For more AI and machine learning insights, explore Vizuara's AI Newsletter: https://lnkd.in/dk9sZC4a

    #AI #DeepSeekR1 #LLM #Innovation #ArtificialIntelligence

  • Vizuara reposted this

    Pritam Kudale

    AI Research Specialist | AI Educator | Data Science | Data Analyst | Oracle Generative AI Certified Professional | Content Creator | 1.5 Million Impressions in 90 Days

    𝗘𝗻𝘀𝘂𝗿𝗶𝗻𝗴 𝗦𝗲𝗰𝘂𝗿𝗲 𝗗𝗲𝗽𝗹𝗼𝘆𝗺𝗲𝗻𝘁 𝗼𝗳 𝗟𝗟𝗠𝘀: 𝗥𝘂𝗻𝗻𝗶𝗻𝗴 𝗗𝗲𝗲𝗽𝗦𝗲𝗲𝗸 𝗥𝟭 𝗦𝗮𝗳𝗲𝗹𝘆

    As organizations increasingly rely on 𝗟𝗮𝗿𝗴𝗲 𝗟𝗮𝗻𝗴𝘂𝗮𝗴𝗲 𝗠𝗼𝗱𝗲𝗹𝘀 (𝗟𝗟𝗠𝘀) to enhance efficiency and productivity, 𝗱𝗮𝘁𝗮 𝘀𝗲𝗰𝘂𝗿𝗶𝘁𝘆 remains a critical concern—especially for enterprises and government agencies handling sensitive information. Recent security incidents, such as 𝗪𝗶𝘇 𝗥𝗲𝘀𝗲𝗮𝗿𝗰𝗵'𝘀 𝗱𝗶𝘀𝗰𝗼𝘃𝗲𝗿𝘆 𝗼𝗳 "𝗗𝗲𝗲𝗽𝗟𝗲𝗮𝗸", where a publicly accessible ClickHouse database exposed secret keys, plaintext chat logs, backend details, and more, highlight the 𝗿𝗶𝘀𝗸𝘀 𝗼𝗳 𝘂𝘀𝗶𝗻𝗴 𝗟𝗟𝗠𝘀 𝘄𝗶𝘁𝗵𝗼𝘂𝘁 𝗽𝗿𝗼𝗽𝗲𝗿 𝗽𝗿𝗲𝗰𝗮𝘂𝘁𝗶𝗼𝗻𝘀.

    To mitigate these risks, I've put together a 𝘀𝘁𝗲𝗽-𝗯𝘆-𝘀𝘁𝗲𝗽 𝗴𝘂𝗶𝗱𝗲 on how to 𝗿𝘂𝗻 𝗗𝗲𝗲𝗽𝗦𝗲𝗲𝗸 𝗥𝟭 𝗹𝗼𝗰𝗮𝗹𝗹𝘆 or securely on 𝗔𝗪𝗦 𝗕𝗲𝗱𝗿𝗼𝗰𝗸, ensuring data privacy while leveraging the power of AI.

    𝘞𝘢𝘵𝘤𝘩 𝘵𝘩𝘦𝘴𝘦 𝘵𝘶𝘵𝘰𝘳𝘪𝘢𝘭𝘴 𝘧𝘰𝘳 𝘥𝘦𝘵𝘢𝘪𝘭𝘦𝘥 𝘪𝘮𝘱𝘭𝘦𝘮𝘦𝘯𝘵𝘢𝘵𝘪𝘰𝘯:
    • 𝗥𝘂𝗻 𝗗𝗲𝗲𝗽𝗦𝗲𝗲𝗸-𝗥𝟭 𝗟𝗼𝗰𝗮𝗹𝗹𝘆 (𝗢𝗹𝗹𝗮𝗺𝗮 𝗖𝗟𝗜 & 𝗪𝗲𝗯𝗨𝗜) → https://lnkd.in/dMsimFR8
    • 𝗗𝗲𝗲𝗽𝗦𝗲𝗲𝗸 𝗥𝟭 𝘄𝗶𝘁𝗵 𝗢𝗹𝗹𝗮𝗺𝗮 𝗔𝗣𝗜 & 𝗣𝘆𝘁𝗵𝗼𝗻 → https://lnkd.in/d4wmWuUV
    • 𝗗𝗲𝗽𝗹𝗼𝘆 𝗗𝗲𝗲𝗽𝗦𝗲𝗲𝗸 𝗥𝟭 𝗦𝗲𝗰𝘂𝗿𝗲𝗹𝘆 𝗼𝗻 𝗔𝗪𝗦 𝗕𝗲𝗱𝗿𝗼𝗰𝗸 → https://lnkd.in/d5xYJvki (by Pritam Kudale)

    Additionally, I'm sharing a detailed PDF guide with a complete step-by-step process to help you implement these solutions seamlessly.

    For more AI and machine learning insights, subscribe to Vizuara's AI Newsletter → https://lnkd.in/dk9sZC4a

    Let's build AI solutions with privacy, security, and efficiency at the core.

    #AI #MachineLearning #LLM #DeepSeek #CyberSecurity #AWS #DataPrivacy #SecureAI #GenerativeAI
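
    As a minimal companion to the Ollama tutorials above (my own sketch, not taken from the linked videos), the snippet below queries a locally running DeepSeek R1 through Ollama's HTTP endpoint, so prompts never leave your machine. It assumes Ollama is serving on localhost:11434 and that the model tag shown has already been pulled; the tag is only an example.

    # Query a local DeepSeek R1 via Ollama's /api/generate endpoint.
    # Example setup: `ollama pull deepseek-r1:7b` (tag is an assumption).
    import requests

    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "deepseek-r1:7b",   # assumed model tag
            "prompt": "Explain LoRA fine-tuning in two sentences.",
            "stream": False,             # return one JSON object instead of a stream
        },
        timeout=300,
    )
    resp.raise_for_status()
    print(resp.json()["response"])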

  • “If you are not smart enough to deeply understand a phenomenon, then you can use ML models. But where is the science in that? ML is not science.”

    In 2018, one of the professors at MIT said this to his PhD student, a friend of mine. My friend wanted to incorporate ML into his research, but the professor was old-school. (I am curious to know what such professors think now that Nobel Prizes in Physics and Chemistry have been awarded for ML-related work.)

    During COVID, however, I saw an interesting shift happen, not just at MIT but across many universities. More and more experimentalists from mechanical engineering, materials science, physics, chemical engineering, and other fields were incorporating ML into their research. There could have been a few reasons for this shift:

    1) It was hard to do physical experiments during lockdown. Maybe this forced researchers to look at alternative options.
    2) Scientific ML (SciML) was getting popular. Everyone started finding intersections between ML and their field. At MIT, labs started studying fluid mechanics, developing better solar cells, building better nano-sensors, and more, using ML.
    3) FOMO (Fear Of Missing Out), because everyone was getting into ML.

    I don't know the real reason for this shift. But it was a good thing. Because, science or not, machine learning is powerful. No one can deny that.

    Around the same time, I was a hard-core experimentalist with a strong computational background. I was also fascinated by ML. Initially, it was just FOMO when my peers at IITM ventured into CNNs and RL, while I was still working on tolerances of precision grinding machines. There was a time when I felt like I was living in the 1950s while my peers were living the modern life.

    But my FOMO disappeared at MIT, because my lab had spun off companies that had nothing to do with ML. I was convinced that I could add value to the world without knowing ML. However, I did not want to miss out on the opportunity to learn ML from MIT. So I did 4 things:

    1) I enrolled in a graduate-level ML course at MIT. I was sure to get my ass kicked by smart undergrads.
    2) I started considering myself a serious ML person. This was my trick to convince my mind to take up hard ML problems.
    3) I decided to stay away from toy Kaggle projects. I was not looking for resume points. I was looking for depth.
    4) Since SciML was gaining a lot of word of mouth, I decided to dip my feet into it.

    Venturing into ML has been the best decision I have ever taken. 5 years later, I run an AI-first company with my co-founders. These are the top 4 things I do daily:

    1) Learn AI/ML
    2) Work on novel research
    3) Build AI products
    4) Teach AI/ML

    Looking back, what helped me was my strong self-belief. I strongly believed, and still do, that anyone can transition to ML. It doesn't matter what your department, CGPA, age, or job status is.

    *****

    If you wish to transition to ML, you can join this free webinar today, where I will share my view of what you should be doing: https://lnkd.in/gKKneicd

    -Dr. Sreedath Panat

  • Vizuara reposted this

    Pritam Kudale

    AI Research Specialist | AI Educator | Data Science | Data Analyst | Oracle Generative AI Certified Professional | Content Creator | 1.5 Million Impressions in 90 Days

    𝗧𝗵𝗲 𝗜𝗺𝗽𝗮𝗰𝘁 𝗼𝗳 𝗕𝗶𝗮𝘀𝗲𝗱 𝗗𝗮𝘁𝗮 𝗼𝗻 𝗔𝗜: 𝗞𝗲𝘆 𝗜𝗻𝘀𝗶𝗴𝗵𝘁𝘀 𝗳𝗿𝗼𝗺 𝗣𝗠 𝗡𝗮𝗿𝗲𝗻𝗱𝗿𝗮 𝗠𝗼𝗱𝗶 𝗮𝘁 𝘁𝗵𝗲 𝗣𝗮𝗿𝗶𝘀 𝗔𝗜 𝗦𝘂𝗺𝗺𝗶𝘁

    At the recent 𝗣𝗮𝗿𝗶𝘀 𝗔𝗜 𝗦𝘂𝗺𝗺𝗶𝘁, Prime Minister 𝗡𝗮𝗿𝗲𝗻𝗱𝗿𝗮 𝗠𝗼𝗱𝗶 highlighted a critical challenge in artificial intelligence—𝗯𝗶𝗮𝘀 𝗶𝗻 𝗔𝗜 𝗺𝗼𝗱𝗲𝗹𝘀. As AI systems are trained on vast amounts of internet data, they inevitably inherit existing biases, leading to skewed and sometimes misleading outcomes.

    PM Modi illustrated this with a simple yet powerful example: if an AI image generator is asked to create an image of a person writing with their left hand, it is more likely to generate an image of someone using their right hand—reflecting the overwhelming bias in the available training data.

    𝗧𝗼 𝗲𝗻𝘀𝘂𝗿𝗲 𝗔𝗜 𝘀𝗲𝗿𝘃𝗲𝘀 𝗵𝘂𝗺𝗮𝗻𝗶𝘁𝘆 𝗲𝗳𝗳𝗲𝗰𝘁𝗶𝘃𝗲𝗹𝘆, 𝗣𝗠 𝗠𝗼𝗱𝗶 𝗰𝗮𝗹𝗹𝗲𝗱 𝗳𝗼𝗿:
    • Transparent, open-source AI systems
    • High-quality, bias-free datasets
    • People-centric AI applications
    • Stronger global collaboration to address cybersecurity, disinformation, and deepfakes

    His message is clear—𝗿𝗲𝘀𝗽𝗼𝗻𝘀𝗶𝗯𝗹𝗲 𝗔𝗜 𝗱𝗲𝘃𝗲𝗹𝗼𝗽𝗺𝗲𝗻𝘁 𝗶𝘀 𝗲𝘀𝘀𝗲𝗻𝘁𝗶𝗮𝗹 for a fair and inclusive digital future. The call for ethical AI is not just for policymakers but for researchers, developers, and organizations worldwide.

    🔗 𝗪𝗮𝘁𝗰𝗵 𝗵𝗶𝘀 𝗳𝘂𝗹𝗹 𝘀𝗽𝗲𝗲𝗰𝗵 𝗵𝗲𝗿𝗲: https://lnkd.in/dPiFc4ay

    For more AI and machine learning insights, explore Vizuara's AI Newsletter: https://lnkd.in/dkwFvwQn

    𝘞𝗵𝗮𝘁 𝗮𝗿𝗲 𝘆𝗼𝘂𝗿 𝘁𝗵𝗼𝘂𝗴𝗵𝘁𝘀 𝗼𝗻 𝗺𝗶𝘁𝗶𝗴𝗮𝘁𝗶𝗻𝗴 𝗔𝗜 𝗯𝗶𝗮𝘀? Let's discuss in the comments.

    #AI #ArtificialIntelligence #BiasInAI #NarendraModi #EthicalAI #ParisAISummit #ResponsibleAI

  • I first heard “regularization” during MIT’s graduate-level ML course in the fall of 2019. Later, a couple of friends mentioned it from their ML job interviews - specifically, they were asked about “Lasso and Ridge regression.” That is when I realized that regularization is a key concept I needed to understand better.

    For new topics, I usually start by Googling “Topic XYZ visually explained.” So I typed “Regularization visualized” into Google Images and was overwhelmed by the figures I saw, although the math looked straightforward (apply a penalty term to the loss function). As I learned more about Lasso, I became confused: why does Lasso force some model parameters to be exactly zero, while Ridge only makes them small? I set that confusing part aside. Today, I truly appreciate that visual intuition - even though for 2 or 3 years I paid little attention to it.

    So what is regularization? Regularization is used in ML to prevent overfitting. Adding a penalty to the loss function discourages the model from learning overly complex patterns or noise that only fits the training data. The regularization strength, denoted by λ, controls the trade-off between the original loss and the penalty.

    • Types of regularization: Ridge (L2) vs. Lasso (L1) regression

    Ridge regression (L2 regularization) modifies the linear regression loss function by adding an L2 penalty (the sum of squared weights). When λ is 0, Ridge is just normal linear regression. As λ increases, the model shrinks all weights closer to 0, which helps prevent overfitting.

    Lasso regression (L1 regularization) instead uses an L1 penalty, adding the sum of the absolute values of the weights. With a small λ, Lasso behaves like linear regression. But when λ is large, Lasso forces some weights to become exactly zero - effectively performing feature selection, since features with a weight of zero are not used for making predictions.

    • Why does Lasso set some weights to zero but not Ridge?

    This was the million-dollar question that frustrated me for quite some time. Here is the intuition. Ridge: even when λ is large, Ridge regression only shrinks the parameters, making them small but not exactly zero. Lasso: in many cases, with a high λ, Lasso sets parameters to exactly zero. While I can’t paste equations and images here, imagine a graphical illustration where the penalty shapes differ: Ridge’s penalty forms a circle, while Lasso’s forms a diamond. The diamond’s corners make it more likely for the optimization to land on an axis (i.e., setting a parameter to zero), whereas the circular shape of Ridge doesn’t encourage exact zeros.

    If you are interested in understanding the full beauty and intuition behind regularization, Lasso, and Ridge regression, then this video I just published on Vizuara's YouTube channel is for you. Enjoy, and I am sure you will appreciate these concepts as much as I do! https://lnkd.in/gU2s5vqW
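
    To see the "Lasso zeroes out weights, Ridge only shrinks them" behaviour numerically, here is a quick sketch (my own example, assuming scikit-learn and a synthetic dataset, not the material from the video; scikit-learn's alpha plays the role of λ):

    # Fit Ridge and Lasso with the same regularization strength and count
    # how many coefficients end up exactly zero.
    import numpy as np
    from sklearn.datasets import make_regression
    from sklearn.linear_model import Lasso, Ridge

    # 50 features, but only 5 actually carry signal.
    X, y = make_regression(n_samples=200, n_features=50, n_informative=5,
                           noise=10.0, random_state=0)

    ridge = Ridge(alpha=10.0).fit(X, y)
    lasso = Lasso(alpha=10.0).fit(X, y)

    print("Ridge zero coefficients:", np.sum(ridge.coef_ == 0))  # typically 0
    print("Lasso zero coefficients:", np.sum(lasso.coef_ == 0))  # typically many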

  • Introducing our first-ever in-person SciML 2-day bootcamp

    Vizuara is conducting a 2-day (in-person) SciML bootcamp at Baner, Pune on March 8th (Sat) and 9th (Sun). Many students and professionals from non-CS fields wish to have a pathway to transition to ML but do not know where to start. From our experience, Scientific ML is a great way to get your foot in the door. We have tested this again and again with great success. This in-person event brings the most enthusiastic folks under one roof for a rigorous 2-day SciML learning + implementation bootcamp.

    The founders of Vizuara - Raj, Rajat, and Sreedath - will be conducting the event. If you are interested, please fill out this interest form: https://lnkd.in/gUuT6Mxt. Details about the exact location of the event in Baner will be shared with the attendees via email.

    ✰ Price (including taxes): Rs. 7499/-
    ✰ Maximum number of attendees: 50
    ✰ Deadline: March 1st, 2025 (11:50 pm IST)
    ✰ Refreshments (not lunch) will be provided
    ✰ Travel & accommodation should be arranged by the attendees
    ✰ On both days, the bootcamp will run from 9:00 am to 5:00 pm with breaks in between
    ✰ Bring your laptop, charger, and an extension cable (might be needed)
    ✰ For questions, reach out to hello@vizuara.com

    Please note: filling out this form does not automatically make you eligible for registration. Once we receive your interest, our team will reach out to you via email to check if you are a good fit for this bootcamp.

