Effector: A Python-based Machine Learning Library Dedicated to Regional Feature Effects https://lnkd.in/dJcrGm4y

Global feature effect methods such as Partial Dependence Plots (PDP) and SHAP Dependence Plots are commonly used to explain black-box models. However, they fall short when the model exhibits feature interactions or when local effects are heterogeneous, which can lead to misleading interpretations. Effector addresses these limitations by providing regional feature effect methods, which matter especially in high-stakes domains like healthcare and finance.

Key Features
Effector partitions the input space into subspaces and provides a regional explanation within each, reducing aggregation bias and increasing the interpretability and trustworthiness of machine learning models. It offers a comprehensive range of global and regional effect methods, including PDP, derivative-PDP, Accumulated Local Effects (ALE), Robust and Heterogeneity-aware ALE (RHALE), and SHAP Dependence Plots. The library's modular design allows easy integration of new methods and ensures adaptability to emerging research in the field of XAI.

Practical Applications
Effector's performance has been evaluated on both synthetic and real datasets, revealing patterns that were not apparent with global effect methods alone. Its accessibility and ease of use make it a valuable tool for researchers and practitioners in machine learning. Effector's extensible design encourages collaboration and innovation, allowing researchers to experiment with novel methods and compare them with existing approaches.

Value Proposition
Effector offers a promising solution to the challenges of explainability in machine learning models. It makes black-box models easier to understand and more reliable by providing regional explanations that account for heterogeneity and feature interactions. This ultimately speeds up the development and deployment of AI systems in real-world settings.

If you want to evolve your company with AI and stay competitive, use Effector to redefine your way of working. Connect with us at hello@itinai.com for AI KPI management advice, and stay tuned on our Telegram t.me/itinainews or Twitter @itinaicom for continuous insights into leveraging AI.

Practical AI Solution
Consider the AI Sales Bot from itinai.com/aisalesbot, designed to automate customer engagement 24/7 and manage interactions across all customer journey stages.

List of Useful Links:
AI Lab in Telegram @aiscrumbot – free consultation
Effector: A Python-based Machine Learning Library Dedicated to Regional Feature Effects
MarkTechPost
Twitter – @itinaicom

#artificialintelligence #ai #machinelearning #technology #datascience #python #deeplearning #programming #tech #roboti...
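To make the global-vs-regional distinction concrete, here is a hand-rolled partial dependence sketch in plain Python. This is illustrative only, not Effector's actual API; the function and toy model below are made up for the example:

```python
def partial_dependence(model, data, feature_idx, grid):
    """Average model prediction over `data` while pinning one feature.

    model: callable taking a list of feature values, returning a float
    data: list of feature-value lists
    feature_idx: index of the feature of interest
    grid: values at which to evaluate the effect
    """
    effects = []
    for value in grid:
        preds = []
        for row in data:
            modified = list(row)
            modified[feature_idx] = value  # pin the feature of interest
            preds.append(model(modified))
        effects.append(sum(preds) / len(preds))
    return effects

# A toy model with an interaction between x0 and x1: the global PDP of x0
# averages the interaction away, which is exactly the heterogeneity that
# regional methods try to surface.
model = lambda x: x[0] * (1.0 if x[1] > 0 else -1.0)
data = [[0.0, 1.0], [0.0, -1.0]]
print(partial_dependence(model, data, 0, [0.0, 1.0, 2.0]))  # → [0.0, 0.0, 0.0]
```

The global PDP is flat at zero even though x0 has a strong effect in each half of the data; a regional method would instead split on the sign of x1 and report two opposite-sloped effects rather than a misleading average.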
Andrew Smith’s Post
Mastering O-RAN RIC: Key Considerations for AI/ML Model Selection

When developing an application for a specific use case on O-RAN's RIC platform, keep two key points in mind.

◼ First, select the appropriate AI/ML model. Your choice between a simple model and a more sophisticated one, like deep learning, should depend on your goals and the specific environment (refer to the table below). Additionally, you can leverage AutoML tools. AutoML explores a variety of machine learning algorithms and model architectures, employing techniques such as Bayesian optimization, genetic algorithms, or reinforcement learning to identify the top-performing models.

Example scenarios:

- Scenario: High Variability and Complexity
  Data: Large, sequential datasets with non-linear patterns.
  Resources: Ample computational power and storage.
  Requirements: Real-time adaptation and low-latency inference.
  Choice: Deep Reinforcement Learning (DRL) with RNN/LSTM. This setup can learn from real-time data and adjust policies dynamically, handling complex patterns in user behavior and traffic conditions.

- Scenario: Stable and Predictable Environment
  Data: Moderate-sized, non-sequential datasets with linear patterns.
  Resources: Limited computational power.
  Requirements: High interpretability and quick implementation.
  Choice: K-Means Clustering or Logistic Regression. These models are easy to implement, require fewer resources, and provide clear insights into user behaviors and traffic patterns.

By carefully weighing these criteria and conditions, you can select the most appropriate machine learning model for your specific O-RAN network use case, such as traffic pattern recognition.

◼ Second, remember that what you are developing is an independent piece of software, comprising an algorithm and one or more AI/ML models. While you can utilize modules and RIC SDK libraries from xApps and rApps, you have the flexibility to use different types of models in your software. For instance, it's common to have multiple models in your software: for example, a time-series model for traffic prediction and a reinforcement learning model (or policy) to make decisions based on those predictions.

The key advantage of an O-RAN ALLIANCE RAN Intelligent Controller (RIC) is the ability to efficiently manage and optimize the radio network. By leveraging open interfaces and r/xApps, O-RAN facilitates the implementation of customized algorithms for specific applications. The O-RAN ALLIANCE outlines various use cases and establishes a policy framework to control the algorithms supporting these use cases. The whitepaper by Rimedo Labs provides an overview of the Traffic Steering use case within the O-RAN framework. It details the use case requirements, the operation of O-RAN nodes with specified interfaces, and the scenario as outlined in the O-RAN ALLIANCE specifications.

#ORAN #OpenRAN #ORANRIC #TrafficSteering #AutoML #ModelSelectionCriteria #RICUsecases
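The scenario criteria above can be condensed into a rule-of-thumb selector. The sketch below is plain Python; the `UseCase` fields and the decision rules are my own illustrative assumptions based on the two scenarios in the post, not part of any O-RAN specification:

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    sequential_data: bool        # time-ordered samples (e.g. traffic traces)?
    non_linear_patterns: bool    # complex, non-linear structure in the data?
    ample_compute: bool          # GPU/storage headroom on the RIC platform?
    needs_interpretability: bool

def suggest_model(uc: UseCase) -> str:
    """Map the scenario criteria from the post to a model family."""
    if uc.sequential_data and uc.non_linear_patterns and uc.ample_compute:
        return "DRL with RNN/LSTM"
    if uc.needs_interpretability or not uc.ample_compute:
        return "K-Means or Logistic Regression"
    return "Gradient-boosted trees"  # illustrative middle ground

high_variability = UseCase(True, True, True, False)
stable_env = UseCase(False, False, False, True)
print(suggest_model(high_variability))  # → DRL with RNN/LSTM
print(suggest_model(stable_env))        # → K-Means or Logistic Regression
```

In practice this kind of triage would be the starting point before handing candidates to an AutoML search, not a replacement for it.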
Guide to Learning AI Agents

Want to build AI Agents? Here's a step-by-step, practical approach to mastering the essentials:

Level 1: Foundations in GenAI and RAG

1. Understand Generative AI
- What to Do: Watch introductory videos or read articles explaining Generative AI applications and concepts.
- Hands-On: Explore tools like ChatGPT or DALL-E to understand AI capabilities.

2. Learn LLM Basics
- What to Do: Study transformer architecture and attention mechanisms.
- Hands-On: Use OpenAI, Hugging Face, or other LLM libraries to explore embeddings and tokenization.

3. Practice Prompt Engineering
- What to Do: Experiment with zero-shot and few-shot prompts.
- Hands-On: Test prompts in platforms like GPT-4 or Google Bard to optimize outputs.

4. Work on Data Preparation
- What to Do: Learn basic preprocessing techniques like cleaning, tokenization, and normalization.
- Hands-On: Process small datasets for training/inference using Python libraries like Pandas and NLTK.

5. Use API Wrappers
- What to Do: Learn API basics (REST/GraphQL).
- Hands-On: Make simple calls to AI APIs (OpenAI or Cohere) to automate tasks.

6. Explore RAG Basics
- What to Do: Understand embedding-based search and vector databases.
- Hands-On: Use ChromaDB or Milvus to create simple search workflows.

Level 2: Building and Scaling AI Agents

1. Start with AI Agents
- What to Do: Learn how agents interact with their environment.
- Hands-On: Study simple workflows in tools like LangChain or LangFlow.

2. Experiment with Agent Frameworks
- What to Do: Explore LangChain's building blocks (Chains, Memory, Tools).
- Hands-On: Build an agent that can perform a simple search task.

3. Create Your First AI Agent
- What to Do: Combine LLM APIs with basic workflows.
- Hands-On: Create a chatbot or task automation agent using Python and LangChain.

4. Design Efficient Workflows
- What to Do: Learn to split tasks into logical steps and handle errors.
- Hands-On: Build a multi-step agent (e.g., data retrieval → processing → response).

5. Implement Agent Memory
- What to Do: Study short-term and long-term memory systems.
- Hands-On: Integrate vector databases (like Pinecone) to store/retrieve context.

6. Evaluate Your Agent
- What to Do: Learn metrics like accuracy and response time.
- Hands-On: Test and improve your agent using sample datasets.

7. Enable Multi-Agent Collaboration
- What to Do: Understand communication protocols and dependencies.
- Hands-On: Create agents that share data and work together on a task.

8. Use Agentic RAG
- What to Do: Learn context handling and feedback loops.
- Hands-On: Build pipelines that integrate memory, search, and learning for real-world scenarios.

Build Something Awesome!

By following this roadmap, you'll gain the skills to create powerful AI Agents. Start small, experiment often, and scale up your projects as you grow. 🚀

GIF Credit: Rakesh Gohel
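The retrieval → processing → response pattern from the workflow step can be sketched without any LLM at all; in the toy below the "knowledge base" is a dict and every name is made up for the illustration:

```python
from typing import Optional

# A minimal retrieval → processing → response pipeline, including the
# error-handling the roadmap recommends. No LLM involved.
KNOWLEDGE_BASE = {
    "pdp": "Partial Dependence Plots average model output over the data.",
    "rag": "RAG retrieves relevant documents before generating an answer.",
}

def retrieve(query: str) -> Optional[str]:
    """Step 1: fetch raw context for the query."""
    return KNOWLEDGE_BASE.get(query.strip().lower())

def process(context: str) -> str:
    """Step 2: transform the raw context (here: just normalize whitespace)."""
    return " ".join(context.split())

def respond(query: str) -> str:
    """Step 3: assemble the final answer, handling the failure path."""
    context = retrieve(query)
    if context is None:
        return f"Sorry, I have nothing on '{query}'."
    return f"{query.upper()}: {process(context)}"

print(respond("rag"))
```

A framework like LangChain wires up the same three stages with real retrievers and LLM calls, but the control flow is the part worth internalizing first.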
5 Pillars for a Hyper-Optimized AI Workflow: An introduction to a methodology for creating production-ready, extensible & highly optimized AI workflows

Credit: Google Gemini, prompt by the Author

Intro

In the last decade, I carried with me a deep question in the back of my mind in every project I've worked on: How (the hell) am I supposed to structure and develop my AI & ML projects? I wanted to know — is there an elegant way to build production-ready code in an iterative way? A codebase that is extensible, optimized, maintainable & reproducible? And if so — where does this secret lie? Who owns the knowledge of this dark art?

I searched intensively for an answer over the course of many years — reading articles, watching tutorials and trying out different methodologies and frameworks. But I couldn't find a satisfying answer. Every time I thought I was getting close to a solution, something was still missing. After about 10 years of trial and error, with a focused effort in the last two years, I think I've finally found a satisfying answer to my long-standing quest. This post is the beginning of my journey of sharing what I've found.

My research has led me to identify 5 key pillars that form the foundation of what I call a hyper-optimized AI workflow. In this post I will briefly introduce each of them — giving you an overview of what's to come. I want to emphasize that each of the pillars I present is grounded in practical methods and tools, which I'll elaborate on in future posts. If you're already curious to see them in action, feel free to check out this video from Hamilton's meetup where I present them live: https://lnkd.in/gxeN3nJR

Note: Throughout this post and series, I'll use the terms Artificial Intelligence (AI), Machine Learning (ML), and Data Science (DS) interchangeably. The concepts we'll discuss apply equally to all these fields. Now, let's explore each pillar.

1 — Metric-Based Optimization

In every AI project there is a certain goal we want to achieve, and ideally — a set of metrics we want to optimize. These metrics can include:

* Predictive quality metrics: Accuracy, F1-Score, Recall, Precision, etc…
* Cost metrics: Actual $ amount, FLOPS, Size in MB, etc…
* Performance metrics: Training speed, inference speed, etc…

We can choose one metric as our "north star" or create an aggregate metric. For example:

* 0.7 × F1-Score + 0.3 × (1 / Inference Time in ms)
* 0.6 × AUC-ROC + 0.2 × (1 / Training Time in hours) + 0.2 × (1 / Cloud Compute Cost in $)

There's a wonderful short video by Andrew Ng where he explains the idea of a Single Number Evaluation Metric. Once we have an agreed-upon metric to optimize and a set of constraints to meet, our goal is to build a workflow that maximizes this metric while satisfying our constraints.

2 — Interactive… #MachineLearning #ArtificialIntelligence #DataScience
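The first aggregate-metric example above can be written directly as a small scoring function (a sketch: the weights come from the post, the function name is mine):

```python
def aggregate_score(f1: float, inference_ms: float) -> float:
    """0.7 × F1-Score + 0.3 × (1 / Inference Time in ms), as in the post.

    Note the unit sensitivity: the 1/ms term shrinks quickly, so in
    practice you would normalize each component to a comparable range
    before combining them.
    """
    return 0.7 * f1 + 0.3 * (1.0 / inference_ms)

# A model with slightly lower F1 but much faster inference can win:
print(aggregate_score(f1=0.90, inference_ms=50.0))  # ≈ 0.636
print(aggregate_score(f1=0.88, inference_ms=2.0))   # ≈ 0.766
```

Having the metric as one callable function is what makes "maximize this metric subject to constraints" an automatable step rather than a judgment call repeated in every experiment.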
5 Pillars for a Hyper-Optimized AI Workflow
towardsdatascience.com
What I'd do to get up to speed on Generative AI and LLMs in under 2 hours... Lessons learned.

As I create my new AI course for data scientists, I took a step back this week and asked myself, "How would I get up to speed with AI in under 2 hours?"

There are 3 challenges that I faced (and you'll face too):

1. AI has a ton of terminology: RAG, prompt engineering, vector databases, embedding models, document loaders, LLM frameworks, tools and toolkits, agents, etc.
2. The AI ecosystem is growing fast: all of the tech like Hugging Face, LangChain, Llama 3, Ollama, Mistral, Transformers, and I could go on and on.
3. Applying AI is different than learning AI: the stuff I'm reading up on is way different when you go to apply it. And it's easy to get confused.

So I spent a lot of time pondering these 3 challenges. And I came up with a solution.

Answer: The AI Fast Track Module

In the first part of my course, I'm very focused on speed of building. My time limit is 2 hours. It's broken down like this:

1. Get up to speed on 80/20 AI concepts and terminology (36 minutes). I have a 36-slide deck that breaks down everything you need to know. Literally months of research went into this.
2. Get the AI Python stack installed (30 minutes). This includes libraries to build AI (LangChain, Transformers, etc.) and to interface with common documents like PDF, Quarto, etc.
3. Build your first business AI app (30 minutes). You'll take Nike's Q3 FY24 Earnings Call Transcript and learn how to summarize the 29-page document into a 1-page summary with an automated Streamlit app.
4. 1st Challenge (20 minutes). You'll extend your learnings on prompt engineering basics to produce an earnings call summary report automation app.
5. Bonus App (10 minutes). You'll extend your challenge solution by integrating Quarto to convert the summary (markdown) to PDF and save it to the user's Downloads folder.

This solves the challenges by giving you 80/20 concepts and getting you to apply a subset of them in under 120 minutes. It's fast.

Most importantly, this sets your foundations for the 3 advanced projects, which comprise the core curriculum:

1. Project #1: AI Marketing Strategy Expert Assistant App with RAG and a Vector Database.
2. Project #2: AI Customer Analyst that integrates SQL and Pandas tools, and can present data visualizations, tables, and business insights.
3. Project #3: Advanced AI Agents. Combine Marketing Strategy, Customer Analytics, and Project Management to collect insights and solve larger business problems.

Why am I teaching you these 3 AI projects? So you can become the AI Expert for your organization.

👉 Register for the AI course waitlist and live launch event (seats are limited to 500): https://lnkd.in/ePcscx4k
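Summarizing a 29-page transcript with an LLM usually means splitting it into prompt-sized chunks first. A minimal chunker might look like this (the function name and sizes are illustrative assumptions, not material from the course):

```python
def chunk_text(text: str, max_chars: int = 2000, overlap: int = 200) -> list:
    """Split `text` into overlapping chunks that fit an LLM context budget.

    Overlap keeps sentences that straddle a boundary visible to both
    neighboring chunks, which helps a map-reduce style summarization
    keep continuity.
    """
    if overlap >= max_chars:
        raise ValueError("overlap must be smaller than max_chars")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + max_chars])
        start += max_chars - overlap
    return chunks

transcript = "word " * 1000          # stand-in for an earnings call transcript
pieces = chunk_text(transcript, max_chars=1200, overlap=100)
print(len(pieces), len(pieces[0]))   # → 5 1200
```

In a map-reduce summary, each chunk is summarized independently and the partial summaries are then summarized once more into the final one-pager.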
🚀 Day 8 Powering Through the AI/ML Quest! 🚀

Welcome to Day 8 of my journey to explore and share the wonders of Artificial Intelligence and Machine Learning. Each day, I'll dive into a new topic, from foundational concepts to intermediate-level insights, aiming to demystify AI/ML for everyone.

🚀 Day 8: Essential AI/ML Tools and Libraries – Powering the Future of AI 🚀

In the world of Artificial Intelligence (AI) and Machine Learning (ML), having the right tools and libraries is crucial for developing and deploying effective models.

Machine Learning Libraries

A library is a collection of pre-written code that users can use to optimize tasks. In machine learning, libraries provide specific functionalities that you can integrate into your code without having to write everything from scratch. Libraries are typically focused on one area of functionality, such as data manipulation, mathematical operations, or specific algorithms. They are used to implement specific machine learning algorithms or to perform specific tasks within a larger project, and are typically more lightweight and focused than frameworks.

Machine Learning Frameworks

A framework is a comprehensive, often larger, collection of libraries and tools that provides a standardized way to build and deploy machine learning models. Frameworks typically offer a higher level of abstraction and may include everything from data preprocessing and model building to training, testing, and deployment. Frameworks provide the scaffolding, or structure, for building machine learning models. They help standardize the development process and are typically more extensive than libraries, offering a full-stack solution for machine learning projects.

Machine Learning Tools

A tool is a software application or platform that supports specific aspects of the machine learning process. Tools can encompass a variety of functions, from data labeling and model evaluation to deployment and monitoring. They often include user interfaces, APIs, or command-line interfaces to make tasks more accessible. Tools are used to facilitate specific tasks or workflows within a machine learning project, and are often designed to integrate with libraries and frameworks to enhance productivity and manage different stages of the machine learning lifecycle.

This comprehensive list of AI/ML tools and libraries covers the essential resources needed across various stages of AI and machine learning projects, from data processing and model training to deployment and optimization.

👨‍💻 About Me

Computer Engineering graduate with experience in AI application development and several innovative projects. I'm excited to share my knowledge and insights on AI/ML with you over the next 30 days!

📫 Let's connect:
LinkedIn: Shreyas Jadhav
GitHub: Shreyas-jdv

#ArtificialIntelligence #MachineLearning #AI #ML #TensorFlow #PyTorch #DataScience #TechInnovation #LearningJourney #AIForAll
Using Generative AI to Automatically Create a Video Talk from an Article

Using Gemini + Text-to-Speech + MoviePy to create a video, and what this says about what GenAI is rapidly becoming useful for.

Like most everyone, I was flabbergasted by NotebookLM and its ability to generate a podcast from a set of documents. And then, I got to thinking: "how do they do that, and where can I get some of that magic?" How easy would it be to replicate?

Goal: Create a video talk from an article

I don't want to create a podcast, but I've often wished I could generate slides and a video talk from my blog posts — some people prefer paging through slides, others prefer to watch videos, and this would be a good way to meet them where they are. In this article, I'll show you how to do this. The full code for this article is on GitHub — in case you want to follow along with me. And the goal is to create this video from this article:

1. Initialize the LLM

I am going to use Google Gemini Flash because (a) it is the least expensive frontier LLM today, (b) it's multimodal in that it can read and understand images also, and (c) it supports controlled generation, meaning that we can make sure the output of the LLM matches a desired structure.

import pdfkit
import os
import google.generativeai as genai
from dotenv import load_dotenv

load_dotenv("../genai_agents/keys.env")
genai.configure(api_key=os.environ["GOOGLE_API_KEY"])

Note that I'm using Google Generative AI and not Google Cloud Vertex AI. The two packages are different. The Google one supports Pydantic objects for controlled generation; the Vertex AI one only supports JSON for now.

2. Get a PDF of the article

I used Python to download the article as a PDF and upload it to a temporary storage location that Gemini can read:

ARTICLE_URL = "https://lakshmanok.medium...."
pdfkit.from_url(ARTICLE_URL, "article.pdf")
pdf_file = genai.upload_file("article.pdf")

Unfortunately, something about Medium prevents pdfkit from getting the images in the article (perhaps because they are webm and not png…). So, my slides are going to be based on just the text of the article and not the images.

3. Create lecture notes in JSON

Here, the data format I want is a set of slides, each of which has a title, key points, and a set of lecture notes. The lecture as a whole also has a title and an attribution:

from typing import List
from pydantic import BaseModel

class Slide(BaseModel):
    title: str
    key_points: List[str]
    lecture_notes: str

class Lecture(BaseModel):
    slides: List[Slide]
    lecture_title: str
    based_on_article_by: str

Let's tell Gemini what we want it to do:

lecture_prompt = """
You are a university professor who needs to create a lecture to a class of undergraduate students.

* Create a 10-slide lecture based on the following article.
* Each slide should contain the following information:
  - title: a single sentence that summarizes the main point
  - key_points: a list of between 2 and 5 bullet...
towardsdatascience.com
How to Effectively Apply AI/ML Skills in Real-World Projects (Even as a Beginner)

The truth is, the gap between theory and practice can feel overwhelming—especially when it comes to AI and machine learning. But here's the good news: applying what you've learned isn't as difficult as you think. In fact, you can start today—even if you're a beginner.

Imagine building a project that not only showcases your skills but also aligns with your passions or expertise. Whether it's predicting stock prices, automating a personal task, or creating an AI model based on your cultural background—AI/ML projects can be deeply meaningful, not just technical. Here's how to make the leap:

1. Start with a Small Problem
Choose a project that excites you, but keep it simple at first. Think about something you care about: analyzing social media trends, building a chatbot, or creating an image classifier. Don't worry about making it perfect—just focus on finishing.

2. Use Familiar Tools
Start with user-friendly platforms like Google Colab, which lets you run Python code without needing to set up complex environments. Use pre-built libraries like PyTorch, Keras, or Scikit-learn to ease the coding process.

3. Break It Down into Steps
Simplify the project into smaller tasks—data collection, preprocessing, model selection, training, and evaluation. Tackle one piece at a time. Progress, even if slow, is progress.

4. Look for Real-World Datasets
Websites like Kaggle offer real-world datasets, which can help you gain practical experience. Choose data that resonates with your interests or background. For example, if you're into healthcare, work on medical datasets; if you're a finance enthusiast, dive into stock data.

5. Iterate and Improve
The first version of your project won't be perfect—and that's okay. Focus on getting a result, then iterate. Fine-tune your models, tweak your data, and improve with every round.

6. Showcase Your Work
Share your project on GitHub or write about it on a blog or LinkedIn. You'll not only build a portfolio but also open doors for feedback and networking with AI/ML communities.

Final tip: the magic happens when you align your AI/ML skills with a problem you care about. That's when the real-world application becomes meaningful, not just technical.

Start small. Stay consistent. Build meaningful AI/ML projects that matter to you—and the world will notice.

---

I help AI/ML enthusiasts bridge the gap between technical skills and personal fulfillment. By simplifying complex topics and integrating your cultural background, experience, and passions, I guide you to create meaningful projects that truly make an impact.

#data #MachineLearning #DeepLearning #AI #DataScience
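The "break it down into steps" advice can be made concrete with a deliberately tiny end-to-end pipeline. Everything below is toy data and a one-parameter threshold "model", chosen so the whole collect → preprocess → train → evaluate loop fits in plain Python:

```python
# Toy end-to-end pipeline: collect → preprocess → train → evaluate.

# 1. Data collection (hard-coded stand-in for a downloaded dataset)
raw = [(" 1.2", 0), ("3.4 ", 1), ("2.9", 1), ("0.8", 0), ("3.1", 1), ("1.0", 0)]

# 2. Preprocessing: parse the strings and strip stray whitespace
data = [(float(x.strip()), y) for x, y in raw]

# 3./4. Model selection + training: pick the threshold with the best
# training accuracy (a one-parameter "model", no ML library needed)
def accuracy(threshold, samples):
    return sum((x > threshold) == bool(y) for x, y in samples) / len(samples)

candidates = [x for x, _ in data]
best = max(candidates, key=lambda t: accuracy(t, data))

# 5. Evaluation (on training data here; use a held-out split in practice)
print(f"threshold={best}, accuracy={accuracy(best, data):.2f}")
```

Swapping the threshold search for a Scikit-learn or PyTorch model changes only step 3/4; the surrounding structure of the project stays the same, which is exactly why breaking work into these stages pays off.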
So, I am curious! Chris Dowsett's AI-driven ML experiment demonstrates a shift: he built a working retention model in under an hour using only AI-generated code. While impressive, his key insight was that technical expertise remained crucial - not for writing code, but for guiding the AI effectively. This validates my deja reslou framework's core premise: we're moving from writing novel code to intelligently remixing established patterns. However, this requires a new skill set combining domain knowledge with prompt engineering.

So I wonder: has every possible code pattern already been written and stored within LLMs' knowledge (for Anthropic's Claude Sonnet 3.5, as of June 2024)?

Read more about Chris's experiment https://lnkd.in/eDcVJj3z and the pattern recognition revolution https://lnkd.in/e7JPWmrt

#datascience #AI #Machinelearning #promptengineering
I Built an ML Model with AI — No Human Code Required
chrisdowsett.medium.com