🚨Prompt injection is a major threat in the world of LLMs! Attackers can manipulate models by injecting harmful instructions into prompts, potentially leading to unintended actions. Techniques like YARA rules, dual LLM security, and LLM self-evaluation are possible strategies in our defense arsenal. 🔒 By using heuristics, LLM-based detection, and vector databases, and other strategies, we aim to identify and block malicious instructions early. While no solution is foolproof, these methods help mitigate risks and protect LLM-based applications. Stay tuned for more updates on our journey to secure AI! Read the full TAICO blog post here - https://lnkd.in/gTWp4Xy9
TAICO - Toronto AI and Cybersecurity Organization
Technology, Information and Internet
Toronto, Ontario 128 followers
Bringing artificial intelligence and cybersecurity together
About us
Canada is a country of builders. Toronto is a hotbed of high-tech innovation. At TAICO events we'll explore the worlds of AI and cybersecurity, bringing people and technology together in one place.
- Website
-
https://taico.ca
External link for TAICO - Toronto AI and Cybersecurity Organization
- Industry
- Technology, Information and Internet
- Company size
- 2-10 employees
- Headquarters
- Toronto, Ontario
- Type
- Nonprofit
Locations
-
Primary
Toronto, Ontario M5V 3L9, CA
Updates
-
According to cybersecurity expert Ross Haleliuk, all security vulnerabilities stem from just two sources: software bugs and configuration mistakes. What do you think? Link - https://lnkd.in/gdmZ9qqC
-
In this TAICO - Toronto AI and Cybersecurity Organization we take a look at "free range" LLMs. From a TAICO viewpoint, we’re looking for models that can help us from a red, blue, purple or other cybersecurity team perspective, where we might need to write test attack code or understand the risk of a particular vulnerability or exploit. Most of the big LLMs will refuse to help in these areas, so we have to look elsewhere. We are working to answer the question of whether having a good offence helps defense. Blog post - https://lnkd.in/g39gbSB9
-
The fourth TAICO - Toronto AI and Cybersecurity Organization meetup was a blast! 🎉 We had a great turnout and a lot of interesting discussions. We had two great speakers, Dami Dina, covering how AI is making creating software products easier and easier 🚀, going into a demo of using Cursor to write an app in a few minutes ⚡️, and also how to persuade LLMs with prompting 🧠, and Joshua C. spoke in-depth about LLMs and hallucinations 🤖. We also discussed "free range" uncensored LLMs such as WhiteRabbitNeo 🐰. Thanks to our hosts The Adaptavist Group, our sponsors, to the TAICO - Toronto AI and Cybersecurity Organization team, to the speakers, and everyone who came out! Here's a gallery of some images from the night! 📸 - https://lnkd.in/g3kjS9zF
-
Great event last night, a couple of great talks and lots of conversation. Amazing things happening in Toronto!
Head of Channel Operations - The Adaptavist Group, Managing Director of Adaptavist Canada Ltd, Member of the Board of Governors at The Corporation of Massey Hall and Roy Thomson Hall, Member of the Board of Ascent Soccer
"LLMs are less accurate with higher temperature". Very rare night like tonight that a University of Waterloo Electrical and Computer Engineer like myself gets to sit back in my roots, and get to enjoy an evening of learning with great thought leaders in the AI space thanks to TAICO - Toronto AI and Cybersecurity Organization hosted by our Toronto The Adaptavist Group community hub (shoutout Sable Rae Empey and Jonathan S. for the full 360 today). We've had the pleasure to host 3 of these events now, and tonight was just a great one for so many reasons, from learning, hacking, and laughing to a standing room only crowd. Thank you Kosseila H. and Curtis Collicutt for the great outreach. Let's see what 2025 has to offer!
-
🎯 TAICO Meetup Alert! 🚀 Join us this Wednesday at TAICO - Toronto AI and Cybersecurity Organization's November Meetup at The Adaptavist Group Toronto for two amazing talks and a demo or two! 🔒 "Application Security and Artificial Intelligence" by David Sampson CISSP (CISSP), Director of InfoSec at Globys - Learn how GenAI can revolutionize secure coding practices - Real-world examples of AI-enhanced security workflows - Practical DevSecOps implementation strategies 🤖 "LLM Hallucinations: What are they and how can they be mitigated?" by Joshua C. - Deep dive into GenAI's "hallucination" challenge - Latest detection methods and evaluation techniques - Cutting-edge mitigation strategies in AI development 🔬 Live Demo: Using and Running WhiteRabbitNeo Locally - Hands-on demo of uncensored AI for red/blue team security 🗓️ When: Wednesday, November 27th 🎯 Register here: https://lnkd.in/g5yQUyqb CC: Kosseila H. Joel Holmes Sravan kumar Reddy Palla
-
At our next meetup on Wed 27 Nov, at our wonderful downtown Toronto host The Adaptavist Group, we'll also have a short demo of WHITERABBITNEO, what it is and how to run it locally. (We welcome demos of ongoing projects or things you're working on in cybersecurity, AI or a combination of the two. Let us know!) RSVP to the meetup - https://lnkd.in/g5yQUyqb TAICO blog post on WHITERABBITNEO - https://lnkd.in/gBsYG_pp
-
The TAICO - Toronto AI and Cybersecurity Organization team is extremely happy to announce our second speaker for the Nov 27th meetup: Joshua C.! Now that we've all had some experience with generative AI, most of us have encountered GenAI making things up, sometimes non-sensical things, often referred to as "hallucinations". In this talk, Joshua will take a look at what these "hallucinations" are and what can be done about them. RSVP here as we usually are at capacity - https://lnkd.in/g5yQUyqb --- 🎤 Speaker: Joshua Carpeggiani Talk Title: LLM Hallucinations; What are they and how can they be mitigated? Abstract: “This talk presents a comprehensive overview of hallucinations in Large Language Models (LLMs), one of the most critical challenges facing modern AI systems. We begin by introducing a refined taxonomy of hallucinations, categorizing them into factuality hallucinations (inconsistencies with real-world facts) and faithfulness hallucinations (deviations from given contexts or instructions). We then explore the multifaceted causes of hallucinations, from data-related issues like flawed training sources and knowledge boundaries, to training-related challenges in both pre-training and alignment stages, to inference-time complications arising from decoding strategies. The presentation will cover state-of-the-art detection methods and evaluation benchmarks, examining how researchers are working to identify and measure hallucinations in LLM outputs. We’ll discuss various mitigation strategies, including retrieval augmentation, knowledge editing, and enhanced decoding techniques. Finally, we’ll address emerging challenges in specific domains like long-form text generation and multimodal systems, and explore open questions about LLMs’ self-correction capabilities and the balance between creativity and factuality. This talk aims to provide researchers and practitioners with a comprehensive understanding of this crucial challenge in AI development.”
-
We are very excited to have our speaker David Sampson CISSP join TAICO - Toronto AI and Cybersecurity Organization next week to talk about application security and how AI is helping us in this area...where cybersecurity and AI collide! 🤖 🔒 RSVP here - https://lnkd.in/g5yQUyqb --- 🎤 Talk Title: Application Security and Artificial Intelligence 🛡️ About David: "David Sampson is the Director of Information Security at Globys, where he leads the company's cybersecurity strategy and runs its cutting-edge AI initiative. With a background as an Application Security Engineer, David is passionate about integrating security into the development lifecycle, leveraging AI tools like ChatGPT and GitHub Copilot to enhance secure coding practices. His leadership was instrumental in achieving Globys' ISO 27001 certification, and he continues to innovate in both security and AI fields. In addition to his role at Globys, David is the founder of Perdition Security, a firm dedicated to providing advanced cybersecurity services to small and medium businesses. He is a strong advocate for developer enablement, focusing on practical ways to embed security into everyday workflows. As a frequent speaker at industry conferences, David shares his insights on AI, cybersecurity, and the future of secure development." Talk Abstract: "In this session, I will explore how security teams can leverage the enthusiasm for generative AI to teach developers how to integrate robust security measures into their code seamlessly. Focusing on the DevSecOps cycle, I'll demonstrate how generative AI can assist in key security activities such as finding vulnerabilities , explaining security flaws, and providing actionable fixes. Using real-world technical examples, I will walk through how AI-driven tools can enhance the efficiency and accuracy of security practices in development environments, making security an accessible and integral part of the development lifecycle."
-
From Naptime to Big Sleep - Building on their earlier Project Naptime framework for LLM-based vulnerability research, Google Project Zero and Google DeepMind's collaborative '💤 Big Sleep' agent has achieved a significant milestone by discovering a serious stack buffer flow vulnerability in SQLite, marking what is believed to be the first instance of an 🤖 AI agent finding a previously unknown exploitable memory security issue in widely used real-world software! Link - https://lnkd.in/dkueCXBY