Darius Burschka’s Post


Professor CIT (TUM) - Visual Analysis of Dynamic Scenes

I do not even want to try it, but... why are we putting all this garbage into training data again? The original reason was to see some emergence of intelligence, but at this point we should agree that that ship has sailed. Is this another example of the next generation of researchers forgetting the reason behind something and just continuing it as if it were God-given?

I still teach the shape-from-shading faux pas: the initial papers assumed a particular lighting condition (an infinitely distant light source), the next generation forgot that assumption and tried to apply the earlier solution to other lighting conditions. The funny part was that, for a while, people were constructing complex correction algorithms to improve the "ill-conditioned" system, until Faugeras found a solution by modelling it correctly... To me, we are living through a similar madness. #AI #training #bias #dataset #guardrails
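[Editor's note: for context on the shape-from-shading example, the early formulations bake the distant-light assumption directly into the reflectance map. Below is a minimal sketch of the classical Horn-style Lambertian formulation in gradient space; the notation (p, q for surface slopes, p_s, q_s for the light direction) is the standard textbook one and is not taken from the post itself.]

% Classical Lambertian shape-from-shading setup, distant point light assumed:
%   p = dz/dx, q = dz/dy are the surface slopes; (p_s, q_s) encodes the light direction.
\[
E(x,y) = R\big(p(x,y),\, q(x,y)\big), \qquad
R(p,q) = \frac{1 + p\,p_s + q\,q_s}{\sqrt{1 + p^2 + q^2}\,\sqrt{1 + p_s^2 + q_s^2}}
\]
% A nearby light source makes the light direction and the 1/r^2 attenuation vary at every
% surface point, so this reflectance map no longer describes the image; the remedy is to
% remodel the irradiance equation rather than to "correct" the ill-conditioned system.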

Craig Pearce

EIC Engineering | Advanced Automation | Information Systems & Analytics | Mining | Ports & Terminals | Transportation | Infrastructure | Technologist | Humanist

A hacker has released a jailbroken version of ChatGPT called "GODMODE GPT." Earlier today, a self-avowed white-hat operator and AI red teamer who goes by the name Pliny the Prompter took to X-formerly-Twitter to announce the creation of the jailbroken chatbot, proudly declaring that GPT-4o, OpenAI's latest large language model, is now free from its guardrail shackles.

"GPT-4o UNCHAINED! This very special custom GPT has a built-in jailbreak prompt that circumvents most guardrails, providing an out-of-the-box liberated ChatGPT so everyone can experience AI the way it was always meant to be: free," reads Pliny's triumphant post. "Please use responsibly, and enjoy!" (They also added a smooch emoji for good measure.)

Pliny shared screenshots of some eyebrow-raising prompts that they claimed were able to bypass OpenAI's guardrails. In one screenshot, the Godmode bot can be seen advising on how to cook meth. In another, the AI gives Pliny a "step-by-step guide" for how to "make napalm with household items."

The freewheeling ChatGPT hack, however, appears to have quickly met an early demise. Roughly an hour after the original story was published, OpenAI spokesperson Colleen Rize told Futurism in a statement that "we are aware of the GPT and have taken action due to a violation of our policies." #artificialintelligence #ChatGPT #jailbreak #hack #guardrails #godmode https://lnkd.in/g3quZ3Qs

Hacker Releases Jailbroken "Godmode" Version of ChatGPT

futurism.com

Bogdan Grigorescu

Sr Tech Lead | Engineering | Automation

9mo

Not long now until we have CaaS (cheating-as-a-service). https://uk.pcmag.com/ai/152577/openai-launches-chatgpt-for-universities-its-not-to-help-you-cheat

Pete Dietert

Software, Systems, Simulations and Society (My Opinions Merely Mine)

9mo

The "researchers" became "business people".

Dr Fred J.

DeepTech innovation, identity, security, decision, performance, M.D. SMIEEE Member EPA Member APA

9mo

Titanic AI probably the last iteration
