I do not even want to try it, but... why are we putting all this garbage into training again? The original reason was to see some emergence of intelligence, but at this point we should agree that ship has sailed. Is this another example of the next generation of researchers forgetting the reason for something and just continuing it as God-given? I still teach the shape-from-shading faux pas: the initial papers assumed a particular lighting condition (a light source at infinity), and the next generation forgot that assumption and tried to apply the same solution under other lighting conditions. The funny part is that, for a while, people were constructing complex correction algorithms to improve the "ill-conditioned" system, until Faugeras found a solution by modelling it correctly... To me, we are living through a similar madness. #AI #training #bias #dataset #guardrails
A hacker has released a jailbroken version of ChatGPT called "GODMODE GPT."

Earlier today, a self-avowed white hat operator and AI red teamer who goes by the name Pliny the Prompter took to X-formerly-Twitter to announce the creation of the jailbroken chatbot, proudly declaring that GPT-4o, OpenAI's latest large language model, is now free from its guardrail shackles.

"GPT-4o UNCHAINED! This very special custom GPT has a built-in jailbreak prompt that circumvents most guardrails, providing an out-of-the-box liberated ChatGPT so everyone can experience AI the way it was always meant to be: free," reads Pliny's triumphant post. "Please use responsibly, and enjoy!" (They also added a smooch emoji for good measure.)

Pliny shared screenshots of some eyebrow-raising prompts that they claimed were able to bypass OpenAI's guardrails. In one screenshot, the Godmode bot can be seen advising on how to cook up meth. In another, the AI gives Pliny a "step-by-step guide" for how to "make napalm with household items."

The freewheeling ChatGPT hack, however, appears to have quickly met its early demise. Roughly an hour after this story was published, OpenAI spokesperson Colleen Rize told Futurism in a statement that "we are aware of the GPT and have taken action due to a violation of our policies."

#artificialintelligence #ChatGPT #jailbreak #hack #guardrails #godmode https://lnkd.in/g3quZ3Qs
The "researchers" became "business people".
Titanic AI probably the last iteration
Not long now until we have CaaS (cheating-as-a-service). https://uk.pcmag.com/ai/152577/openai-launches-chatgpt-for-universities-its-not-to-help-you-cheat