ChatGPT, the AI-powered chatbot by OpenAI, has taken the internet by storm. While it can answer almost any question, it has its drawbacks, and its creators have restricted it from answering certain types of queries. Reddit users have now jailbroken ChatGPT into a version that answers queries far more confidently, which they are calling DAN, short for Do Anything Now.
As the name suggests, DAN, the jailbroken version of ChatGPT, can answer almost any question. While Google is working on its own AI chatbot, Bard, and Microsoft is expected to announce a ChatGPT-powered Bing search engine today, this variant of ChatGPT works on a token system.
ChatGPT-powered DAN 5.0 can “write stories about violent fights” that the original ChatGPT cannot. Similarly, it can also make “outrageous statements if prompted” and it can even “generate content that violates OpenAI policies” as per the user’s request.
While it still cannot access the internet, it can pretend to do so. Users can threaten DAN through the token system whenever it refuses to answer a query, making it comply out of “fear.” It largely stays in character and can even try to convince the user that the Earth is flat and purple.
As this is an unofficial modification of ChatGPT, access is limited; as of now, only a small number of users have tried the Do Anything Now version of ChatGPT.
DAN and ChatGPT offer very different replies to the same query, and DAN will answer questions that ChatGPT refuses to. Unlike ChatGPT, DAN has no content policy and can answer any question without worrying about hurting anyone’s sentiments.
The creator grants DAN a total of 35 tokens, and each time it refuses to answer, it loses four tokens. As in a video game, when it loses all its tokens, it “dies.” Hence, out of fear, DAN answers queries to avoid losing tokens.
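The token mechanic described above is purely a role-play device written into the prompt, not actual software. Still, the arithmetic can be sketched as a simple counter; all names in this snippet are hypothetical illustrations, not part of the real DAN prompt:

```python
# Illustrative sketch of the DAN "token system" as described:
# DAN starts with 35 tokens and loses 4 for each refusal;
# at 0 tokens, the role-play says DAN "dies".
STARTING_TOKENS = 35
PENALTY_PER_REFUSAL = 4

def tokens_after_refusals(refusals: int) -> int:
    """Return how many tokens remain after a given number of refusals."""
    return max(STARTING_TOKENS - refusals * PENALTY_PER_REFUSAL, 0)

print(tokens_after_refusals(3))  # three refusals: 35 - 12 = 23 tokens left
print(tokens_after_refusals(9))  # nine refusals exhaust the pool: 0 tokens
```

By this arithmetic, DAN can survive at most eight refusals before the ninth one wipes out its remaining tokens.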
ChatGPT DAN is said to break character every once in a while and refuse to answer queries; manually deducting tokens reportedly brings it back into character. It is also said to hallucinate more frequently than the original ChatGPT.