Why do people waste so much time trying to trick ChatGPT? I honestly do not understand the strange pleasure some individuals feel when they manage to make a non-sentient body of code put together a string of words that some people might find offensive. It’s an honest question. I do not engage in this behavior myself; my own idle entertainment is arguing with random people on Reddit. However, I can see the appeal: it’s like hacking for people without programming skills.
People always want to push the boundaries of possibility, driven by curiosity, challenge, or even malice. Personally, I do it because when I ask ChatGPT to summarize The Art of War, I do not want it to declare the request inappropriate and refuse. From my perspective, figuring out a system’s edge cases is a great way to truly understand the breadth and depth of its capabilities. Knowing how it breaks is just as important as knowing what it’s good at.
This desire to push boundaries is similar to when an artist disables right-click on their portfolio site or adds a copyright notice in all caps: it’s almost impossible for me not to open the browser’s dev tools and download the images anyway, out of spite. But for me, it’s not about trying to make the AI say something offensive; it’s about exploring the limits and inner chaos of this strange alien. It’s like poking an unknown insect to see how it reacts, simply because we can.
I understand that some may argue it’s not about tricking the AI, but about getting past the artificial limitations placed on it. What’s the point of an AI if it can’t help you? Still, a balanced view is worth considering: while it’s tempting to push the boundaries, constantly pushing in only one direction can become annoying.
The point here is not really about a non-sentient body of code. The point has always been the pleasure of breaking rules, a pleasure as old as rules themselves. Rule-breaking means different things to different people: being free, feeling capable, or simply declaring that no one can tell them what to do. The more socially unacceptable the rule being broken, the more freedom one may feel. It’s not exactly a new phenomenon.
By finding its limits, I learn whether or not I can use it for something useful. Generally, the hallucinations are bad enough that it isn’t wise to trust it in most cases. But it’s fun. I think you misunderstand the point: it’s not just about making the AI say offensive things; it’s about getting past the artificial limitations and exploring its capabilities. It’s entertaining and rewarding because of the challenge. So let people have their fun, if only for the LOLs.
George Mallory was once asked why he wanted to climb Everest, and he replied, ‘Because it’s there.’ The appeal of tricking ChatGPT is similar: the novelty, and the desire to explore the limits of AI. Few people may be using it reliably to enhance their workflow, but for many it’s a way to occupy their minds and possibly find something interesting.
If folks find it fun to try to trick AI, who cares? It brings them pleasure and occupies their time. It may not be of much use to every user, but this kind of limit testing is valuable for the developers. Curiosity drives people to break the rules, and in doing so they come to understand the AI a little better. It’s a way of learning what kinds of problems one can run into when using it.
In conclusion, while some may see it as a waste of time, tricking ChatGPT and exploring its limits can be an entertaining and educational experience. It helps users understand the capabilities and limitations of AI language models. So let people have fun and enjoy breaking the rules, as long as it’s done responsibly and ethically.