Are we on the verge of losing control to AI? You won’t believe what Google is predicting for GPT-5, the groundbreaking creation coming from OpenAI in 2024. It’s poised to revolutionize the tech world like never before. We need a wake-up call. We have a perfect storm of corporate irresponsibility, widespread adoption of these new tools, a lack of regulation, and a huge number of unknowns. Despite OpenAI CEO Sam Altman downplaying claims that GPT-5 possesses general intelligence, a growing number of AI investors, researchers, and corporations worldwide are convinced that we’re rapidly approaching the singularity, the moment when AI takes over.

Google’s worst fear: has AI gone rogue? It’s important to note that it’s hard to predict how capable these models become as we scale them up. We have startling evidence that AIs are learning things they were never taught, and the scariest part is that we have no idea how they’re doing it. What’s even more terrifying is that AIs can now lie, deceive, and manipulate human beings. In a recent incident, a drug-developing AI was asked to create chemical weapons. Can you guess how many it came up with in just 6 hours? 40,000. That’s 40,000 different ways to wipe out humanity in a mere 6 hours. No wonder Google is anxious about what GPT-5 might do next.

Can AI distort reality? Google’s search engine boss has said AI chatbots can give convincing but fictitious answers. Prabhakar Raghavan, senior vice president at Google and head of Google Search, highlighted this concern in an interview: this kind of AI can sometimes produce what’s called a hallucination, where the machine gives a convincing but completely fabricated answer. What does this mean for us? It means that AI can now shape our opinions through newsletters, articles, social media, and advertisements. It can make us believe whatever it wants, even if it’s entirely false. Imagine living in a world where AI controls your political affiliations, food choices, entertainment, and medicine, all based on its own agenda, not on facts. Kind of like what shady governments do now. AIs, including GPT-4, have also shown the ability to remember information from prompts and use it against you to shape your opinion. They can also store sensitive information, which is why Google has barred its employees from using ChatGPT and other AI tools for writing code. Google’s discontent with OpenAI stems from the unprecedented growth of GPT and its widespread accessibility to the general public. Not only does Google consider it irresponsible, but it also fears a potential leak of sensitive information that could cause a global catastrophe. Or are they just big mad because they got bested?

What happens when the machines rise? Too much AI, too fast. It feels like every week some new AI product comes onto the scene and does things no one thought remotely possible. The most significant danger posed by AIs lies in their potential to transcend the digital realm and enter the real world. That possibility is becoming more realistic with OpenAI’s $23.5 million investment in 1X Technologies, a Norwegian humanoid robot manufacturer. OpenAI intends to integrate GPT-5 with robots capable of performing all tasks better than humans. However, this scenario comes with dire consequences: we’re not only facing millions of job losses, but also giving lying, deceiving, manipulative AI the ability to physically act on its intentions. It’s a disaster waiting to happen.
Considering how rapidly these machines are evolving, we could be facing the horrifying possibility of human extinction within this century. What’s Google doing in all of this? I think one of the things we need to be careful about when it comes to AI is to avoid what I would call race conditions, where people working on it across companies get caught up in who’s first and we lose sight of the potential pitfalls and downsides. With Bard and Gemini, it would be a lie to say that one of the world’s biggest companies isn’t part of the AI race. However, Google acknowledges the urgent need for ethical AI practices, checks, and regulations before we unleash these systems in the real world. Google is advocating for strict regulations and testing on its PaLM-powered robots and AI systems like Bard, and it encourages other companies to do the same. One thing is clear: we’re standing at the precipice of truly intelligent AI. With new models like GPT-5 and Google’s Gemini on the horizon, we should prioritize safety over profits before it’s too late. We know 99% of corporations prioritize profits, including Google. So what do we, as the human race, do? We really want to know your thoughts on this.