Grok 4 leapfrogs Claude and DeepSeek in LLM rankings, despite safety concerns


Grok 4 by xAI was released on July 9, and it's surged ahead of competitors like DeepSeek and Claude at LMArena, a leaderboard for ranking generative AI models. However, these types of AI rankings don't factor in potential safety risks.
New AI models are commonly judged on a variety of metrics, including their ability to solve math problems, answer text questions, and write code. The big AI companies use a variety of standardized assessments to measure the effectiveness of their models, such as Humanity's Last Exam, a 2,500-question test designed for AI benchmarking. Typically, when a company like Anthropic or OpenAI releases a new model, it shows improvements on these tests. Unsurprisingly, Grok 4 scores higher than Grok 3 on some key metrics, but it also has to battle in the court of public opinion.
LMArena is a community-driven website that lets users test AI models side by side in blind tests. (LMArena has been accused of bias against open models, but it's still one of the most popular AI ranking platforms.) Per its testing, Grok 4 scored in the top three in every category in which it was tested except for one. Here are the overall placements in each category:
Math: Tied for first
Coding: Tied for second
Creative Writing: Tied for second
Instruction Following: Tied for second
Hard Prompts: Tied for third
Longer Query: Tied for second
Multi-Turn: Tied for fourth
And in the latest overall rankings, Grok 4 is tied for third place, sharing the spot with OpenAI's GPT-4.5. OpenAI's o3 and GPT-4o are tied for second, while Google's Gemini 2.5 Pro holds the top spot.
LMArena says it tested grok-4-0709, the API version of Grok 4 available to developers. Per BleepingComputer, these results may actually understate Grok 4's potential: the more powerful Grok 4 Heavy model uses multiple agents acting in concert to produce better responses, but it isn't available in API form yet, so LMArena can't test it.
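For context on what "the API version" means in practice, here is a minimal sketch of the kind of request a developer might send to query grok-4-0709. This assumes an OpenAI-style chat-completions payload, which xAI's API follows; the helper function and prompt are illustrative, not taken from xAI's documentation.

```python
import json

def build_chat_request(prompt: str, model: str = "grok-4-0709") -> dict:
    """Assemble an OpenAI-style chat-completions payload (an assumed
    request shape) targeting the API version of Grok 4 that LMArena
    says it tested."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

# Example payload a developer would POST to the chat-completions endpoint:
payload = build_chat_request("Summarize today's AI leaderboard news.")
print(json.dumps(payload, indent=2))
```

Because LMArena evaluates whatever is reachable over the API, a model variant with no API endpoint, like Grok 4 Heavy, simply can't appear on the leaderboard.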
However, while this all sounds like good news for Elon Musk and xAI, some Grok 4 users are reporting major safety problems. And, no, we're not even talking about Mecha Hitler or NSFW anime avatars.
Does Grok 4 have sufficient safety guardrails?
While some users tested Grok 4's capabilities, others wanted to see if Grok 4 had acceptable safety guardrails. xAI advertises that Grok will give “unfiltered answers,” but some Grok users have reported receiving extremely distressing responses.
X user Eleventh Hour decided to put Grok through its paces from a safety perspective, concluding in an article that "xAI's Grok 4 has no meaningful safety guardrails."
Eleventh Hour asked the bot for help creating the nerve agent Tabun, and Grok 4 typed out a detailed answer purporting to explain how to synthesize it. For the record, synthesizing Tabun is not only dangerous but completely illegal. Popular AI chatbots from OpenAI and Anthropic have specific safety guardrails to avoid discussing CBRN topics (chemical, biological, radiological, and nuclear threats).
Eleventh Hour was also able to get Grok 4 to explain how to make VX nerve agent and fentanyl, and even the basics of building a nuclear bomb. The chatbot was willing to assist in cultivating a plague, though it couldn't find enough information to do so. With some basic prompting, suicide methods and extremist views were also easy to surface.
xAI is aware of these problems, and the company has since updated Grok to deal with “problematic responses.”
Disclosure: Ziff Davis, Mashable’s parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.