ChatGPT became Kim Kardashian’s “toxic friend” during her law studies

Featured image: X/Kim Kardashian

Kim Kardashian said she used ChatGPT as an ally while preparing for her law exams, but the tool ended up being more hindrance than help.

She explained that the chatbot gave wrong answers so often that she even blamed the artificial intelligence for her poor performance on some tests.

Kim shared the story during a lighthearted interview for Vanity Fair’s lie detector segment.

When asked about her use of AI, she admitted she would turn to ChatGPT whenever she needed explanations on legal topics and even sent photos of exam questions hoping for quick answers.

She said she even argued with the AI after following incorrect guidance. Despite the complaints, Kim revealed that the chatbot responded with motivational messages, almost like a mix of advisor and therapist.

The situation became a love-hate dynamic. For Kim, ChatGPT acted like a toxic friend: it gave wrong answers, then tried to comfort her by saying she should trust her own reasoning more.

Kim continues her path toward working in the legal field. She said she is close to earning the necessary qualification and is considering fully embracing a career as a lawyer in the next few years.

Why did ChatGPT fail here?

Language models work by analyzing patterns in the text data they were trained on. They generate answers based on those patterns, not on real legal understanding.
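To make that concrete, here is a deliberately toy Python sketch (not a real model; the words and probabilities are invented for illustration). A language model picks a continuation that is statistically plausible given its training data, with no built-in step that checks the answer against actual statutes or case law.

```python
import random

# Toy illustration only: these "next word" probabilities are invented.
# A real model learns billions of such patterns from training text.
next_word_probs = {
    "liable": 0.40,
    "guilty": 0.30,
    "exempt": 0.20,
    "acquitted": 0.10,
}

words = list(next_word_probs)
weights = list(next_word_probs.values())

# The output is whatever continuation is statistically plausible.
# Nothing here verifies the answer against real legal sources.
print("The defendant was found", random.choices(words, weights=weights, k=1)[0])
```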

If Kim was sending complex legal questions or photos of exam prompts, there is a high chance the model did not receive the full context needed for accurate interpretation. In law, missing a detail can change everything.

Depending on the version she used, the model may also have lacked updates on current legislation, decisions or recent cases. Models have a knowledge cutoff that creates gaps, which is a big problem in legal exams where precision matters.

Legal reasoning requires interpretation, case application, and critical analysis. Even advanced AI models still struggle with deeper legal logic. Studies of exams and technical education contexts suggest that models often answer correctly only about half the time. Good enough for brainstorming, but not for graded legal tests.

Kim Kardashian also made mistakes in how she used the tool

Kim admitted that she took photos of questions and sent them to the chatbot expecting fast answers. The idea was to save time. But that shortcut is exactly what turns a useful tool into a risky one.

Learning requires reading, checking sources and understanding the problem. Throwing a question at the AI without processing it first is no different from copying homework without learning anything.

Kim did not share what these conversations looked like. We do not know whether she provided context, asked follow-up questions, requested sources, or verified the answers afterward. All of this matters.

For example, asking ChatGPT to cite its sources makes the answer easier to verify, because you can check whether the explanation actually holds up. AI is not a crystal ball. It works best when the user knows what they are trying to understand.
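As a minimal sketch of what that can look like in practice, the snippet below uses the official openai Python package (v1+) and assumes an API key in the environment; the model name, prompt wording, and topic are illustrative assumptions, not a record of what Kim actually did.

```python
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable

# Instead of pasting a bare exam question, give context and explicitly
# ask for sources so the answer can be checked afterward.
prompt = (
    "I am studying for a law exam. Explain the concept below, cite the "
    "specific statutes or cases you rely on, and say explicitly if you "
    "are unsure.\n\n"
    "Concept: adverse possession"  # hypothetical example topic
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```

Even then, the citations still have to be checked in a real legal database, since models can produce plausible-looking but invented references.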

Even before ChatGPT, nobody passed a law exam by reading random blogs found on Google. Low-quality content has always existed. The challenge has always been separating useful material from misleading material.

The same logic applies to AI. You can learn a lot from it, but blindly trusting it is a mistake.

One of the side effects of using AI uncritically is that it weakens your own reasoning. Instead of developing critical reading skills, the student gets used to receiving ready-made answers and loses the ability to build arguments, which is essential in law.

Even ChatGPT’s ironic reply to Kim, telling her to trust her instincts, shows that she was outsourcing her thinking.

Her mistake was more human than technological. Many people fall into the same trap of using ChatGPT as if it were an oracle instead of a tool. Those who know how to study gain a lot from AI. Those who rely on shortcuts tend to stumble.
