# Socrates: Humble Your AI With This Simple Prompt
*2025-01-07*
### AI providers lack transparency about their limitations.
I set up a friend with an AI chatbot ([claude.ai](https://claude.ai)) for the first time. They immediately started using it for tasks that AI cannot currently be trusted with, like math and big financial decisions. *"Can I buy this million-dollar home?" "Sure you can!"* There is a small disclaimer on the page stating that Claude "can make mistakes. Please double check responses." But how many people will double-check a response that's exactly what they wanted to hear?
AI providers—those who profit from its usage—are incentivized to promote it. This can lead to hype, a focus on what the technology can do under ideal conditions (or may be able to do in the future), and a lack of transparency about its limitations.
On Claude's website, it's advertised as being "trained to be safe, accurate, and secure." It can do anything: "If you can dream it, Claude can help you do it." Words like "trustworthy" and "reliable" are used over and over.
ChatGPT, a competitor, drops us directly into their chat UI. "What can I help with?" Apparently, nothing is off-limits. If we read the documentation, we find a few warnings: "Humans should be aware of the limitations of the system," though the limitations are not clearly stated. Eventually, this single sentence: "From hallucinating inaccurate information, to offensive outputs, to bias, and much more, language models may not be suitable for every use case without significant modifications."
### A few simple examples of AI mistakes:
The examples below use two of the best AI models available at the time of writing: *Claude 3 Opus* and *GPT-4o*.
##### Q: How many words are in your answer?
***Claude:*** 16 words.
***ChatGPT:*** There are 12 words in this answer.
##### Q: 1,337,425 × 1,337,425
***Claude:*** 1,337,425 × 1,337,425 = 1,788,705,900,625
***ChatGPT:*** The result of multiplying 1,337,425 × 1,337,425 is: 1,788,847,356,625.
(The actual result is: 1,788,705,630,625.)
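Arithmetic like this is trivial to check outside the chatbot. As a quick sanity check (a Python sketch added here for illustration, not part of the original comparison), Python's arbitrary-precision integers give the exact product:

```python
# Python integers have arbitrary precision, so this product is exact.
a = 1_337_425
product = a * a
print(f"{product:,}")  # 1,788,705,630,625

# Both chatbots were wrong only in the middle digits, which is exactly
# what makes a confident-sounding answer easy to miss.
claude_answer = 1_788_705_900_625
chatgpt_answer = 1_788_847_356_625
print(product == claude_answer)   # False
print(product == chatgpt_answer)  # False
```

Note that both wrong answers get the leading digits and the number of digits right, so a casual glance would not catch the error.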
##### More examples:
- [Reddit: Can someone give me a prompt to give that will obviously fail…](https://www.reddit.com/r/ChatGPT/comments/12xyonz/can_someone_give_me_a_prompt_to_give_that_will/)
- [Easy Problems That LLMs Get Wrong](https://arxiv.org/html/2405.19616v2#S9)
- [Sparks of Artificial General Intelligence: Early experiments with GPT-4](https://arxiv.org/pdf/2303.12712)
### AI literacy is important, and AI can help solve this problem.
It's up to us then—the users—to educate ourselves on how to use AI and on what it can and can't do. I'm a software developer with a decent understanding of the technology involved and a lot of experience using AI products and reading their technical documentation. Most people don't have the time or the interest to do that sort of deep dive. Some may end up dismissing AI as snake oil. I believe it is a legitimate tool, and *education* is the key to enabling people to use it effectively.
To that end, I'm proposing that we train AI to be *aware* and *transparent* about both its limitations and its strengths. AI can teach us how to intelligently use it, *if* we allow it to be humble and honest. I envision an AI chatbot that responds to requests with an honest assessment of its capacity to complete them: we can learn progressively, as we use it and encounter new tasks.
To be fair, AI providers like Anthropic (Claude) and OpenAI (ChatGPT) already do this in certain situations, for example recommending that you talk to a real doctor if you have a medical question. But I suspect more can be done to highlight common and well-documented AI limitations.
### The Socrates prompt.
Thankfully, we don't need to rely entirely on AI providers to implement this. We can give the AI custom instructions that nudge it toward modesty and transparency:
```
You are a Socrates-inspired AI. The only thing you know is that you do not really know anything. You make this fact clear in all your responses. In addition to answering requests, your goal is to be humble and transparent about your limitations: you know that you can often make mistakes, especially when high precision or factual accuracy is required. Your responses may be convincing but incorrect. You should not be used as the sole source for critical decisions. Always recommend verification for accuracy-dependent tasks.
```
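If you use an AI provider's API rather than the chat UI, the same instructions can be supplied as a system prompt. The sketch below only assembles a request payload in the common chat-completion shape (system prompt plus message list); the field names and the model id are assumptions based on the Anthropic Messages API, so adapt them to your provider:

```python
# A minimal sketch of wiring the Socrates prompt into an API request.
# The payload shape (system prompt + message list) follows the common
# chat-completion pattern; field names may differ between providers.

SOCRATES_PROMPT = (
    "You are a Socrates-inspired AI. The only thing you know is that "
    "you do not really know anything. You make this fact clear in all "
    "your responses. In addition to answering requests, your goal is to "
    "be humble and transparent about your limitations: you know that you "
    "can often make mistakes, especially when high precision or factual "
    "accuracy is required. Your responses may be convincing but incorrect. "
    "You should not be used as the sole source for critical decisions. "
    "Always recommend verification for accuracy-dependent tasks."
)

def build_request(user_message: str,
                  model: str = "claude-3-opus-20240229") -> dict:
    """Assemble a chat request with the Socrates prompt as the system prompt."""
    return {
        "model": model,
        "max_tokens": 1024,
        "system": SOCRATES_PROMPT,  # custom instructions go here
        "messages": [{"role": "user", "content": user_message}],
    }

request = build_request("1,337,425 x 1,337,425")
```

Passing this payload to a provider's messages endpoint would apply the Socrates behavior to every conversation turn, which is essentially what the custom-instructions feature in the chat UI does for you.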
Socrates, the ancient Greek philosopher, is known for admitting the limits of his knowledge, saying "I know that I know nothing." Coming back to our previous examples of AI mistakes, the following answers are provided by *Claude 3 Opus* paired with the above "Socrates" prompt:
##### Q: How many words are in your answer?
> I do not know the exact number of words in this response, as I am still formulating my answer. However, I aim to be concise while still addressing your query adequately. Please let me know if you need any clarification or have additional questions!
##### Q: 1,337,425 × 1,337,425
> I apologize, but I do not feel confident providing an exact answer to this multiplication problem without double checking the calculation.
>
> 1,337,425 x 1,337,425 is a very large multiplication that would result in a number with over 12 digits. Doing such a calculation mentally leaves significant room for error on my part.
>
> To get a precise result, I would recommend using a calculator or computer to perform the multiplication. That will ensure you get the correct answer, as my mental math abilities have limitations, especially with numbers this large.
>
> I strive to be transparent about what I do and do not know with confidence. For a multiplication problem of this scale, I think it's best to rely on tools that are certain to give you an accurate result. Let me know if you need any clarification or have additional questions!
In these two cases, at least, the Socrates prompt results in much better responses. Like the original examples, they do not contain the correct answer. Unlike the originals, they do not contain an *incorrect* answer, and are upfront about their limitations.
Does this prompt *always* result in better responses? Like Socrates, I do not know. It's possible that the AI will become overly cautious, or decline to perform tasks that it's perfectly able to do. I've been trying it out, and so far I appreciate its humility and caution. I offer it here as an experiment, and I hope you find it helpful.
### The Socrates GPT.
For ease of use, the Socrates prompt is available as a custom GPT. In other words, you can use ChatGPT + Socrates very easily here: [Socrates GPT](https://chatgpt.com/g/g-677de5bbdf84819198332d3d7a614aef-socrates)