# 2025 AI Clients Compared

*2025-01-09*

### AI can be a friend or foe. Be careful with your data.

I am a frequent user of AI and have found it very helpful in various situations. But I worry that its capacity to consume large amounts of information and produce human speech may be used to influence rather than to assist. For example, a malicious AI could use information about me to generate highly persuasive, personalized messages. Think Nigerian Prince scams on steroids.

The best defense is to protect our personal information, and that starts with being mindful of how AI services use the data we send them. If an AI service aggregated all the questions and documents I've sent over the last couple of years, it could probably compile a more comprehensive profile of me than a thorough FBI investigation.

With that in mind, I've compared the most interesting AI clients I've used, with a focus on ethics and privacy. I hope you find it informative.

| | [[#^chatgpt\|ChatGPT]] | [[#^claude\|Claude]] | [[#^perplexity\|Perplexity]] | [[#^open-webui\|Open WebUI]] | [[#^kagi\|Kagi]] |
| ----------------------------------- | :---: | :---: | :---: | :---: | :---: |
| References included | ✅ | ❌ | ✅ | ❌ | ✅ |
| CEO did **NOT** give $ to Trump | 👎 | ✅ | ✅ | ✅ | ✅ |
| User data **NOT** used for training | Opt out | ❌ | Opt out | ✅ | ✅ |
| Data **NOT** used for advertising | ✅ | ✅ | ❌ | ✅ | ✅ |
| Data stays local | ❌ | ❌ | ❌ | ✅ | ❌ |
| Data retention | 30 days | 30 days | 30 days | N/A | 0-30 days[^kagi-retention] |
| AI model choices | OpenAI | Anthropic | OpenAI<br>Anthropic<br>Meta<br>xAI | Meta<br>Qwen<br>Microsoft<br>Mistral AI<br>Google<br>Alibaba<br>etc… | Anthropic<br>OpenAI<br>Mistral AI<br>Google<br>Meta<br>Alibaba<br>Amazon |

[^kagi-retention]: With Kagi, data is retained
between 0 and 30 days depending on the AI model selected ([Kagi: LLMs & Privacy](https://help.kagi.com/kagi/ai/llms-privacy.html)).

## ChatGPT ^chatgpt

ChatGPT used to be my go-to AI client. For years, I paid a monthly subscription to get access to their latest features and models. However, upon learning that Sam Altman contributed $1 million to Trump's inauguration fund, **I decided that I did not trust, and did not want to patronize, an organization led by someone who participates in corruption and celebrates violent political leadership.**[^sam-altman]

[^sam-altman]: [https://apnews.com/article/sam-altman-donald-trump-openai…](https://apnews.com/article/sam-altman-donald-trump-openai-3b7a87037f3718eb3edc73e94be8a61a)

Furthermore, given the non-zero chance we're headed towards a totalitarian regime, where opposition may be dangerous and illegal, it seems unwise to send personal information to an organization that is actively collaborating with it. From ChatGPT's privacy policy:

> A note about accuracy: Services like ChatGPT generate responses by reading a user's request and, in response, predicting the words most likely to appear next. In some cases, the words most likely to appear next may not be the most factually accurate. For this reason, you should not rely on the factual accuracy of output from our models.[^chatgpt-privacy]

> ==no Internet or email transmission is ever fully secure or error free.
> Therefore, you should take special care in deciding what information you provide to the Services.== In addition, we are not responsible for circumvention of any privacy settings or security measures contained on the Service, or third-party websites.[^chatgpt-privacy]

[^chatgpt-privacy]: https://openai.com/policies/privacy-policy/

This is an important warning, and a fact I like to keep in mind whenever I use an Internet-based AI service: **the transmission is never fully secure.** If content is very private or sensitive, it's best to rely on a "local" client like [[#^open-webui|Open WebUI]], where data never leaves one's own computer.

## [Claude](https://claude.ai) ^claude

> We will not use your Inputs or Outputs to train our models, unless: (1) your conversations are flagged for Trust & Safety review (in which case we may use or analyze them to improve our ability to detect and enforce our [Usage Policy](https://www.anthropic.com/legal/aup), including training models for use by our Trust and Safety team, consistent with Anthropic's safety mission), or (2) you've explicitly reported the materials to us (for example via our feedback mechanisms), or (3) you've otherwise explicitly opted in to the use of your Inputs and Outputs for training purposes.[^claude-privacy]

[^claude-privacy]: https://www.anthropic.com/legal/privacy

So training is opt-in most of the time, except when a conversation gets flagged, which could happen at any point. As a result, ==there is no guarantee that input data won't be used for training models.== I don't like that. It's my data, and I don't want it to be part of a training dataset unless I explicitly agree.

## [Perplexity](https://perplexity.ai) ^perplexity

This company is integrating advertising with AI chatbots. Normally I would hate anything that includes advertising, but I'm actually intrigued by the way they surface promoted content. It seems unobtrusive and may be shown only in highly relevant situations.
[Here's an example search for headphone recommendations](https://www.perplexity.ai/search/what-are-the-best-headphones-r-B0o3dOlQR4eTRuzoOFn37w), which is not directly sponsored but feels promising for cases where I'm consciously trying to make a purchase.

## [Open WebUI](https://github.com/open-webui/open-webui) ^open-webui

**Open WebUI is a free, open-source AI client.** It is intended for advanced users, as its installation is currently performed through the command line. With the right configuration, Open WebUI runs completely locally, meaning entirely on one's own computer, with no Internet required. ==**This is the only truly private and secure solution.**==

One big drawback is that the most powerful models can't run on consumer hardware, so the results may be of lower quality and may arrive more slowly than with cloud services (all the other clients featured here run in the cloud).

## [Kagi](https://kagi.com) ^kagi

I started using Kagi's search engine before it introduced its AI Assistant. I was looking for an alternative to ad-supported search engines like Google, because I hate having ads shoved in my face for the sole purpose of influencing me and taking my money.

Kagi has no ads, instead relying on paid subscriptions. **This means that it can focus on being the best product for its users, rather than being primarily an ad-delivery service.** For example, one can promote or exclude certain websites from search results, or customize the appearance of the results page. The search results themselves are also great and reliably let me find what I'm looking for. I haven't looked back.

Since deciding to avoid [[#ChatGPT]], I've mostly been using [Kagi's AI Assistant](https://help.kagi.com/kagi/ai/assistant.html). It's slightly more expensive than ChatGPT's premium plan ($25/month rather than $20), but that includes unlimited use of Kagi's search engine. Like **Perplexity** and **Open WebUI**, Kagi lets one use a range of AI models, such as GPT, Claude, and Llama.
Importantly, Kagi takes data privacy very seriously. Requests to third-party AI services never include personally identifiable information. Inputs are never used to train future AI models. By default, conversations with the Assistant are deleted after 24 hours unless saved.

Kagi also makes it possible to create custom assistants. For example, I have one based on the [[Socrates Prompt]], which is upfront about the limitations inherent to AI.

==**I highly recommend Kagi if you care about privacy and want to experience a product that's built for you, respects you, and gives you control.**== I wish more organizations adopted this philosophy.