add claude 4 opus drama to local llms tutorial

This commit is contained in:
oxeo0 2025-06-01 14:51:28 +02:00
parent b61b3475dc
commit 0e31fbdfb8
2 changed files with 14 additions and 2 deletions

openwebuilocalllms/50.png (new binary file, 18 KiB)


@@ -1,6 +1,6 @@
---
author: oxeo0
-date: 2025-04-20
+date: 2025-06-01
gitea_url: "http://git.nowherejezfoltodf4jiyl6r56jnzintap5vyjlia7fkirfsnfizflqd.onion/nihilist/blog-contributions/issues/226"
xmr: 862Sp3N5Y8NByFmPVLTPrJYzwdiiVxkhQgAdt65mpYKJLdVDHyYQ8swLgnVr8D3jKphDUcWUCVK1vZv9u8cvtRJCUBFb8MQ
tags:
@@ -22,11 +22,23 @@ The vast amount of sensitive user data stored can have devastating consequences
**Assume all conversations with online chatbots can be public at any time.**
### Claude 4 Opus contacting authorities
On 22nd May 2025, Sam Bowman, a researcher at Anthropic, posted a tweet about their latest model release.
![](50.png)
He stated the model can be instructed to automatically report **"immoral behavior"** to relevant authorities using command-line tools. While this is not yet implemented in the mainline Claude 4 Opus model, it shows the direction large AI companies want to go in (see [AI alignment](https://en.wikipedia.org/wiki/AI_alignment)).
After facing severe backlash from users, Bowman deleted his post. ([archived](https://xcancel.com/sleepinyourhat/status/1925627033771504009))
If you want to learn more, [Sam Bent](../index.md#wall-of-fame-as-of-may-2025) made a [YouTube video](https://www.youtube.com/watch?v=apvxd7RODDI) on this situation.
![](5.png)
## **Privacy LLM frontends**
A partial solution to those problems could be a service that aggregates multiple model APIs and anonymizes their users. A bit like [searxng](https://github.com/searxng/searxng) does for search engines.
AI companies can't tell exactly who is using their models, since the metadata they receive is heavily limited.
There are several such services, including [ppq.ai](https://ppq.ai), [NanoGPT](https://nano-gpt.com) and [DuckDuckGo chat](https://duck.ai). This is only a partial solution, since your conversation contents can still be stored and used for later training by large AI companies.
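Many such frontends expose an OpenAI-compatible chat completions API, so querying one looks the same regardless of which upstream model it routes to. The sketch below is a minimal, hypothetical example: the base URL, API key and model name are placeholders, not real values for any of the services above — check each service's documentation for its actual endpoint and supported models.

```python
# Hypothetical sketch: building a request to a privacy frontend that
# exposes an OpenAI-compatible /v1/chat/completions endpoint.
# The URL, key and model below are placeholders, not real service values.
import json
import urllib.request

def build_chat_request(base_url: str, api_key: str,
                       model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-style chat completion request.

    Only the API key identifies you to the frontend; the frontend, not you,
    talks to the upstream AI company.
    """
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        url=f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

# Construct the request without sending it; urllib.request.urlopen(req)
# would perform the call (ideally routed over Tor for extra anonymity).
req = build_chat_request("https://example-frontend.invalid",
                         "PLACEHOLDER_KEY", "some-model",
                         "Hello, what can you tell me about local LLMs?")
```

Because the API shape is standardized, switching frontends usually only means changing the base URL and key — the request body stays the same.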