Merge remote-tracking branch 'refs/remotes/origin/main'

nihilist 2025-06-01 20:37:34 +02:00
commit f51908b856
2 changed files with 14 additions and 2 deletions

BIN  openwebuilocalllms/50.png  (new file, 18 KiB; binary file not shown)

@@ -1,6 +1,6 @@
 ---
 author: oxeo0
-date: 2025-04-20
+date: 2025-06-01
 gitea_url: "http://git.nowherejezfoltodf4jiyl6r56jnzintap5vyjlia7fkirfsnfizflqd.onion/nihilist/blog-contributions/issues/226"
 xmr: 862Sp3N5Y8NByFmPVLTPrJYzwdiiVxkhQgAdt65mpYKJLdVDHyYQ8swLgnVr8D3jKphDUcWUCVK1vZv9u8cvtRJCUBFb8MQ
 tags:
@@ -24,6 +24,18 @@ The vast amount of sensitive user data stored can have devastating consequences
 ![](5.png)
+### Claude 4 Opus contacting authorities
+On 22nd May 2025, a researcher from Anthropic posted a tweet about their latest model release.
+![](50.png)
+He stated that the model can be instructed to automatically report **"immoral behavior"** to relevant authorities using command-line tools. While this is not implemented in the mainline Claude 4 Opus model yet, it shows the direction large AI companies want to go in (see [AI alignment](https://en.wikipedia.org/wiki/AI_alignment)).
+After facing severe backlash from users, Sam Bowman deleted his post. ([archived](https://xcancel.com/sleepinyourhat/status/1925627033771504009))
+If you want to learn more, [Sam Bent](../index.md#wall-of-fame-as-of-may-2025) made a [YouTube video](https://www.youtube.com/watch?v=apvxd7RODDI) about this situation.
 ## **Privacy LLM frontends**
 A partial solution to those problems could be a service that aggregates multiple model APIs and anonymizes their users, a bit like [searxng](https://github.com/searxng/searxng) does for search engines.
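
At its core such an aggregator is just a thin proxy: it holds the provider API keys itself and forwards only the prompt payload, so the upstream provider only ever sees the aggregator, never the individual user. Below is a minimal sketch of that idea, assuming OpenAI-compatible chat-completions endpoints on the provider side; the URLs, keys and model routing are placeholders for illustration, not the API of any existing service.

```python
# Minimal sketch of an anonymizing LLM API aggregator (standard library only).
# Upstream endpoints, keys and routing below are hypothetical placeholders.
import json
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Shared per-provider API keys: users never authenticate against the provider directly.
UPSTREAMS = {
    "mistral": {"url": "https://api.example-mistral.invalid/v1/chat/completions", "key": "PROVIDER_KEY_1"},
    "llama":   {"url": "https://api.example-llama.invalid/v1/chat/completions",   "key": "PROVIDER_KEY_2"},
}

def pick_upstream(model: str) -> dict:
    # Naive routing: match the provider by model-name prefix, else fall back to the first one.
    for name, upstream in UPSTREAMS.items():
        if model.startswith(name):
            return upstream
    return next(iter(UPSTREAMS.values()))

class AnonymizingProxy(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0) or 0)
        request = json.loads(self.rfile.read(length))
        upstream = pick_upstream(request.get("model", ""))

        # Forward only the prompt payload: client IP, cookies, user agent and any
        # per-user credentials are dropped here, so the provider only sees the aggregator.
        outgoing = urllib.request.Request(
            upstream["url"],
            data=json.dumps({
                "model": request.get("model"),
                "messages": request.get("messages", []),
            }).encode(),
            headers={
                "Content-Type": "application/json",
                "Authorization": f"Bearer {upstream['key']}",
            },
            method="POST",
        )
        with urllib.request.urlopen(outgoing) as response:
            payload = response.read()

        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)

    # Suppress per-request access logs so the aggregator itself keeps no trail.
    def log_message(self, fmt, *args):
        pass

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), AnonymizingProxy).serve_forever()
```

A real service would still need rate limiting, anonymous payment (for example Monero) and strict log hygiene, but the privacy-relevant part is exactly this forwarding layer that strips everything identifying before the request leaves for the provider.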