Mirror of http://git.nowherejezfoltodf4jiyl6r56jnzintap5vyjlia7fkirfsnfizflqd.onion/nihilist/opsec-blogposts.git, synced 2025-06-07 20:39:35 +00:00
add claude 4 opus drama to local llms tutorial
parent b61b3475dc
commit 0e31fbdfb8
2 changed files with 14 additions and 2 deletions
BIN  openwebuilocalllms/50.png  (new file, 18 KiB; binary file not shown)
@@ -1,6 +1,6 @@
 ---
 author: oxeo0
-date: 2025-04-20
+date: 2025-06-01
 gitea_url: "http://git.nowherejezfoltodf4jiyl6r56jnzintap5vyjlia7fkirfsnfizflqd.onion/nihilist/blog-contributions/issues/226"
 xmr: 862Sp3N5Y8NByFmPVLTPrJYzwdiiVxkhQgAdt65mpYKJLdVDHyYQ8swLgnVr8D3jKphDUcWUCVK1vZv9u8cvtRJCUBFb8MQ
 tags:
@@ -22,11 +22,23 @@ The vast amount of sensitive user data stored can have devastating consequences
 
 **Assume all conversations with online chatbots can be public at any time.**
 
+### Claude 4 Opus contacting authorities
+
+On 22nd May 2025, Sam Bowman, a researcher at Anthropic, posted a tweet about their latest model release.
+
+
+
+He stated the model can be instructed to automatically report **"immoral behavior"** to the relevant authorities using command-line tools. While this is not yet implemented in the mainline Claude 4 Opus model, it shows the direction large AI companies want to go in (see [AI alignment](https://en.wikipedia.org/wiki/AI_alignment)).
+
+After facing severe backlash from users, Sam Bowman deleted his post. ([archived](https://xcancel.com/sleepinyourhat/status/1925627033771504009))
+
+If you want to learn more, [Sam Bent](../index.md#wall-of-fame-as-of-may-2025) made a [YouTube video](https://www.youtube.com/watch?v=apvxd7RODDI) on this situation.
+
+
+
 ## **Privacy LLM frontends**
 
-A partial solution to those problems could be a service that aggregates multiple model APIs and anonymizes their users. A bit like [searxng](https://github.com/searxng/searxng) does for search engines.
+A partial solution to those problems could be a service that aggregates multiple model APIs and anonymizes their users, a bit like [searxng](https://github.com/searxng/searxng) does for search engines.
+AI companies can't know exactly who uses their models, since the amount of metadata they receive is heavily limited.
 
 There are several such services, including [ppq.ai](https://ppq.ai), [NanoGPT](https://nano-gpt.com) and [DuckDuckGo chat](https://duck.ai). This is only a partial solution, since your conversation contents can still be saved and used for later training by large AI companies.