mirror of
http://git.nowherejezfoltodf4jiyl6r56jnzintap5vyjlia7fkirfsnfizflqd.onion/nihilist/blog-contributions.git
synced 2025-07-02 11:56:40 +00:00
added a desired setup diagram, changed docker installation method
This commit is contained in:
parent
644cd8214c
commit
59a0cf22d7
4 changed files with 533 additions and 296 deletions
File diff suppressed because one or more lines are too long
File diff suppressed because one or more lines are too long
BIN
opsec/openwebuilocalllms/9.png
Normal file
Binary file not shown. (After: 50 KiB)
@@ -104,7 +104,7 @@
 <div class="container">
 <div class="row">
 <div class="col-lg-8 col-lg-offset-2">
-<a href="../index.html">Previous Page</a></br></br><p><img src="../../assets/img/user.png" width="50px" height="50px"> <ba>oxeo0 - 2025 / 04 / 18</ba></p>
+<a href="../index.html">Previous Page</a></br></br><p><img src="../../assets/img/user.png" width="50px" height="50px"> <ba>oxeo0 - 2025 / 04 / 20</ba></p>
 <h1>Anonymity - Self-Hosted LLM Hidden Service</h1>
-<img src="0.png" style="width:250px">
+<img src="1.png" style="width:250px">
 
@@ -308,9 +308,19 @@ Personally, I was interested in Open LLMs since their inception - when ollama pr
 <p>To follow this tutorial, you'll need an AMD64 system running Debian 12. Although ollama can work on CPU only, the performance will be much worse than having model that fits in GPU's VRAM.<br>
 To comfortably use an 8B model, it's strongly advised to have a dedicated GPU with at least 6GB of VRAM. You can check the supported GPU models <a href="https://github.com/ollama/ollama/blob/main/docs/gpu.md">here</a>.</p>
 <p>This tutorial showcases ollama setup with Nvidia drivers, but AMD GPUs are also supported.</p>
-<p>If you want to expose Open WebUI via Tor to access it remotely, you should have an <a href="../torwebsite/index.html">onion v3 vanity address and Tor installed</a>.</p>
-<p>It's also possible to set this up inside a Proxmox VE or any KVM based VM. You just need to PCI passthrough appropriate GPU inside the <b>Hardware tab</b>:</p>
-<img src="6.png" class="imgRz">
+
+<p>
+Here is how your setup may look like in the end of this tutorial:
+</p>
+<img src="9.png" style="width:400px">
+
+<p><br>
+In case you decide to go with Proxmox VE hypervisor, you just need to PCI passthrough appropriate GPU inside the <b>Hardware > Add > PCI device</b>:</p>
+<img src="6.png">
+
+<p><br>
+If you want to expose Open WebUI via Tor to access it remotely, you should have an <a href="../torwebsite/index.html">onion v3 vanity address and Tor installed</a>.</p>
+
 </div>
 </div><!-- /row -->
 </div> <!-- /container -->
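Editorial note: the "expose Open WebUI via Tor" step mentioned in the hunk above boils down to an onion-service entry in torrc. The sketch below uses assumed paths and ports for illustration (the local listener address 127.0.0.1:3000 is a guess, not taken from the commit); the tutorial's actual values may differ.

```
# /etc/tor/torrc — minimal onion-service sketch (assumed paths/ports)
HiddenServiceDir /var/lib/tor/openwebui/
HiddenServiceVersion 3
# Forward onion port 80 to the local Open WebUI listener (assumed 127.0.0.1:3000)
HiddenServicePort 80 127.0.0.1:3000
```

After reloading Tor, the generated `hostname` file inside HiddenServiceDir holds the .onion address to browse to.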
@@ -321,8 +331,9 @@ To comfortably use an 8B model, it's strongly advised to have a dedicated GPU wi
 <div class="row">
 <div class="col-lg-8 col-lg-offset-2">
 <h2><b>Docker Setup</b></h2>
-<p>To install Docker, follow the official guide: <a href="https://docs.docker.com/engine/install/debian/">Install Docker Engine on Debian</a>. After installation, add your user to the docker group:</p>
-<pre><code class="nim">oxeo@andromeda:~$ sudo /sbin/usermod -aG docker oxeo
+<p>Install Docker from Debian repositories using apt. After installation, add your user to the docker group:</p>
+<pre><code class="nim">oxeo@andromeda:~$ sudo apt install docker.io
+oxeo@andromeda:~$ sudo /sbin/usermod -aG docker oxeo
+oxeo@andromeda:~$ sudo systemctl enable docker
 </code></pre>
 <p>This ensures you can manage Docker without needing sudo privileges. Finally, reboot your system.</p>
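Editorial note: the usermod step in this hunk only takes effect at the next login, which is why the tutorial text ends with a reboot. Group membership lives in the group(5) database and is read when a session starts. The snippet below is a self-contained sketch of that membership check, run against a sample group line rather than the real /etc/group (the user name oxeo comes from the tutorial's shell prompt; the GID 999 is an arbitrary placeholder).

```shell
# Sketch: test whether user "oxeo" is listed in a docker group(5) entry.
# The sample line stands in for /etc/group; GID 999 is made up.
group_line="docker:x:999:oxeo"
members=${group_line##*:}   # field after the last colon = comma-separated member list
case ",$members," in
  *,oxeo,*) echo "oxeo is in docker group" ;;
  *)        echo "oxeo is NOT in docker group (log out and back in, or reboot)" ;;
esac
```

On a live system the equivalent check after re-logging in is simply `id -nG | grep -w docker`.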