[{"content":"What\u0026rsquo;s the Point of This? Recently, schools across the US were hit by a breach of the education software Canvas by the ShinyHunters. The group\u0026rsquo;s ransom note included an interesting .onion url: Normally a .onion address is randomly generated characters with no meaning. The site\u0026rsquo;s name is tied to the keys generated when the node joins the network. However, ShinyHunters and the CIA have been able to generate custom Tor keys to at least get a partially human readable url.\nciadotgov4sjwlzihbbgxnqg3xiyrg7so2r2o3lt5wz5ypk4sxyjstad.onion So Can I have One? Yes.\nHow? Using a tool built by cathugger on github - we can generate vanity keys for a tor site.\nSee the tor site post for details on getting the site up and running. Using mkp224o to generate the new site keys.\nRequirements: Follow the build guide from the github readme.\nTake a look at the optimization.txt too for some good tips to increase speed (generating the keys you want can take a while).\nUsage: Run the binary using ./mkp224o [YOUR CUSTOM FILTER]. The longer your string, the longer it will take to find a matching key.\nExample:\n./mkp224o shnyhnt Take the generated keys and move them into your torrc folder. You\u0026rsquo;ll likely have to update the file permissions (see Tor Hidden Services).\nReferences: mkp224o\n","permalink":"https://new.cloud.nobodyhome.dev/posts/named-tor-site/","summary":"\u003ch3 id=\"whats-the-point-of-this\"\u003eWhat\u0026rsquo;s the Point of This?\u003c/h3\u003e\n\u003cp\u003eRecently, schools across the US were hit by a breach of the education software Canvas by the ShinyHunters. The group\u0026rsquo;s ransom note included an interesting .onion url:\n\u003cimg alt=\"ransom note\" loading=\"lazy\" src=\"/assets/named_tor_site/note.png\"\u003e\u003c/p\u003e\n\u003cp\u003eNormally a .onion address is randomly generated characters with no meaning. 
The site\u0026rsquo;s name is tied to the keys generated when the node joins the network. However, ShinyHunters and the CIA have been able to generate custom Tor keys to at least get a partially human readable url.\u003c/p\u003e","title":"Named Tor Site"},{"content":"Hugo Site Example: Introduction This guide is not all inclusive. RTFM. Hugo is a static site generator, converting your .md text files and a chosen theme into a modern-looking website (like this one). There are a staggering number of themes to give you the look and feel that your site needs.\nEverything Up Front It all starts with the hugo.yaml file (you can use .toml too, but that\u0026rsquo;s beyond my expertise, consult the hugo documentation). Here is the configuration for this site:\nhugo.yaml baseURL: https://nobodyhome.dev/ languageCode: en-us title: Nobody\u0026#39;s Home theme: [\u0026#34;PaperMod\u0026#34;] enableRobotsTXT: true buildDrafts: false buildFuture: false ShowReadingTime: true ShowCodeCopyButtons: true UseHugoToc: true minify: disableXML: true minifyOutput: true menu: main: - identifier: search name: Search url: /search/ weight: 1 - identifier: Tags name: Tags url: /tags/ weight: 2 - identifier: Posts name: Posts url: /posts/ weight: 3 params: title: nobodyhome.dev description: \u0026#34;Documentation and lessons learned from my homelab services.\u0026#34; author: welcome-2themachine DateFormat: \u0026#34;January 2, 2006\u0026#34; assets: favicon: \u0026#34;/img/favicon.ico\u0026#34; ShowBreadCrumbs: true ShowPostNavLinks: true profileMode: enabled: true title: Nobody\u0026#39;s Home subtitle: \u0026#34;Documentation and lessons learned from my homelab services.\u0026#34; buttons: - name: Posts url: /posts/ style: primary - name: Search url: /search/ style: primary - name: Tags url: /tags/ style: primary imageUrl: \u0026#34;https://avatars.githubusercontent.com/u/11509172?v=4\u0026#34; #imageUrl: \u0026#34;/assets/iamroot.png\u0026#34; imageTitle: \u0026#34;I\u0026#39;m 
Root\u0026#34; imageWidth: 300 imageHeight: 300 socialIcons: - name: \u0026#34;Github\u0026#34; url: \u0026#34;https://github.com/welcome-2themachine\u0026#34; outputs: home: - HTML - RSS - JSON # necessary for search Directory Structure Here\u0026rsquo;s how everything is stored in the hugo directory. The hugo.yaml file resides in the root directory:\n├──hugo.yaml #This is the yaml file above ├──README.md ├──archetypes ├──content │ └──Posts #this is where your posts go | └──posts.md | └──search.md ├──PaperMod ├──public │ ├──[DO NOT TOUCH THIS - HUGO GENERATED] ├──static │ ├──assets #this is where I\u0026#39;m storing the images for my posts │ │ ├──[A FOLDER PER POST WITH IMAGES] │ └──img └──themes └──PaperMod Breaking It Down That was a dump of information, so here\u0026rsquo;s the context and some basics.\nGetting Started Set up the project with hugo new site PROJECT-NAME Get the site up and running: hugo serve. Edit the site and posts to your heart\u0026rsquo;s content, then finalize it with hugo build. Move the site contents from the public folder to your chosen web hosting service (nginx, apache, caddy, etc). My workflow Build the post using a standard front matter block for each page: --- title: \u0026#34;Hugo\u0026#34; tags: [\u0026#34;web\u0026#34;,\u0026#34;service\u0026#34;] author: welcome-2themachine draft: false canonicalURL: \u0026#34;https://nobodyhome.dev/posts/hugo\u0026#34; showToc: true ShowCodeCopyButtons: true date: 2026-04-24 --- Build the post:\nlinks: [text to display](link url)\nimages: ![a short name](image location from the site root directory)\ncode snippets using the \u0026ldquo;`\u0026rdquo; character, use three to open and close code blocks \u0026ldquo;```\u0026rdquo;\nRegenerate the site contents:\nhugo build --minify --cleanDestinationDir or\nhugo build --minify --cleanDestinationDir -d [DESTINATION DIR] Move the contents of the public folder to your hosting service. 
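The final deploy step can be sketched as one short command sequence. The user@webhost login and /var/www/html web root below are placeholder assumptions - substitute the values for your own host:

```shell
# rebuild the site, then sync the public folder to the web root
hugo build --minify --cleanDestinationDir
rsync -av --delete public/ user@webhost:/var/www/html/
```

The --delete flag keeps the web root in lockstep with the public folder, so pages you remove locally also disappear from the live site.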
Additional Configuration Search See above \u0026lsquo;hugo.yaml\u0026rsquo; - the search.md file contains the following contents:\n--- title: \u0026#34;Search\u0026#34; # in any language you want layout: \u0026#34;search\u0026#34; # necessary for search url: \u0026#34;/search\u0026#34; description: \u0026#34;Find what you\u0026#39;re looking for\u0026#34; summary: \u0026#34;search\u0026#34; placeholder: \u0026#34;what are you looking for?\u0026#34; --- Posts Again, here\u0026rsquo;s the .md file:\n--- title: \u0026#34;Posts\u0026#34; layout: \u0026#34;archives\u0026#34; url: \u0026#34;/posts/\u0026#34; summary: \u0026#34;posts\u0026#34; --- Conclusion Hugo is incredibly flexible and endlessly customizable. This little walkthrough barely scratches the surface, but shows how I\u0026rsquo;ve implemented it on my website.\nReferences Hugo Christian Lempa\u0026rsquo;s Tutorial Another Lempa Tutorial Chris Titus Tech Hut ","permalink":"https://new.cloud.nobodyhome.dev/posts/hugo/","summary":"\u003ch2 id=\"hugo-site-example\"\u003eHugo Site Example:\u003c/h2\u003e\n\u003cp\u003e\u003cimg alt=\"nobodyhome\" loading=\"lazy\" src=\"/assets/hugo/nobodyhome.png\"\u003e\u003c/p\u003e\n\u003ch2 id=\"introduction\"\u003eIntroduction\u003c/h2\u003e\n\u003cp\u003eThis guide is not all inclusive. \u003ca href=\"https://gohugo.io/getting-started/quick-start/\"\u003eRTFM\u003c/a\u003e. Hugo is a static site generator, converting your .md text files and a chosen theme into a modern-looking website (like this one). 
There are a staggering number of \u003ca href=\"https://themes.gohugo.io/\"\u003ethemes\u003c/a\u003e to give you the look and feel that your site needs.\u003c/p\u003e\n\u003cp\u003e\u003cimg alt=\"themes\" loading=\"lazy\" src=\"/assets/hugo/hugo_themes.png\"\u003e\u003c/p\u003e\n\u003ch2 id=\"everything-up-front\"\u003eEverything Up Front\u003c/h2\u003e\n\u003cp\u003eIt all starts with the \u003ccode\u003ehugo.yaml\u003c/code\u003e file (you can use .toml too, but that\u0026rsquo;s beyond my expertise, consult the hugo \u003ca href=\"https://gohugo.io/documentation/\"\u003edocumentation\u003c/a\u003e). Here is the configuration for this site:\u003c/p\u003e","title":"Hugo Static Site Generator"},{"content":"References: Man Page\nInstall Docker Tutorial\nDocker Containers Can Do Too Much Your containers can do too much. Look at all the capabilities a Docker container gets by default:\n- SYS_ADMIN - NET_ADMIN - NET_RAW - FOWNER - SETGID - SETUID - CHOWN - AUDIT_CONTROL - AUDIT_READ - AUDIT_WRITE - BLOCK_SUSPEND - BPF - CHECKPOINT_RESTORE - DAC_READ_SEARCH - DAC_OVERRIDE - FSETID - IPC_LOCK - KILL - LEASE - LINUX_IMMUTABLE - MAC_ADMIN - MAC_OVERRIDE - MKNOD - NET_ADMIN - NET_BIND_SERVICE - NET_BROADCAST - PERFMON - SETFCAP - SETPCAP - SYS_BOOT - SYS_CHROOT - SYS_NICE - SYS_PACCT - SYS_PTRACE - SYS_RAWIO - SYS_RESOURCE - SYS_TIME - SYS_TTY_CONFIG - SYSLOG - WAKE_ALARM This should clearly be limited. Containers share functions of the host kernel; that\u0026rsquo;s how they cut down on overhead. Giving unnecessary permissions violates the security principle of least privilege. So, how do you go about it?\nShort answer: wing it.\nLong answer: you\u0026rsquo;re going to have to troubleshoot which permissions make your container work. 
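That trial-and-error loop can be sketched as a compose fragment. The service and image names below are placeholders, and the three capabilities are just a common starting set:

```yaml
services:
  myservice:              # placeholder service name
    image: myimage:latest # placeholder image
    cap_drop:
      - ALL               # start from zero capabilities
    cap_add:
      - CHOWN             # re-add capabilities one at a time,
      - SETGID            # redeploying and checking the container
      - SETUID            # logs until the service runs cleanly
```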
Here\u0026rsquo;s what has worked for me:\nDrop All Removes all kernel capabilities:\ncap_drop: - ALL A Python based Discord bot Cloudflare tunnel container Dockhand Hawser AdGuardHome Adguard Home\ncap_drop: - ALL cap_add: - SETGID - SETUID - CHOWN - NET_BIND_SERVICE - SYS_CHROOT CraftyController cap_drop: - ALL cap_add: - SETGID - SETUID - CHOWN IT-Tools cap_drop: - ALL cap_add: - CHOWN - SETGID - SETUID Nginx Container + Database cap_drop: - ALL cap_add: - SETGID - SETUID - CHOWN - DAC_OVERRIDE Nginx Container cap_drop: - ALL cap_add: - CHOWN - SETGID - SETUID Tor Tor Hidden Service\ncap_drop: - ALL cap_add: - NET_BIND_SERVICE Uptime Kuma cap_drop: - ALL cap_add: - SYS_ADMIN - NET_ADMIN - NET_RAW - FOWNER - SETGID - SETUID - CHOWN ","permalink":"https://new.cloud.nobodyhome.dev/posts/docker-permissions/","summary":"\u003ch3 id=\"references\"\u003eReferences:\u003c/h3\u003e\n\u003cp\u003e\u003ca href=\"https://man7.org/linux/man-pages/man7/capabilities.7.html\"\u003eMan Page\u003c/a\u003e\u003c/p\u003e\n\u003cp\u003e\u003ca href=\"/posts/install-docker/\"\u003eInstall Docker Tutorial\u003c/a\u003e\u003c/p\u003e\n\u003ch3 id=\"docker-containers-can-do-too-much\"\u003eDocker Containers Can Do Too Much\u003c/h3\u003e\n\u003cp\u003eYour containers can do too much. 
Look at all the capabilities a Docker container gets by default:\u003c/p\u003e\n\u003cpre tabindex=\"0\"\u003e\u003ccode\u003e  - SYS_ADMIN\n  - NET_ADMIN\n  - NET_RAW\n  - FOWNER\n  - SETGID\n  - SETUID\n  - CHOWN\n  - AUDIT_CONTROL\n  - AUDIT_READ\n  - AUDIT_WRITE\n  - BLOCK_SUSPEND\n  - BPF\n  - CHECKPOINT_RESTORE\n  - DAC_READ_SEARCH\n  - DAC_OVERRIDE\n  - FSETID\n  - IPC_LOCK\n  - KILL\n  - LEASE\n  - LINUX_IMMUTABLE\n  - MAC_ADMIN\n  - MAC_OVERRIDE\n  - MKNOD\n  - NET_ADMIN\n  - NET_BIND_SERVICE\n  - NET_BROADCAST\n  - PERFMON\n  - SETFCAP\n  - SETPCAP\n  - SYS_BOOT\n  - SYS_CHROOT\n  - SYS_NICE\n  - SYS_PACCT\n  - SYS_PTRACE\n  - SYS_RAWIO\n  - SYS_RESOURCE\n  - SYS_TIME\n  - SYS_TTY_CONFIG\n  - SYSLOG\n  - WAKE_ALARM\n\u003c/code\u003e\u003c/pre\u003e\u003cp\u003eThis should \u003cstrong\u003eclearly\u003c/strong\u003e be limited. Containers share functions of the host kernel; that\u0026rsquo;s how they cut down on overhead. Giving unnecessary permissions violates the security principle of least privilege. So, how do you go about it?\u003c/p\u003e","title":"Docker Permissions"},{"content":"References AdGuardHome Download AdGuardHome Fix systemd-resolved\nWhy AdGuardHome? AdGuard has become a key service in my homelab. I\u0026rsquo;m so used to having ads blocked across my network that it\u0026rsquo;s a surprise loading a site away from home and seeing the broken hellscape of ads everywhere. Get a network level adblocker and learn how to use it. 
The less tech-savvy folks in your home will thank you.\nInstallation Download the latest version of AdGuardHome Extract using tar -xf AdGuardHome_linux_amd64.tar.gz Move the folder to the destination: mv AdGuardHome [DESTINATION] Fedora: /usr/local/bin/ Ubuntu: /opt/ Install using sudo ./AdGuardHome -s install Set up your account at http://ADGUARD-SERVER:3000 Set your router\u0026rsquo;s DNS server to point at your AdGuardHome server (steps will vary by router) Set your AdGuard Block Lists Upstream Providers DNS Rewrites Allow Lists Custom Rules Back up your AdGuardHome.yaml Deploy with Docker Compose: services: adguardhome: image: adguard/adguardhome container_name: adguardhome volumes: #place AdGuardHome.yaml here if you already have a configured instance - [map to your /conf directory]:/opt/adguardhome/conf - [map to your /work directory]:/opt/adguardhome/work deploy: mode: global ports: - \u0026#34;53:53/udp\u0026#34; # \u0026lt;Host Port\u0026gt;:\u0026lt;Container Port\u0026gt; - \u0026#34;53:53/tcp\u0026#34; - \u0026#34;67:67/udp\u0026#34; # - \u0026#34;68:68/udp\u0026#34; - \u0026#34;80:80/tcp\u0026#34; - \u0026#34;443:443/tcp\u0026#34; - \u0026#34;443:443/udp\u0026#34; - \u0026#34;3000:3000/tcp\u0026#34; - \u0026#34;853:853/tcp\u0026#34; - \u0026#34;853:853/udp\u0026#34; - \u0026#34;8853:8853/udp\u0026#34; - \u0026#34;784:784/udp\u0026#34; - \u0026#34;5443:5443/tcp\u0026#34; - \u0026#34;5443:5443/udp\u0026#34; restart: unless-stopped Troubleshooting Systemd-Resolved Reference: Fix systemd-resolved Use these steps when systemd-resolved is using port 53:\nCreate a config folder inside /etc/systemd sudo mkdir -p /etc/systemd/resolved.conf.d Create a file called adguardhome.conf in /etc/systemd/resolved.conf.d/ with the following contents: [Resolve] DNS=127.0.0.1 DNSStubListener=no Activate the new resolved file sudo mv /etc/resolv.conf /etc/resolv.conf.backup sudo ln -s /run/systemd/resolve/resolv.conf /etc/resolv.conf Restart systemd-resolved sudo systemctl 
reload-or-restart systemd-resolved SE-Linux Your configuration may vary; I found that using Fedora\u0026rsquo;s Cockpit SE-Linux GUI was very helpful to identify and resolve errors. The following commands are what worked for me:\nsudo ausearch -c \u0026#39;(uardHome)\u0026#39; --raw | audit2allow -M my-uardHome semodule -X 300 -i my-uardHome.pp ","permalink":"https://new.cloud.nobodyhome.dev/posts/adguardhome/","summary":"\u003ch4 id=\"references\"\u003eReferences\u003c/h4\u003e\n\u003cp\u003e\u003ca href=\"https://adguard-dns.io/kb/adguard-home/getting-started/\"\u003eAdGuardHome\u003c/a\u003e\n\u003ca href=\"https://github.com/AdguardTeam/AdGuardHome/releases/tag/v0.107.73\"\u003eDownload AdGuardHome\u003c/a\u003e\n\u003ca href=\"https://adguard-dns.io/kb/adguard-home/faq/\"\u003eFix systemd-resolved\u003c/a\u003e\u003c/p\u003e\n\u003ch4 id=\"why-adguardhome\"\u003eWhy AdGuardHome?\u003c/h4\u003e\n\u003cp\u003eAdGuard has become a key service in my homelab. I\u0026rsquo;m so used to having ads blocked across my network that it\u0026rsquo;s a surprise loading a site away from home and seeing the broken hellscape of ads everywhere. Get a network level adblocker and learn how to use it. 
The less tech-savvy folks in your home will thank you.\u003c/p\u003e\n\u003ch4 id=\"installation\"\u003eInstallation\u003c/h4\u003e\n\u003col\u003e\n\u003cli\u003e\u003ca href=\"https://github.com/AdguardTeam/AdGuardHome/releases/tag/v0.107.73\"\u003eDownload\u003c/a\u003e the latest version of AdGuardHome\u003c/li\u003e\n\u003cli\u003eExtract using \u003ccode\u003etar -xf AdGuardHome_linux_amd64.tar.gz\u003c/code\u003e\u003c/li\u003e\n\u003cli\u003eMove the folder to the destination: \u003ccode\u003emv AdGuardHome [DESTINATION]\u003c/code\u003e\n\u003cul\u003e\n\u003cli\u003e\u003cstrong\u003eFedora\u003c/strong\u003e: \u003ccode\u003e/usr/local/bin/\u003c/code\u003e\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003eUbuntu\u003c/strong\u003e: \u003ccode\u003e/opt/\u003c/code\u003e\u003c/li\u003e\n\u003c/ul\u003e\n\u003c/li\u003e\n\u003cli\u003eInstall using \u003ccode\u003esudo ./AdGuardHome -s install\u003c/code\u003e\u003c/li\u003e\n\u003cli\u003eSet up your account at \u003ccode\u003ehttp://ADGUARD-SERVER:3000\u003c/code\u003e\u003c/li\u003e\n\u003cli\u003eSet your router\u0026rsquo;s DNS server to point at your AdGuardHome server (steps will vary by router)\n\u003cimg alt=\"dns settings\" loading=\"lazy\" src=\"/assets/adguardhome/dns_settings.png\"\u003e\u003c/li\u003e\n\u003cli\u003eSet your AdGuard\n\u003cul\u003e\n\u003cli\u003eBlock Lists\n\u003cimg alt=\"dns blocklists\" loading=\"lazy\" src=\"/assets/adguardhome/dns_blocklists.png\"\u003e\u003c/li\u003e\n\u003cli\u003eUpstream Providers\n\u003cimg alt=\"dns providers\" loading=\"lazy\" src=\"/assets/adguardhome/dns_providers.png\"\u003e\u003c/li\u003e\n\u003cli\u003eDNS Rewrites\u003c/li\u003e\n\u003cli\u003eAllow Lists\u003c/li\u003e\n\u003cli\u003eCustom Rules\n\u003cimg alt=\"dns custom rules\" loading=\"lazy\" src=\"/assets/adgaurdhome/dns_custom_rules.png\"\u003e\u003c/li\u003e\n\u003cli\u003eBack up your AdGuardHome.yaml\u003c/li\u003e\n\u003c/ul\u003e\n\u003c/li\u003e\n\u003c/ol\u003e\n\u003ch5 
id=\"deploy-with-docker-compose\"\u003eDeploy with Docker Compose:\u003c/h5\u003e\n\u003cpre tabindex=\"0\"\u003e\u003ccode\u003eservices:\n  adguardhome:\n    image: adguard/adguardhome\n    container_name: adguardhome\n    volumes:\n\t    #place AdGuardHome.yaml here if you already have a configured instance\n      - [map to your /conf directory]:/opt/adguardhome/conf \n      - [map to your /work directory]:/opt/adguardhome/work\n    deploy: \n      mode: global\n    ports:\n      - \u0026#34;53:53/udp\u0026#34;  # \u0026lt;Host Port\u0026gt;:\u0026lt;Container Port\u0026gt;\n      - \u0026#34;53:53/tcp\u0026#34;\n      - \u0026#34;67:67/udp\u0026#34;\n#      - \u0026#34;68:68/udp\u0026#34;\n      - \u0026#34;80:80/tcp\u0026#34;\n      - \u0026#34;443:443/tcp\u0026#34;\n      - \u0026#34;443:443/udp\u0026#34;\n      - \u0026#34;3000:3000/tcp\u0026#34;\n      - \u0026#34;853:853/tcp\u0026#34;\n      - \u0026#34;853:853/udp\u0026#34;\n      - \u0026#34;8853:8853/udp\u0026#34;\n      - \u0026#34;784:784/udp\u0026#34;\n      - \u0026#34;5443:5443/tcp\u0026#34;\n      - \u0026#34;5443:5443/udp\u0026#34;\n    restart: unless-stopped\n\u003c/code\u003e\u003c/pre\u003e\u003ch4 id=\"troubleshooting\"\u003eTroubleshooting\u003c/h4\u003e\n\u003ch5 id=\"systemd-resolved\"\u003eSystemd-Resolved\u003c/h5\u003e\n\u003cp\u003eReference: \u003ca href=\"https://adguard-dns.io/kb/adguard-home/faq/\"\u003eFix systemd-resolved\u003c/a\u003e\nUse these steps when systemd-resolved is using port 53:\u003c/p\u003e","title":"AdGuardHome"},{"content":"References: Proton Email Filters Proton Sieve Filters Why Sieve Filters? Rather than a long list of email filter rules that become unmanageable, Proton encourages the use of sieve filters - and limits users to 250 filters total. Sieve allows a user to combine what might be over a dozen filter rules down into one logical, legible, flexible flow.\nThis little blog post is specific to Proton and how they do email filters with Sieve. 
This post is not all encompassing, RTFM.\nLessons Learned (so far) Sieve filters will need tweaking and testing to get right; I\u0026rsquo;d recommend setting up a git repository so filters can be updated in a code editor instead of directly in the Proton Mail settings. Proton treats personal tags and folders the same from a filtering perspective. Separate filters into categories; it\u0026rsquo;ll make management easier when you want to make updates / changes. Be willing to update filters - once you get the hang of how they work, updates are easy, and usually just a line or two in the relevant filter. If you\u0026rsquo;re confused on how to do something, make the filter via the GUI, save it, then click the drop down to edit the new filter in sieve. This will show you how Proton wants their filters constructed. Filter terms are case agnostic: deliver matches DeLivErY. 20FEB26: addflag needs to come before fileinto. Filters Require Think of this as an #include or import statement:\nrequire [\u0026#34;include\u0026#34;, \u0026#34;environment\u0026#34;, \u0026#34;variables\u0026#34;, \u0026#34;relational\u0026#34;, \u0026#34;comparator-i;ascii-numeric\u0026#34;, \u0026#34;spamtest\u0026#34;, \u0026#34;fileinto\u0026#34;, \u0026#34;imap4flags\u0026#34;]; Spam Blocking Proton includes a spam check at the top of each rule to prevent your filter from being run against spam messages.\n# Generated: Do not run this script on spam messages if allof (environment :matches \u0026#34;vnd.proton.spam-threshold\u0026#34; \u0026#34;*\u0026#34;, spamtest :value \u0026#34;ge\u0026#34; :comparator \u0026#34;i;ascii-numeric\u0026#34; \u0026#34;${1}\u0026#34;) { return; } My Filters (examples) I store each of the rules below in their own filter for easy editing/troubleshooting later. 
Note: Not all filters are included below for privacy reasons (I\u0026rsquo;m using Proton Mail, you don\u0026rsquo;t get to see everything)!\nTag Only Filter Use: Tags all emails coming from a Proton Alias, Google, and eBay\n# Alias if anyof ( allof (address :all :comparator \u0026#34;i;unicode-casemap\u0026#34; :matches \u0026#34;To\u0026#34; [\u0026#34;*passmail.net\u0026#34;, \u0026#34;*passinbox.com\u0026#34;, \u0026#34;*passmail.com\u0026#34;, \u0026#34;*passfwd.com\u0026#34;]), allof (address :all :comparator \u0026#34;i;unicode-casemap\u0026#34; :is \u0026#34;To\u0026#34; [\u0026#34;stuff@customdomain.com\u0026#34;, \u0026#34;something@customdomain.com\u0026#34;])){ fileinto \u0026#34;Alias\u0026#34;; } # ebay elsif allof (address :all :comparator \u0026#34;i;unicode-casemap\u0026#34; :matches \u0026#34;From\u0026#34; \u0026#34;*ebay.com\u0026#34;) { fileinto \u0026#34;ebay\u0026#34;; } # Google elsif allof (address :all :comparator \u0026#34;i;unicode-casemap\u0026#34; :matches \u0026#34;From\u0026#34; \u0026#34;*google.com\u0026#34;) { fileinto \u0026#34;Google\u0026#34;; } Amazon Use: Orders go into my purchases folder, account info into my subscriptions folder, and regular ads into a shopping folder AND marks them as read.\n# Amazon if allof (address :all :comparator \u0026#34;i;unicode-casemap\u0026#34; :matches \u0026#34;From\u0026#34; \u0026#34;*amazon.com\u0026#34;) { fileinto \u0026#34;Amazon\u0026#34;; # Purchases if allof ( header :contains \u0026#34;subject\u0026#34; \u0026#34;order\u0026#34;, header :contains \u0026#34;subject\u0026#34; [\u0026#34;confirmation\u0026#34;, \u0026#34;confirmed\u0026#34;, \u0026#34;deliver\u0026#34;]){ fileinto \u0026#34;Shopping/Purchases\u0026#34;; stop;} # Subscriptions elsif allof (address :all :comparator \u0026#34;i;unicode-casemap\u0026#34; :is \u0026#34;From\u0026#34; \u0026#34;account-update@amazon.com\u0026#34;){ fileinto \u0026#34;Family/Subscriptions\u0026#34;; stop; } # Shopping else { addflag 
\u0026#34;\\\\Seen\u0026#34;; fileinto \u0026#34;Shopping\u0026#34;; stop; } } Credit Cards Use: Emails from credit card providers go directly into my credit card folder.\n# Credit Cards if allof (address :all :comparator \u0026#34;i;unicode-casemap\u0026#34; :matches \u0026#34;From\u0026#34; [\u0026#34;*americanexpress.com\u0026#34;, \u0026#34;*chase.com\u0026#34;, \u0026#34;*citi.com\u0026#34;, \u0026#34;*capitalone.com\u0026#34;]) { fileinto \u0026#34;Finance/Credit Cards\u0026#34;; stop; } Homelab Use: Emails from Cloudflare get tagged as Cloudflare and filed into my Homelab folder. Note: This is an example of how Proton treats folders and tags equally.\n# Cloudflare elsif allof (address :all :comparator \u0026#34;i;unicode-casemap\u0026#34; :matches \u0026#34;From\u0026#34; \u0026#34;*cloudflare.com\u0026#34;) { fileinto \u0026#34;Cloudflare\u0026#34;; fileinto \u0026#34;Homelab\u0026#34;; stop; } Sign-Ups Use: Moves any email from a random sign-up service directly into a folder and then marks it as read.\n# Signups elsif allof (address :all :comparator \u0026#34;i;unicode-casemap\u0026#34; :matches \u0026#34;From\u0026#34; [\u0026#34;*rover.com\u0026#34;, \u0026#34;*nextdoor.com\u0026#34;, \u0026#34;*facebook.com\u0026#34;, \u0026#34;*.twitch.tv\u0026#34;, \u0026#34;*linkedin.com\u0026#34;]) { addflag \u0026#34;\\\\Seen\u0026#34;; fileinto \u0026#34;Shopping/Signups\u0026#34;; stop; } Subscriptions Use: Moves all the emails from subscription services into the subscriptions folder.\n# Subscriptions if allof (address :all :comparator \u0026#34;i;unicode-casemap\u0026#34; :matches \u0026#34;From\u0026#34; [\u0026#34;no-reply@costco.com\u0026#34;, \u0026#34;noreply@google.com\u0026#34;, \u0026#34;*netflix.com\u0026#34;, \u0026#34;*hulu.com\u0026#34;, \u0026#34;*calendar.proton.me\u0026#34;, \u0026#34;*notify.proton.me\u0026#34;, \u0026#34;*uber.com\u0026#34;, \u0026#34;*paramountplus.com\u0026#34;, \u0026#34;*hbomax.com\u0026#34;, 
\u0026#34;*disneyplus.com\u0026#34;]){ # Proton if allof (address :all :comparator \u0026#34;i;unicode-casemap\u0026#34; :matches \u0026#34;From\u0026#34; [\u0026#34;*calendar.proton.me\u0026#34;, \u0026#34;*notify.proton.me\u0026#34;]) { fileinto \u0026#34;Proton\u0026#34;; } # Streaming elsif allof (address :all :comparator \u0026#34;i;unicode-casemap\u0026#34; :matches \u0026#34;From\u0026#34; [\u0026#34;*paramountplus.com\u0026#34;, \u0026#34;*hbomax.com\u0026#34;, \u0026#34;*hulu.com\u0026#34;, \u0026#34;*netflix.com\u0026#34;, \u0026#34;*disneyplus.com\u0026#34;]) { addflag \u0026#34;\\\\Seen\u0026#34;; fileinto \u0026#34;Streaming\u0026#34;; } # Uber elsif allof (address :all :comparator \u0026#34;i;unicode-casemap\u0026#34; :matches \u0026#34;From\u0026#34; \u0026#34;*uber.com\u0026#34;) { addflag \u0026#34;\\\\Seen\u0026#34;; fileinto \u0026#34;Uber\u0026#34;; } fileinto \u0026#34;Family/Subscriptions\u0026#34;; stop; } Travel Use: Finds and moves emails from travel-related services.\n# Travel elsif allof (address :all :comparator \u0026#34;i;unicode-casemap\u0026#34; :matches \u0026#34;From\u0026#34; [\u0026#34;*delta.com\u0026#34;, \u0026#34;*amtrak.com\u0026#34;, \u0026#34;*@hilton.com\u0026#34;, \u0026#34;*ihg.com\u0026#34;, \u0026#34;*marriott.com\u0026#34;]) { fileinto \u0026#34;Travel\u0026#34;; stop; } Conclusion Sieve filters can get far more complex, such as sending auto replies depending on the sender address, the time an email was sent, the email size etc. 
This is just an overview of how some of mine are set up (and mine still need a little work).\n","permalink":"https://new.cloud.nobodyhome.dev/posts/sieve-filters/","summary":"\u003ch2 id=\"references\"\u003eReferences:\u003c/h2\u003e\n\u003cul\u003e\n\u003cli\u003e\u003ca href=\"https://proton.me/support/email-inbox-filters\"\u003eProton Email Filters\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://proton.me/support/sieve-advanced-custom-filters\"\u003eProton Sieve Filters\u003c/a\u003e\u003c/li\u003e\n\u003c/ul\u003e\n\u003ch2 id=\"why-sieve-filters\"\u003eWhy Sieve Filters?\u003c/h2\u003e\n\u003cp\u003eRather than a long list of email filter rules that become unmanageable, Proton encourages the use of sieve filters - and \u003cstrong\u003elimits\u003c/strong\u003e users to 250 filters total. Sieve allows a user to combine what might be over a dozen filter rules down into one logical, legible, flexible flow.\u003c/p\u003e\n\u003cp\u003eThis little blog post is specific to Proton and how they do email filters with Sieve. 
This post is not all encompassing, RTFM.\u003c/p\u003e","title":"Proton Sieve Filters"},{"content":"References TechHut Dockhand Dockhand Documentation Setup Docker Compose: services: dockhand: image: fnsys/dockhand:latest container_name: dockhand restart: unless-stopped ports: - \u0026#34;3000:3000\u0026#34; volumes: - /var/run/docker.sock:/var/run/docker.sock - ./data:/app/data - /home/mechanicus/Code/compose:/mnt/compose Notes: using a separate data directory instead of a volume mount will make the container easier to manage and transfer if necessary\nAdding Environments My preferred method is to use the hawser connector:\ndocker run -d --name hawser --restart unless-stopped \\ -v /var/run/docker.sock:/var/run/docker.sock \\ -v /home/mechanicus/code/docker-compose/:/mnt/compose \\ -p 2376:2376 -e TOKEN=[SECURE TOKEN] \\ ghcr.io/finsys/hawser:latest Note: Include the location of compose files for easier management\nFeatures Multi Node Monitoring via Hawser Git Repositories: set up a git repository that can be tracked by Dockhand for automatic stack redeployment\nNotifications: discord, slack, telegram, ntfy\u0026hellip; Multi-user support - but user permission types are locked behind the enterprise license\nStack management Easy log reviews\nIssues: Does not handle Docker Swarms - there are no features for collective swarm management, everything is treated as a single node\n","permalink":"https://new.cloud.nobodyhome.dev/posts/dockhand/","summary":"\u003ch3 id=\"references\"\u003eReferences\u003c/h3\u003e\n\u003cul\u003e\n\u003cli\u003e\u003ca href=\"https://www.youtube.com/watch?v=dwFktbtuTFQ\"\u003eTechHut\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://dockhand.pro/\"\u003eDockhand\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://dockhand.pro/manual/\"\u003eDockhand Documentation\u003c/a\u003e\u003c/li\u003e\n\u003c/ul\u003e\n\u003cp\u003e\u003cimg alt=\"dashboard\" loading=\"lazy\" 
src=\"/assets/dockhand/dockhand-dashboard.png\"\u003e\u003c/p\u003e\n\u003ch3 id=\"setup\"\u003eSetup\u003c/h3\u003e\n\u003ch4 id=\"docker-compose\"\u003eDocker Compose:\u003c/h4\u003e\n\u003cpre tabindex=\"0\"\u003e\u003ccode\u003eservices:\n  dockhand:\n    image: fnsys/dockhand:latest\n    container_name: dockhand\n    restart: unless-stopped\n    ports:\n      - \u0026#34;3000:3000\u0026#34;\n    volumes:\n      - /var/run/docker.sock:/var/run/docker.sock\n      - ./data:/app/data\n      - /home/mechanicus/Code/compose:/mnt/compose\n\u003c/code\u003e\u003c/pre\u003e\u003cp\u003eNotes: using a separate data directory instead of a volume mount will make the container easier to manage and transfer if necessary\u003c/p\u003e\n\u003ch4 id=\"adding-environments\"\u003eAdding Environments\u003c/h4\u003e\n\u003cp\u003eMy preferred method is to use the hawser connector:\u003c/p\u003e\n\u003cpre tabindex=\"0\"\u003e\u003ccode\u003edocker run -d --name hawser --restart unless-stopped \\\n-v /var/run/docker.sock:/var/run/docker.sock \\\n-v /home/mechanicus/code/docker-compose/:/mnt/compose \\\n-p 2376:2376 -e TOKEN=[SECURE TOKEN] \\\nghcr.io/finsys/hawser:latest\n\u003c/code\u003e\u003c/pre\u003e\u003cp\u003eNote: Include the location of compose files for easier management\u003c/p\u003e","title":"Dockhand"},{"content":"References Tailscale Tailscale Admin Console Overview This walkthrough covers the very basics of setting up a Tailscale VPN for travel.\nScenario: You like to travel, but have trouble accessing your accounts (banking, social media, entertainment) while you\u0026rsquo;re abroad. You travel with a laptop, but you also have a desktop device back home. Wouldn\u0026rsquo;t it be great if you could just access your accounts and services like you were sitting at your desktop?\nSetup This assumes you have two devices. One that we\u0026rsquo;ll call server will serve as our Exit Node. 
We\u0026rsquo;ll call the traveling device(s) client.\nAn Exit Node will be the exit point for our VPN (Virtual Private Network). We\u0026rsquo;ll have an encrypted connection from our client devices back through the Exit Node, and out to the open internet. To external sites and services, it will just look like you\u0026rsquo;re accessing the internet from your home network.\nClient \u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;-\u0026gt; Tailscale Server \u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;-\u0026gt; Exit Node \u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;-\u0026gt; Internet\nInstall Tailscale Download Tailscale from their site, or install it from your app store. This step is the same on both client and server devices. Configure Tailscale Create / Login to your Tailscale account via the app on both server and client devices. On the server, configure Tailscale to always run at startup, and to serve as an Exit Node. In the Tailscale Admin Console, select your server and enable it to serve as an Exit Node for your Tailnet. On your client device(s), select your server as your exit node. (Optional) You can disable key expiration on your server device. This is a security feature; however, if you\u0026rsquo;re going to be traveling for a while / don\u0026rsquo;t want to have to think about it again - disabling key expiration will ensure that your server will remain logged into your Tailnet. 
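On machines with the Tailscale CLI installed, the configuration steps above map to two invocations (the GUI apps expose the same options; my-server below is a placeholder for your own machine name):

```shell
# on the server: advertise this machine as an exit node
# (it must then be approved in the Tailscale admin console)
sudo tailscale up --advertise-exit-node

# on the client: route all traffic through the server
sudo tailscale up --exit-node=my-server
```

Note that on Linux servers Tailscale also requires IP forwarding to be enabled before the exit node will pass traffic; the official exit node documentation covers the exact sysctl settings.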
Conclusion Tailscale has many more features not covered here - you can dive deep into Tailscale to configure DNS, service exposure, remote SSH access, access control rules\u0026hellip; this tutorial barely scratches the surface.\n","permalink":"https://new.cloud.nobodyhome.dev/posts/tailscale-easy-vpn/","summary":"\u003ch3 id=\"references\"\u003eReferences\u003c/h3\u003e\n\u003cul\u003e\n\u003cli\u003e\u003ca href=\"https://tailscale.com\"\u003eTailscale\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://login.tailscale.com/admin/machines\"\u003eTailscale Admin Console\u003c/a\u003e\u003c/li\u003e\n\u003c/ul\u003e\n\u003ch3 id=\"overview\"\u003eOverview\u003c/h3\u003e\n\u003cp\u003eThis walkthrough covers the very basics of setting up a Tailscale VPN for travel.\u003c/p\u003e\n\u003cp\u003eScenario: You like to travel, but have trouble accessing your accounts (banking, social media, entertainment) while you\u0026rsquo;re abroad. You travel with a laptop, but you also have a desktop device back home. 
Wouldn\u0026rsquo;t it be great if you could just access your accounts and services like you were sitting at your desktop?\u003c/p\u003e","title":"Tailscale: Easy VPN"},{"content":"Directory Setup Set up the files and directories: mkdir -p tor-site/keys tor-site/html tor-site/logs touch tor-site/torrc Set permissions: chmod 700 tor-site/keys chmod 600 tor-site/logs sudo chown root:root tor-site/keys tor-site/logs Content Setup Add the files for your website into the tor-site/html folder: example:\n\u0026lt;!DOCTYPE html\u0026gt; \u0026lt;html\u0026gt; \u0026lt;body\u0026gt; \u0026lt;h1\u0026gt;Hello from the Onion Router!\u0026lt;/h1\u0026gt; \u0026lt;p\u0026gt;This site is hosted inside Docker.\u0026lt;/p\u0026gt; \u0026lt;/body\u0026gt; \u0026lt;/html\u0026gt; Docker Setup [[Install Docker]] Docker Compose File compose.yaml\nservices: nginx: container_name: nginx image: nginx cap_drop: - ALL cap_add: - CHOWN - SETGID - SETUID volumes: - ./html:/usr/share/nginx/html:ro - ./logs:/var/log/nginx networks: - tor_network tor: container_name: tor volumes: - ./torrc:/etc/tor/torrc:ro - ./keys:/var/lib/tor/hidden_service/ image: alpine:latest entrypoint: sh -c \u0026#34;apk add --no-cache tor \u0026amp;\u0026amp; tor -f /etc/tor/torrc\u0026#34; security_opt: - no-new-privileges:true cap_drop: - ALL cap_add: - NET_BIND_SERVICE networks: - tor_network depends_on: - nginx networks: tor_network: nginx is the name of your web server container - this is important for the torrc file. 
:ro sets the volume to read only networks: tor_network means all the traffic stays inside the tor network security_opt: - no-new-privileges:true prevents the user from running as root through setuid or setgid cap_drop: - ALL removes all default Linux capabilities granted to a container cap_add: - NET_BIND_SERVICE will allow tor to work with only the necessary capabilities networks ensures that all traffic stays inside the docker network with a custom bridge tor_network to access the tor relays See Docker Permissions Create torrc: # Standard Tor config DataDirectory /var/lib/tor # Define the Hidden Service HiddenServiceDir /var/lib/tor/hidden_service/ HiddenServicePort 80 nginx:80 note: the name nginx should be the same as the name of your web server container in the compose.yaml (see [[#Docker Setup]]). Notes: Did you know you can make a custom tor site name? See the Named Tor Site. The docker service setup: Dockhand Portainer services: nginx: container_name: nginx image: nginx volumes: - /home/mechanicus/code/tor-site/html:/usr/share/nginx/html:ro - /home/mechanicus/code/tor-site/logs:/var/log/nginx networks: - tor_network deploy: mode: replicated replicas: 1 labels: - \u0026#34;com.centurylinklabs.watchtower.enable=true\u0026#34; - \u0026#34;label=shepherd.autodeploy=true\u0026#34; tor: container_name: tor volumes: - /home/mechanicus/code/tor-site/torrc:/etc/tor/torrc:ro - /home/mechanicus/code/tor-site/keys:/var/lib/tor/hidden_service/ image: alpine:latest entrypoint: sh -c \u0026#34;apk add --no-cache tor \u0026amp;\u0026amp; tor -f /etc/tor/torrc\u0026#34; security_opt: - no-new-privileges:true cap_drop: - ALL cap_add: - NET_BIND_SERVICE networks: - tor_network depends_on: - nginx deploy: mode: replicated replicas: 1 labels: - \u0026#34;com.centurylinklabs.watchtower.enable=true\u0026#34; - \u0026#34;label=shepherd.autodeploy=true\u0026#34; networks: tor_network: ","permalink":"https://new.cloud.nobodyhome.dev/posts/tor-hidden-services/","summary":"\u003ch3 
id=\"directory-setup\"\u003eDirectory Setup\u003c/h3\u003e\n\u003col\u003e\n\u003cli\u003eSet up the files and directories:\u003c/li\u003e\n\u003c/ol\u003e\n\u003cpre tabindex=\"0\"\u003e\u003ccode\u003emkdir -p tor-site/keys tor-site/html tor-site/logs\ntouch tor-site/torrc\n\u003c/code\u003e\u003c/pre\u003e\u003col start=\"2\"\u003e\n\u003cli\u003eSet permissions:\n\u003cpre tabindex=\"0\"\u003e\u003ccode\u003echmod 700 tor-site/keys \nchmod 600 tor-site/logs\nsudo chown root:root tor-site/keys tor-site/logs\n\u003c/code\u003e\u003c/pre\u003e\u003c/li\u003e\n\u003c/ol\u003e\n\u003ch3 id=\"content-setup\"\u003eContent Setup\u003c/h3\u003e\n\u003cp\u003eAdd the files for your website into the \u003ccode\u003etor-site/html\u003c/code\u003e folder:\nexample:\u003c/p\u003e\n\u003cpre tabindex=\"0\"\u003e\u003ccode\u003e\u0026lt;!DOCTYPE html\u0026gt;\n\u0026lt;html\u0026gt;\n\u0026lt;body\u0026gt;\n    \u0026lt;h1\u0026gt;Hello from the Onion Router!\u0026lt;/h1\u0026gt;\n    \u0026lt;p\u0026gt;This site is hosted inside Docker.\u0026lt;/p\u0026gt;\n\u0026lt;/body\u0026gt;\n\u0026lt;/html\u0026gt;\n\u003c/code\u003e\u003c/pre\u003e\u003ch3 id=\"docker-setup\"\u003eDocker Setup\u003c/h3\u003e\n\u003cp\u003e[[Install Docker]]\nDocker  Compose File\n\u003ccode\u003ecompose.yaml\u003c/code\u003e\u003c/p\u003e\n\u003cpre tabindex=\"0\"\u003e\u003ccode\u003eservices:\n  nginx:\n    container_name: nginx\n    image: nginx\n    cap_drop:\n      - ALL\n    cap_add:\n      - CHOWN\n      - SETGID\n      - SETUID\n    volumes:\n      - ./html:/usr/share/nginx/html:ro\n      - ./logs:/var/log/nginx\n    networks:\n      - tor_network\n  tor:\n    container_name: tor\n    volumes:\n      - ./torrc:/etc/tor/torrc:ro\n      - ./keys:/var/lib/tor/hidden_service/\n    image: alpine:latest\n    entrypoint: sh -c \u0026#34;apk add --no-cache tor \u0026amp;\u0026amp; tor -f /etc/tor/torrc\u0026#34;\n    security_opt:\n      - no-new-privileges:true\n    cap_drop:\n      - ALL\n    
cap_add:\n      - NET_BIND_SERVICE\n    networks:\n      - tor_network\n    depends_on:\n      - nginx\n\nnetworks:\n  tor_network:\n\u003c/code\u003e\u003c/pre\u003e\u003cul\u003e\n\u003cli\u003e\u003ccode\u003enginx\u003c/code\u003e is the name of your web server container - this is important for the \u003ccode\u003etorrc\u003c/code\u003e file.\u003c/li\u003e\n\u003cli\u003e\u003ccode\u003e:ro\u003c/code\u003e sets the volume to read only\u003c/li\u003e\n\u003cli\u003e\u003ccode\u003enetworks: tor_network\u003c/code\u003e means all the traffic stays inside the tor network\u003c/li\u003e\n\u003cli\u003e\u003ccode\u003esecurity_opt: - no-new-privileges:true\u003c/code\u003e prevents the user from running as root through \u003ccode\u003esetuid\u003c/code\u003e or \u003ccode\u003esetgid\u003c/code\u003e\u003c/li\u003e\n\u003cli\u003e\u003ccode\u003ecap_drop: - ALL\u003c/code\u003e removes all default Linux capabilities granted to a container\u003c/li\u003e\n\u003cli\u003e\u003ccode\u003ecap_add: - NET_BIND_SERVICE\u003c/code\u003e will allow tor to work with only the necessary capabilities\u003c/li\u003e\n\u003cli\u003e\u003ccode\u003enetworks\u003c/code\u003e ensures that all traffic stays inside the docker network with a custom bridge \u003ccode\u003etor_network\u003c/code\u003e to access the tor relays\nSee \u003ca href=\"/posts/docker-permissions/\"\u003eDocker Permissions\u003c/a\u003e\u003c/li\u003e\n\u003c/ul\u003e\n\u003ch3 id=\"create-torrc\"\u003eCreate \u003ccode\u003etorrc\u003c/code\u003e:\u003c/h3\u003e\n\u003cpre tabindex=\"0\"\u003e\u003ccode\u003e# Standard Tor config\nDataDirectory /var/lib/tor\n\n# Define the Hidden Service\nHiddenServiceDir /var/lib/tor/hidden_service/\nHiddenServicePort 80 nginx:80\n\u003c/code\u003e\u003c/pre\u003e\u003cul\u003e\n\u003cli\u003enote: the name \u003ccode\u003enginx\u003c/code\u003e should be the same as the name of your web server container in the \u003ccode\u003ecompose.yaml\u003c/code\u003e (see [[#Docker 
Setup]]).\u003c/li\u003e\n\u003c/ul\u003e\n\u003ch3 id=\"notes\"\u003eNotes:\u003c/h3\u003e\n\u003cul\u003e\n\u003cli\u003eDid you know you can make a custom tor site name? See the \u003ca href=\"/posts/named-tor-site/\"\u003eNamed Tor Site\u003c/a\u003e.\u003c/li\u003e\n\u003cli\u003eThe docker service setup:\n\u003ca href=\"/posts/dockhand/\"\u003eDockhand\u003c/a\u003e\n\u003ca href=\"/posts/portainer/\"\u003ePortainer\u003c/a\u003e\u003c/li\u003e\n\u003c/ul\u003e\n\u003cpre tabindex=\"0\"\u003e\u003ccode\u003e\nservices:\n  nginx:\n    container_name: nginx\n    image: nginx\n    volumes:\n      - /home/mechanicus/code/tor-site/html:/usr/share/nginx/html:ro\n      - /home/mechanicus/code/tor-site/logs:/var/log/nginx\n    networks:\n      - tor_network\n    deploy: \n      mode: replicated\n      replicas: 1\n    labels:\n      - \u0026#34;com.centurylinklabs.watchtower.enable=true\u0026#34;\n      - \u0026#34;label=shepherd.autodeploy=true\u0026#34;\n  tor:\n    container_name: tor\n    volumes:\n      - /home/mechanicus/code/tor-site/torrc:/etc/tor/torrc:ro\n      - /home/mechanicus/code/tor-site/keys:/var/lib/tor/hidden_service/\n    image: alpine:latest\n    entrypoint: sh -c \u0026#34;apk add --no-cache tor \u0026amp;\u0026amp; tor -f /etc/tor/torrc\u0026#34;\n    security_opt:\n      - no-new-privileges:true\n    cap_drop:\n      - ALL\n    cap_add:\n      - NET_BIND_SERVICE\n    networks:\n      - tor_network\n    depends_on:\n      - nginx\n    deploy: \n      mode: replicated\n      replicas: 1\n    labels:\n      - \u0026#34;com.centurylinklabs.watchtower.enable=true\u0026#34;\n      - \u0026#34;label=shepherd.autodeploy=true\u0026#34;\n\nnetworks:\n  tor_network:\n\u003c/code\u003e\u003c/pre\u003e","title":"Tor Site"},{"content":"Install Docker Install Docker Tutorial\nSetup Buildx Environment docker buildx create \\ --name container-builder \\ --driver docker-container \\ --bootstrap --use Build the Container docker buildx build --platform 
linux/amd64,linux/arm64,linux/arm/v7 \\ -t [repository]/[containername]:[tag] . --push The -t flag sets the naming convention for the container, . tells docker where to build the container (where the Dockerfile is located), and --push sends it to the Docker Hub repository.\nTag a Docker Container docker tag [name]:[tag] [new-name]:[new-tag] Save and Transfer a Docker Container docker save -o [name] [name]:[tag] rsync -P [name] [target]:[location] docker load -i [name] ","permalink":"https://new.cloud.nobodyhome.dev/posts/building-docker-containers/","summary":"\u003ch3 id=\"install-docker\"\u003eInstall Docker\u003c/h3\u003e\n\u003cp\u003e\u003ca href=\"/posts/install-docker/\"\u003eInstall Docker Tutorial\u003c/a\u003e\u003c/p\u003e\n\u003ch3 id=\"setup-buildx-environment\"\u003eSetup Buildx Environment\u003c/h3\u003e\n\u003cpre tabindex=\"0\"\u003e\u003ccode\u003edocker buildx create \\\n  --name container-builder \\\n  --driver docker-container \\\n  --bootstrap --use\n\u003c/code\u003e\u003c/pre\u003e\u003ch3 id=\"build-the-container\"\u003eBuild the Container\u003c/h3\u003e\n\u003cpre tabindex=\"0\"\u003e\u003ccode\u003edocker buildx build --platform linux/amd64,linux/arm64,linux/arm/v7 \\\n-t [repository]/[containername]:[tag] . 
--push\n\u003c/code\u003e\u003c/pre\u003e\u003cp\u003eThe \u003ccode\u003e-t\u003c/code\u003e flag sets the naming convention for the container, \u003ccode\u003e.\u003c/code\u003e tells docker where to build the container (where the Dockerfile is located), and \u003ccode\u003e--push\u003c/code\u003e sends it to the \u003ca href=\"https://hub.docker.com/\"\u003eDocker Hub\u003c/a\u003e repository.\u003c/p\u003e\n\u003ch3 id=\"tag-a-docker-container\"\u003eTag a Docker Container\u003c/h3\u003e\n\u003cpre tabindex=\"0\"\u003e\u003ccode\u003edocker tag [name]:[tag] [new-name]:[new-tag]\n\u003c/code\u003e\u003c/pre\u003e\u003ch3 id=\"save-and-transfer-a-docker-container\"\u003eSave and Transfer a Docker Container\u003c/h3\u003e\n\u003cpre tabindex=\"0\"\u003e\u003ccode\u003edocker save -o [name] [name]:[tag]\n\u003c/code\u003e\u003c/pre\u003e\u003cpre tabindex=\"0\"\u003e\u003ccode\u003ersync -P [name] [target]:[location]\n\u003c/code\u003e\u003c/pre\u003e\u003cpre tabindex=\"0\"\u003e\u003ccode\u003edocker load -i [name]\n\u003c/code\u003e\u003c/pre\u003e","title":"Building Docker Containers"},{"content":"References Docker\nSetup: Debian Based # Add Docker\u0026#39;s official GPG key: sudo apt-get update sudo apt-get install ca-certificates curl sudo install -m 0755 -d /etc/apt/keyrings sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc sudo chmod a+r /etc/apt/keyrings/docker.asc # Add the repository to Apt sources: echo \\ \u0026#34;deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \\ $(. 
/etc/os-release \u0026amp;\u0026amp; echo \u0026#34;$VERSION_CODENAME\u0026#34;) stable\u0026#34; | \\ sudo tee /etc/apt/sources.list.d/docker.list \u0026gt; /dev/null sudo apt-get update sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin Setup: Arch Based # Pamac (manjaro) sudo pamac install docker docker-compose # Arch sudo pacman -Syu docker docker-compose Enable the docker service\nsudo systemctl enable docker --now Test sudo docker run hello-world User Setup sudo usermod -aG docker $USER ","permalink":"https://new.cloud.nobodyhome.dev/posts/install-docker/","summary":"\u003ch4 id=\"references\"\u003eReferences\u003c/h4\u003e\n\u003cp\u003e\u003ca href=\"https://docs.docker.com/engine/install/ubuntu/\"\u003eDocker\u003c/a\u003e\u003c/p\u003e\n\u003ch4 id=\"setup-debian-based\"\u003eSetup: Debian Based\u003c/h4\u003e\n\u003cpre tabindex=\"0\"\u003e\u003ccode\u003e# Add Docker\u0026#39;s official GPG key:\nsudo apt-get update\nsudo apt-get install ca-certificates curl\nsudo install -m 0755 -d /etc/apt/keyrings\nsudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc\nsudo chmod a+r /etc/apt/keyrings/docker.asc\n\n# Add the repository to Apt sources:\necho \\\n  \u0026#34;deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \\\n  $(. 
/etc/os-release \u0026amp;\u0026amp; echo \u0026#34;$VERSION_CODENAME\u0026#34;) stable\u0026#34; | \\\n  sudo tee /etc/apt/sources.list.d/docker.list \u0026gt; /dev/null\nsudo apt-get update\n\u003c/code\u003e\u003c/pre\u003e\u003cpre tabindex=\"0\"\u003e\u003ccode\u003esudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin\n\u003c/code\u003e\u003c/pre\u003e\u003ch4 id=\"setup-arch-based\"\u003eSetup: Arch Based\u003c/h4\u003e\n\u003cpre tabindex=\"0\"\u003e\u003ccode\u003e# Pamac (manjaro)\nsudo pamac install docker docker-compose\n\u003c/code\u003e\u003c/pre\u003e\u003cpre tabindex=\"0\"\u003e\u003ccode\u003e# Arch\nsudo pacman -Syu docker docker-compose\n\u003c/code\u003e\u003c/pre\u003e\u003cp\u003eEnable the docker service\u003c/p\u003e","title":"Install Docker"},{"content":"References Kasm Documentation Kasm System Requirements Kasm GPU Install\nPrerequisites Install Docker Tutorial Swap Space Installation NOTE: check for the latest version cd /tmp curl -O https://kasm-static-content.s3.amazonaws.com/kasm_release_1.18.1.tar.gz tar -xf kasm_release_1.18.1.tar.gz sudo bash kasm_release/install.sh --accept-eula --swap-size 8192 GPU Setup The Nvidia container setup instructions, and standard GPU driver installation threw an error: nvidia runtime not found. The script on Kasm\u0026rsquo;s site worked. #!/bin/bash # Check for NVIDIA cards if ! 
lspci | grep -i nvidia \u0026gt; /dev/null; then echo \u0026#34;No NVIDIA GPU detected\u0026#34; exit 0 fi add-apt-repository -y ppa:graphics-drivers/ppa curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg \\ \u0026amp;\u0026amp; curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \\ sed \u0026#39;s#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g\u0026#39; | \\ sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list apt update apt install -y ubuntu-drivers-common # Run ubuntu-drivers and capture the output DRIVER_OUTPUT=$(ubuntu-drivers list 2\u0026gt;/dev/null) # Extract server driver versions using grep and regex # Pattern looks for nvidia-driver-XXX-server SERVER_VERSIONS=$(echo \u0026#34;$DRIVER_OUTPUT\u0026#34; | grep -o \u0026#39;nvidia-driver-[0-9]\\+-server\u0026#39; | grep -o \u0026#39;[0-9]\\+\u0026#39; | sort -n) # Check if any server versions were found if [ -z \u0026#34;$SERVER_VERSIONS\u0026#34; ]; then echo \u0026#34;Error: No NVIDIA server driver versions found.\u0026#34; \u0026gt;\u0026amp;2 exit 1 fi # Find the highest version number LATEST_VERSION=$(echo \u0026#34;$SERVER_VERSIONS\u0026#34; | tail -n 1) # Validate that the version is numeric if ! 
[[ \u0026#34;$LATEST_VERSION\u0026#34; =~ ^[0-9]+$ ]]; then echo \u0026#34;Error: Invalid version number: $LATEST_VERSION\u0026#34; \u0026gt;\u0026amp;2 exit 2 fi # Output only the version number echo \u0026#34;Latest version is: $LATEST_VERSION\u0026#34; ubuntu-drivers install \u0026#34;nvidia:$LATEST_VERSION-server\u0026#34; apt install -y \u0026#34;nvidia-utils-$LATEST_VERSION-server\u0026#34; # Install NVIDIA toolkit + configure for docker apt-get install -y nvidia-container-toolkit nvidia-ctk runtime configure --runtime=docker Egress Setup: NordVPN Get service credentials for the VPN: Available on the VPN dashboard Download desired OpenVPN configuration files: Available on the VPN dashboard On Kasm Administrator dashboard, select Egress (Infrastructure \u0026gt; Egress) Add the egress provider: Configure VPN type: Add egress gateways: On the Workspaces \u0026gt; Workspace page, select the workspace to allow it to use the VPN, click edit and add the egress provider on the Egress tab. 
On the Egress Credentials tab, add in the service credentials for the VPN ","permalink":"https://new.cloud.nobodyhome.dev/posts/kasm-workspace/","summary":"\u003ch4 id=\"references\"\u003eReferences\u003c/h4\u003e\n\u003cp\u003e\u003ca href=\"https://kasmweb.com/docs/latest/install/single_server_install.html\"\u003eKasm Documentation\u003c/a\u003e\n\u003ca href=\"https://kasmweb.com/docs/latest/install/system_requirements.html\"\u003eKasm System Requirements\u003c/a\u003e\n\u003ca href=\"https://kasmweb.com/docs/latest/how_to/gpu.html\"\u003eKasm GPU Install\u003c/a\u003e\u003c/p\u003e\n\u003ch4 id=\"prerequisites\"\u003ePrerequisites\u003c/h4\u003e\n\u003cul\u003e\n\u003cli\u003e\u003ca href=\"/posts/install-docker/\"\u003eInstall Docker Tutorial\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003eSwap Space\u003c/li\u003e\n\u003c/ul\u003e\n\u003ch4 id=\"installation\"\u003eInstallation\u003c/h4\u003e\n\u003cul\u003e\n\u003cli\u003eNOTE: check for the latest version\u003c/li\u003e\n\u003c/ul\u003e\n\u003cpre tabindex=\"0\"\u003e\u003ccode\u003ecd /tmp\ncurl -O https://kasm-static-content.s3.amazonaws.com/kasm_release_1.18.1.tar.gz\ntar -xf kasm_release_1.18.1.tar.gz\nsudo bash kasm_release/install.sh --accept-eula --swap-size 8192\n\u003c/code\u003e\u003c/pre\u003e\u003ch4 id=\"gpu-setup\"\u003eGPU Setup\u003c/h4\u003e\n\u003cul\u003e\n\u003cli\u003eThe Nvidia container setup instructions, and standard GPU driver installation threw an error: \u003ccode\u003envidia runtime not found\u003c/code\u003e. The script on Kasm\u0026rsquo;s site worked.\u003c/li\u003e\n\u003c/ul\u003e\n\u003cpre tabindex=\"0\"\u003e\u003ccode\u003e#!/bin/bash\n\n# Check for NVIDIA cards\nif ! 
lspci | grep -i nvidia \u0026gt; /dev/null; then\n    echo \u0026#34;No NVIDIA GPU detected\u0026#34;\n    exit 0\nfi\n\nadd-apt-repository -y ppa:graphics-drivers/ppa\n\ncurl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg \\\n  \u0026amp;\u0026amp; curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \\\n    sed \u0026#39;s#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g\u0026#39; | \\\n    sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list\n\napt update\napt install -y ubuntu-drivers-common\n\n# Run ubuntu-drivers and capture the output\nDRIVER_OUTPUT=$(ubuntu-drivers list 2\u0026gt;/dev/null)\n# Extract server driver versions using grep and regex\n# Pattern looks for nvidia-driver-XXX-server\nSERVER_VERSIONS=$(echo \u0026#34;$DRIVER_OUTPUT\u0026#34; | grep -o \u0026#39;nvidia-driver-[0-9]\\+-server\u0026#39; | grep -o \u0026#39;[0-9]\\+\u0026#39; | sort -n)\n# Check if any server versions were found\nif [ -z \u0026#34;$SERVER_VERSIONS\u0026#34; ]; then\n    echo \u0026#34;Error: No NVIDIA server driver versions found.\u0026#34; \u0026gt;\u0026amp;2\n    exit 1\nfi\n# Find the highest version number\nLATEST_VERSION=$(echo \u0026#34;$SERVER_VERSIONS\u0026#34; | tail -n 1)\n# Validate that the version is numeric\nif ! 
[[ \u0026#34;$LATEST_VERSION\u0026#34; =~ ^[0-9]+$ ]]; then\n    echo \u0026#34;Error: Invalid version number: $LATEST_VERSION\u0026#34; \u0026gt;\u0026amp;2\n    exit 2\nfi\n# Output only the version number\necho \u0026#34;Latest version is: $LATEST_VERSION\u0026#34;\nubuntu-drivers install \u0026#34;nvidia:$LATEST_VERSION-server\u0026#34;\napt install -y \u0026#34;nvidia-utils-$LATEST_VERSION-server\u0026#34;\n# Install NVIDIA toolkit + configure for docker\napt-get install -y nvidia-container-toolkit\nnvidia-ctk runtime configure --runtime=docker\n\u003c/code\u003e\u003c/pre\u003e\u003ch4 id=\"egress-setup-nordvpn\"\u003eEgress Setup: NordVPN\u003c/h4\u003e\n\u003cul\u003e\n\u003cli\u003eGet service credentials for the VPN: \u003ca href=\"https://my.nordaccount.com/dashboard/nordvpn/manual-configuration/service-credentials/\"\u003eAvailable on the VPN dashboard\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003eDownload desired OpenVPN configuration files: \u003ca href=\"https://my.nordaccount.com/dashboard/nordvpn/manual-configuration/service-credentials/\"\u003eAvailable on the VPN dashboard\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003eOn Kasm Administrator dashboard, select Egress (Infrastructure \u0026gt; Egress)\n\u003cul\u003e\n\u003cli\u003eAdd the egress provider:\n\u003cimg alt=\"egress\" loading=\"lazy\" src=\"/assets/kasm/egress_provider.png\"\u003e\u003c/li\u003e\n\u003cli\u003eConfigure VPN type:\n\u003cimg alt=\"provider\" loading=\"lazy\" src=\"/assets/kasm/add_provider.png\"\u003e\u003c/li\u003e\n\u003cli\u003eAdd egress gateways:\n\u003cimg alt=\"add route\" loading=\"lazy\" src=\"/assets/kasm/add_egress.png\"\u003e\n\u003cimg alt=\"add egress\" loading=\"lazy\" src=\"/assets/kasm/egress_setup.png\"\u003e\u003c/li\u003e\n\u003c/ul\u003e\n\u003c/li\u003e\n\u003cli\u003eOn the Workspaces \u0026gt; Workspace page, select the workspace to allow it to use the VPN, click \u003ccode\u003eedit\u003c/code\u003e and add the egress provider on the 
\u003ccode\u003eEgress\u003c/code\u003e tab.\u003c/li\u003e\n\u003cli\u003eOn the \u003ccode\u003eEgress Credentials\u003c/code\u003e tab, add in the service credentials for the VPN\u003c/li\u003e\n\u003c/ul\u003e","title":"Kasm Workspaces"},{"content":"Resources askubuntu.com redhat.com Instructions Identify partitions with the lsblk command Determine the volume group you want to extend using the vgs and vgdisplay commands Determine the logical volumes using the lvs command Determine the mapping of the logical volume (/dev/[VG-NAME]/[lv name]) Extend the partition (cfdisk) Extend the physical volume: pvresize /dev/sd[your partition] Extend the logical volume: lvextend -r -l +100%FREE /dev/[VG-NAME]/[LV-NAME] (Possibly) Extend the file system (varies by file system type): XFS: xfs_growfs [mount point] Extend a Proxmox VM Disk In the virtual machine hardware tab, select the disk you wish to resize and click \u0026ldquo;Disk Action\u0026rdquo; then \u0026ldquo;Resize\u0026rdquo; ","permalink":"https://new.cloud.nobodyhome.dev/posts/extend-lvm/","summary":"\u003ch2 id=\"resources\"\u003eResources\u003c/h2\u003e\n\u003cul\u003e\n\u003cli\u003e\u003ca href=\"https://askubuntu.com/questions/1489128/need-help-extending-an-lvm-volume\"\u003easkubuntu.com\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://www.redhat.com/en/blog/resize-lvm-simple\"\u003eredhat.com\u003c/a\u003e\u003c/li\u003e\n\u003c/ul\u003e\n\u003ch2 id=\"instructions\"\u003eInstructions\u003c/h2\u003e\n\u003col\u003e\n\u003cli\u003eIdentify partitions with the \u003ccode\u003elsblk\u003c/code\u003e command\u003c/li\u003e\n\u003cli\u003eDetermine the volume group you want to extend using the \u003ccode\u003evgs\u003c/code\u003e and \u003ccode\u003evgdisplay\u003c/code\u003e commands\u003c/li\u003e\n\u003cli\u003eDetermine the logical volumes using the \u003ccode\u003elvs\u003c/code\u003e command\u003c/li\u003e\n\u003cli\u003eDetermine the mapping of the logical volume 
(/dev/[VG-NAME]/[lv name])\u003c/li\u003e\n\u003cli\u003eExtend the partition (\u003ccode\u003ecfdisk\u003c/code\u003e)\u003c/li\u003e\n\u003cli\u003eExtend the physical volume:\u003c/li\u003e\n\u003c/ol\u003e\n\u003cpre tabindex=\"0\"\u003e\u003ccode\u003epvresize /dev/sd[your partition]\n\u003c/code\u003e\u003c/pre\u003e\u003col\u003e\n\u003cli\u003eExtend the logical volume:\u003c/li\u003e\n\u003c/ol\u003e\n\u003cpre tabindex=\"0\"\u003e\u003ccode\u003elvextend -r -l +100%FREE /dev/[VG-NAME]/[LV-NAME]\n\u003c/code\u003e\u003c/pre\u003e\u003col start=\"6\"\u003e\n\u003cli\u003e(Possibly) Extend the file system (varies by file system type):\u003c/li\u003e\n\u003c/ol\u003e\n\u003cul\u003e\n\u003cli\u003eXFS:\u003c/li\u003e\n\u003c/ul\u003e\n\u003cpre tabindex=\"0\"\u003e\u003ccode\u003exfs_growfs [mount point]\n\u003c/code\u003e\u003c/pre\u003e\u003ch2 id=\"extend-a-proxmox-vm-disk\"\u003eExtend a Proxmox VM Disk\u003c/h2\u003e\n\u003cul\u003e\n\u003cli\u003eIn the virtual machine hardware tab, select the disk you wish to resize and click \u0026ldquo;Disk Action\u0026rdquo; then \u0026ldquo;Resize\u0026rdquo;\u003c/li\u003e\n\u003c/ul\u003e","title":"Extend LVM"},{"content":"Description Portainer is a web-based Docker management interface that allows users to easily manage their Docker containers, networks, and volumes. 
It provides a simple and intuitive way to view and interact with your Docker environment.\nInstallation Install Docker Create the Portainer server database: docker volume create portainer_data Download and install Portainer-CE docker run -d -p 8000:8000 -p 9443:9443 --name portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer-ce:latest Things I\u0026rsquo;ve Learned To update the container\u0026rsquo;s name in the yaml file, use the container_name: variable If a stack is unable to be deleted, it\u0026rsquo;s likely because the /var/lib/docker/volumes/portainer_data/_data/compose file is missing. You\u0026rsquo;ll have to recreate that numbered file and a docker-compose.yml in the directory in order to delete the stack. After setup, remove the 8000 port bind docker run -d -p 9443:9443 --name portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer-ce:latest See also: Set up automatic updates with [[Watchtower]] or [[Shepherd]]. References Portainer-CE Container Names\n","permalink":"https://new.cloud.nobodyhome.dev/posts/portainer/","summary":"\u003ch3 id=\"description\"\u003eDescription\u003c/h3\u003e\n\u003cp\u003ePortainer is a web-based Docker management interface that allows users to easily manage their Docker containers, networks, and volumes. 
It provides a simple and intuitive way to view and interact with your Docker environment.\u003c/p\u003e\n\u003ch3 id=\"installation\"\u003eInstallation\u003c/h3\u003e\n\u003chr\u003e\n\u003col\u003e\n\u003cli\u003e\u003ca href=\"/posts/install-docker/\"\u003eInstall Docker\u003c/a\u003e\u003c/li\u003e\n\u003c/ol\u003e\n\u003chr\u003e\n\u003col start=\"2\"\u003e\n\u003cli\u003eCreate the Portainer server database:\u003c/li\u003e\n\u003c/ol\u003e\n\u003cpre tabindex=\"0\"\u003e\u003ccode\u003edocker volume create portainer_data\n\u003c/code\u003e\u003c/pre\u003e\u003chr\u003e\n\u003col start=\"3\"\u003e\n\u003cli\u003eDownload and install Portainer-CE\u003c/li\u003e\n\u003c/ol\u003e\n\u003cpre tabindex=\"0\"\u003e\u003ccode\u003edocker run -d -p 8000:8000 -p 9443:9443 --name portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer-ce:latest\n\u003c/code\u003e\u003c/pre\u003e\u003chr\u003e\n\u003ch4 id=\"things-ive-learned\"\u003eThings I\u0026rsquo;ve Learned\u003c/h4\u003e\n\u003cul\u003e\n\u003cli\u003eTo update the container\u0026rsquo;s name in the yaml file, use the \u003ccode\u003econtainer_name:\u003c/code\u003e variable\u003c/li\u003e\n\u003cli\u003eIf a stack is unable to be deleted, it\u0026rsquo;s likely because the \u003ccode\u003e/var/lib/docker/volumes/portainer_data/_data/compose\u003c/code\u003e file is missing. 
You\u0026rsquo;ll have to recreate that numbered file and a docker-compose.yml in the directory in order to delete the stack.\u003c/li\u003e\n\u003cli\u003eAfter setup, remove the 8000 port bind\u003c/li\u003e\n\u003c/ul\u003e\n\u003cpre tabindex=\"0\"\u003e\u003ccode\u003edocker run -d -p 9443:9443 --name portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer-ce:latest\n\u003c/code\u003e\u003c/pre\u003e\u003ch4 id=\"see-also\"\u003eSee also:\u003c/h4\u003e\n\u003cul\u003e\n\u003cli\u003eSet up automatic updates with [[Watchtower]] or [[Shepherd]].\u003c/li\u003e\n\u003c/ul\u003e\n\u003ch4 id=\"references\"\u003eReferences\u003c/h4\u003e\n\u003cp\u003e\u003ca href=\"https://docs.portainer.io/start/install-ce/server/docker/linux\"\u003ePortainer-CE\u003c/a\u003e\n\u003ca href=\"https://forums.docker.com/t/custom-container-name-for-docker-compose/48089/2\"\u003eContainer Names\u003c/a\u003e\u003c/p\u003e","title":"Portainer"},{"content":"This guide is a quick copy/paste on how to update Fedora Linux.\nUpdate the Latest Packages sudo dnf upgrade --refresh Download the System Update sudo dnf system-upgrade download --releasever={LATEST RELEASE} Note: releasever can be changed easily, and incremented by 2\nReboot sudo dnf system-upgrade reboot (Optional) Further Updates sudo dnf install rpmconf \u0026amp;\u0026amp; sudo rpmconf -a \u0026amp;\u0026amp; sudo dnf install remove-retired-packages remove-retired-packages Clean and remove duplicate packages sudo dnf repoquery --duplicates \u0026amp;\u0026amp; sudo dnf remove --duplicates \u0026amp;\u0026amp; sudo dnf autoremove Clean gpg keys sudo dnf install clean-rpm-gpg-pubkey sudo clean-rpm-gpg-pubkey Clean symlinks Step 1:\nsudo dnf install symlinks \u0026amp;\u0026amp; sudo symlinks -r /usr | grep dangling Step 2:\nsudo symlinks -r -d /usr ","permalink":"https://new.cloud.nobodyhome.dev/posts/update-fedora/","summary":"\u003cp\u003eThis guide is a quick 
copy/paste on how to update Fedora Linux.\u003c/p\u003e\n\u003ch2 id=\"update-the-latest-packages\"\u003eUpdate the Latest Packages\u003c/h2\u003e\n\u003cpre tabindex=\"0\"\u003e\u003ccode\u003esudo dnf upgrade --refresh\n\u003c/code\u003e\u003c/pre\u003e\u003ch2 id=\"download-the-system-update\"\u003eDownload the System Update\u003c/h2\u003e\n\u003cpre tabindex=\"0\"\u003e\u003ccode\u003esudo dnf system-upgrade download --releasever={LATEST RELEASE}\n\u003c/code\u003e\u003c/pre\u003e\u003cp\u003eNote: releasever can be changed easily, and incremented by 2\u003c/p\u003e\n\u003ch2 id=\"reboot\"\u003eReboot\u003c/h2\u003e\n\u003cpre tabindex=\"0\"\u003e\u003ccode\u003esudo dnf system-upgrade reboot\n\u003c/code\u003e\u003c/pre\u003e\u003ch2 id=\"optional-further-updates\"\u003e(Optional) Further Updates\u003c/h2\u003e\n\u003cpre tabindex=\"0\"\u003e\u003ccode\u003esudo dnf install rpmconf \u0026amp;\u0026amp; sudo rpmconf -a \u0026amp;\u0026amp; sudo dnf install remove-retired-packages\nremove-retired-packages\n\u003c/code\u003e\u003c/pre\u003e\u003ch3 id=\"clean-and-remove-duplicate-packages\"\u003eClean and remove duplicate packages\u003c/h3\u003e\n\u003cpre tabindex=\"0\"\u003e\u003ccode\u003esudo dnf repoquery --duplicates \u0026amp;\u0026amp; sudo dnf remove --duplicates \u0026amp;\u0026amp; sudo dnf autoremove\n\u003c/code\u003e\u003c/pre\u003e\u003ch3 id=\"clean-gpg-keys\"\u003eClean gpg keys\u003c/h3\u003e\n\u003cpre tabindex=\"0\"\u003e\u003ccode\u003esudo dnf install clean-rpm-gpg-pubkey\nsudo clean-rpm-gpg-pubkey\n\u003c/code\u003e\u003c/pre\u003e\u003ch3 id=\"clean-symlinks\"\u003eClean symlinks\u003c/h3\u003e\n\u003cp\u003eStep 1:\u003c/p\u003e","title":"Update Fedora"},{"content":"Explanation: vrrp_instance: a virtual router state: master or backup priority: higher priority means that router gets chosen more often authentication: auth_type: can integrate with other ticket based authentication protocols auth_pass: IPv4 sub 8 character passwords, IPv6 
allows for longer passwords virtual_ipaddress: the shared IP ranges for the virtual router (can be more than one) Setup Examples Manager: vrrp_instance VI_1 { state MASTER interface eth0 virtual_router_id 51 priority 10 advert_int 1 authentication { auth_type AH auth_pass adguard } virtual_ipaddress { 10.133.7.11/24 } } vrrp_instance VI_2 { state MASTER interface eth0 virtual_router_id 52 priority 10 advert_int 1 authentication { auth_type PASS auth_pass adgaurdhome } virtual_ipaddress { fd48:fb0a:cb3a:b8d4::1234/64 } } Backup: vrrp_instance VI_1 { state BACKUP interface eth0 virtual_router_id 51 priority 1 advert_int 1 authentication { auth_type AH auth_pass adguard } virtual_ipaddress { 10.133.7.11/24 } } vrrp_instance VI_2 { state BACKUP interface eth0 virtual_router_id 52 priority 1 advert_int 1 authentication { auth_type PASS auth_pass adgaurdhome } virtual_ipaddress { fd48:fb0a:cb3a:b8d4::1234/64 } } Notes: Separate setup for IPv4 and IPv6 addresses Can have multiple setup for different interfaces References: redhat arch wiki keepalived documentation ","permalink":"https://new.cloud.nobodyhome.dev/posts/keepalived/","summary":"\u003ch4 id=\"explanation\"\u003eExplanation:\u003c/h4\u003e\n\u003cul\u003e\n\u003cli\u003evrrp_instance: a virtual router\u003c/li\u003e\n\u003cli\u003estate: master or backup\u003c/li\u003e\n\u003cli\u003epriority: higher priority means that router gets chosen more often\u003c/li\u003e\n\u003cli\u003eauthentication:\n\u003cul\u003e\n\u003cli\u003eauth_type: can integrate with other ticket based authentication protocols\u003c/li\u003e\n\u003cli\u003eauth_pass: IPv4 sub 8 character passwords, IPv6 allows for longer passwords\u003c/li\u003e\n\u003c/ul\u003e\n\u003c/li\u003e\n\u003cli\u003evirtual_ipaddress: the shared IP ranges for the virtual router (can be more than one)\u003c/li\u003e\n\u003c/ul\u003e\n\u003ch4 id=\"setup-examples\"\u003eSetup 
Examples\u003c/h4\u003e\n\u003cul\u003e\n\u003cli\u003eManager:\u003c/li\u003e\n\u003c/ul\u003e\n\u003cpre tabindex=\"0\"\u003e\u003ccode\u003evrrp_instance VI_1 {\n        state MASTER\n        interface eth0\n        virtual_router_id 51\n        priority 10\n        advert_int 1\n        authentication {\n                auth_type AH\n                auth_pass adguard\n        }\n        virtual_ipaddress {\n                10.133.7.11/24\n        }\n}\nvrrp_instance VI_2 {\n        state MASTER\n        interface eth0\n        virtual_router_id 52\n        priority 10\n        advert_int 1\n        authentication {\n                auth_type PASS\n                auth_pass adgaurdhome\n        }\n        virtual_ipaddress {\n                fd48:fb0a:cb3a:b8d4::1234/64\n        }\n}\n\u003c/code\u003e\u003c/pre\u003e\u003cul\u003e\n\u003cli\u003eBackup:\u003c/li\u003e\n\u003c/ul\u003e\n\u003cpre tabindex=\"0\"\u003e\u003ccode\u003evrrp_instance VI_1 {\n        state BACKUP\n        interface eth0\n        virtual_router_id 51\n        priority 1\n        advert_int 1\n        authentication {\n                auth_type AH\n                auth_pass adguard\n        }\n        virtual_ipaddress {\n                10.133.7.11/24\n        }\n}\nvrrp_instance VI_2 {\n        state BACKUP\n        interface eth0\n        virtual_router_id 52\n        priority 1\n        advert_int 1\n        authentication {\n                auth_type PASS\n                auth_pass adgaurdhome\n        }\n        virtual_ipaddress {\n                fd48:fb0a:cb3a:b8d4::1234/64\n        }\n}\n\u003c/code\u003e\u003c/pre\u003e\u003ch4 id=\"notes\"\u003eNotes:\u003c/h4\u003e\n\u003cul\u003e\n\u003cli\u003eSeparate setup for IPv4 and IPv6 addresses\u003c/li\u003e\n\u003cli\u003eCan have multiple setup for different interfaces\u003c/li\u003e\n\u003c/ul\u003e\n\u003ch4 id=\"references\"\u003eReferences:\u003c/h4\u003e\n\u003cul\u003e\n\u003cli\u003e\u003ca href=\"https://www.redhat.com/sysadmin/keepalived-basics\"\u003eredhat\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca 
href=\"https://wiki.archlinux.org/title/Keepalived\"\u003earch wiki\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://keepalived.readthedocs.io/en/latest/introduction.html\"\u003ekeepalived documentation\u003c/a\u003e\u003c/li\u003e\n\u003c/ul\u003e","title":"Keepalived"},{"content":"References: Shepherd Docker Compose Examples Shepherd GitHub Shepherd on hub.docker.com About Shepherd is a Docker swarm service for automatically updating your services whenever their base image is refreshed.\nVariables Default check time is every 5 minutes. Change this with the SLEEP_TIME variable. Control which services aren\u0026rsquo;t updated with the IGNORELIST_SERVICES variable. Ignored services should be in a space-separated list of service names. As an alternative to ignoring, use FILTER_SERVICES to specify which services you want updated. Notifications can be enabled through the apprise microservice and the APPRISE_SIDECAR_URL variable. Set the timezone with the TZ variable. Note: do not put the timezone in quotation marks. Clean up old images with IMAGE_AUTOCLEAN_LIMIT; its value sets how many old images to keep. 
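The FILTER_SERVICES value used in the compose example in this post matches on a service label. As a minimal sketch (the whoami service name and image are placeholders, not part of this post), a swarm service opts in like this; note the label sits under deploy.labels, because swarm service labels (what the filter sees) live there, not on the container:

```yaml
# Sketch only: a swarm service that the filter
# "label=shepherd.autodeploy" would match.
services:
  whoami:                          # placeholder service name
    image: traefik/whoami:latest   # placeholder image
    deploy:
      labels:                      # service-level labels, visible to service filters
        - shepherd.autodeploy=true
```

After docker stack deploy, docker service ls --filter label=shepherd.autodeploy should list the service, and Shepherd will then consider only matching services for updates.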
Setup: Docker Compose version: \u0026#34;3\u0026#34; services: app: image: containrrr/shepherd environment: APPRISE_SIDECAR_URL: notify:5000 TZ: Pacific/Honolulu IMAGE_AUTOCLEAN_LIMIT: 2 SLEEP_TIME: \u0026#39;360m\u0026#39; FILTER_SERVICES: \u0026#34;label=shepherd.autodeploy\u0026#34; VERBOSE: \u0026#39;true\u0026#39; volumes: - /var/run/docker.sock:/var/run/docker.sock networks: - notification deploy: placement: constraints: - node.role == manager notify: image: mazzolino/apprise-microservice:latest environment: NOTIFICATION_URLS: discord:[add your URL here] networks: - notification networks: notification: Docker Run docker service create --name shepherd --constraint \u0026#34;node.role==manager\u0026#34; --mount type=bind,source=/var/run/docker.sock,target=/var/run/docker.sock,ro containrrr/shepherd Notes: Notifications run through the apprise microservice, which is built on Apprise. The format for Discord notifications is: discord://webhook_id/webhook_token or discord://avatar@webhook_id/webhook_token. ","permalink":"https://new.cloud.nobodyhome.dev/posts/shepherd/","summary":"\u003ch4 id=\"references\"\u003eReferences:\u003c/h4\u003e\n\u003cul\u003e\n\u003cli\u003e\u003ca href=\"https://github.com/containrrr/shepherd/tree/master/examples\"\u003eShepherd Docker Compose Examples\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://github.com/containrrr/shepherd\"\u003eShepherd GitHub\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://hub.docker.com/r/containrrr/shepherd\"\u003eShepherd on hub.docker.com\u003c/a\u003e\u003c/li\u003e\n\u003c/ul\u003e\n\u003ch4 id=\"about\"\u003eAbout\u003c/h4\u003e\n\u003cp\u003eShepherd is a Docker swarm service for automatically updating your services whenever their base image is refreshed.\u003c/p\u003e\n\u003ch4 id=\"variables\"\u003eVariables\u003c/h4\u003e\n\u003cul\u003e\n\u003cli\u003eDefault check time is every 5 minutes. 
Change this with the \u003ccode\u003eSLEEP_TIME\u003c/code\u003e variable.\u003c/li\u003e\n\u003cli\u003eControl which services aren\u0026rsquo;t updated with the \u003ccode\u003eIGNORELIST_SERVICES\u003c/code\u003e variable. Ignored services should be in a space-separated list of service names.\u003c/li\u003e\n\u003cli\u003eAs an alternative to ignoring, use \u003ccode\u003eFILTER_SERVICES\u003c/code\u003e to specify which services you want updated.\u003c/li\u003e\n\u003cli\u003eNotifications can be enabled through the \u003ca href=\"https://github.com/djmaze/apprise-microservice\"\u003eapprise microservice\u003c/a\u003e and the \u003ccode\u003eAPPRISE_SIDECAR_URL\u003c/code\u003e variable.\u003c/li\u003e\n\u003cli\u003eSet the timezone with the \u003ccode\u003eTZ\u003c/code\u003e variable. Note: do not put the timezone in quotation marks.\u003c/li\u003e\n\u003cli\u003eClean up old images with \u003ccode\u003eIMAGE_AUTOCLEAN_LIMIT\u003c/code\u003e; its value sets how many old images to keep.\u003c/li\u003e\n\u003c/ul\u003e\n\u003ch4 id=\"setup\"\u003eSetup:\u003c/h4\u003e\n\u003cul\u003e\n\u003cli\u003eDocker Compose\u003c/li\u003e\n\u003c/ul\u003e\n\u003cpre tabindex=\"0\"\u003e\u003ccode\u003eversion: \u0026#34;3\u0026#34;\n\nservices:\n  app:\n    image: containrrr/shepherd\n    environment:\n      APPRISE_SIDECAR_URL: notify:5000\n      TZ: Pacific/Honolulu\n      IMAGE_AUTOCLEAN_LIMIT: 2\n      SLEEP_TIME: \u0026#39;360m\u0026#39;\n      FILTER_SERVICES: \u0026#34;label=shepherd.autodeploy\u0026#34;\n      VERBOSE: \u0026#39;true\u0026#39;\n    volumes:\n      - /var/run/docker.sock:/var/run/docker.sock\n    networks:\n      - notification\n    deploy:\n      placement:\n        constraints:\n          - node.role == manager\n\n  notify:\n    image: mazzolino/apprise-microservice:latest\n    environment:\n      NOTIFICATION_URLS: discord:[add your URL here]\n    networks:\n      - notification\n\nnetworks:\n  
notification:\n\u003c/code\u003e\u003c/pre\u003e\u003cul\u003e\n\u003cli\u003eDocker Run\u003c/li\u003e\n\u003c/ul\u003e\n\u003cpre tabindex=\"0\"\u003e\u003ccode\u003edocker service create --name shepherd --constraint \u0026#34;node.role==manager\u0026#34; --mount type=bind,source=/var/run/docker.sock,target=/var/run/docker.sock,ro containrrr/shepherd\n\u003c/code\u003e\u003c/pre\u003e\u003ch4 id=\"notes\"\u003eNotes:\u003c/h4\u003e\n\u003cul\u003e\n\u003cli\u003eNotifications run through the \u003ca href=\"https://github.com/djmaze/apprise-microservice/tree/master\"\u003eapprise microservice\u003c/a\u003e, which is built on \u003ca href=\"https://github.com/caronc/apprise\"\u003eApprise\u003c/a\u003e. The format for Discord notifications is: \u003ccode\u003ediscord://webhook_id/webhook_token\u003c/code\u003e or \u003ccode\u003ediscord://avatar@webhook_id/webhook_token\u003c/code\u003e.\u003c/li\u003e\n\u003c/ul\u003e","title":"Shepherd"},{"content":"References Watchtower Docs Watchtower Notifications Watchtower Configuration - smarthomebeginner Watchtower Docker Compose Examples All Arguments A Good Reddit Thread A Tutorial Setup Docker Compose: version: \u0026#34;3\u0026#34; services: watchtower: image: nickfedor/watchtower container_name: watchtower volumes: - /var/run/docker.sock:/var/run/docker.sock environment: # - WATCHTOWER_LABEL_ENABLE=true - WATCHTOWER_NOTIFICATIONS=shoutrrr - WATCHTOWER_NOTIFICATION_URL=discord:[add discord url] - WATCHTOWER_POLL_INTERVAL=21600 - WATCHTOWER_CLEANUP=true # labels: # - \u0026#34;com.centurylinklabs.watchtower.enable=true\u0026#34; command: homepage portainer hostname: watchtower restart: unless-stopped deploy: mode: global Docker Run: docker run -d --name watchtower --volume /var/run/docker.sock:/var/run/docker.sock containrrr/watchtower [NAMES OF THE CONTAINERS TO UPDATE] Notes Watchtower does not work with Docker swarms; for that use case, see Shepherd. 
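The commented-out WATCHTOWER_LABEL_ENABLE lines in the compose example above point at an opt-in mode; a minimal sketch of it (the app service name and nginx image are placeholders, not part of this post):

```yaml
# Sketch only: with WATCHTOWER_LABEL_ENABLE=true, Watchtower
# updates only containers carrying the enable label.
services:
  watchtower:
    image: containrrr/watchtower
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      - WATCHTOWER_LABEL_ENABLE=true
  app:                    # placeholder service
    image: nginx:latest   # placeholder image
    labels:
      - "com.centurylinklabs.watchtower.enable=true"
```

Containers without the label are left alone, which is handy when only a few services should auto-update.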
","permalink":"https://new.cloud.nobodyhome.dev/posts/watchtower/","summary":"\u003ch4 id=\"references\"\u003eReferences\u003c/h4\u003e\n\u003cul\u003e\n\u003cli\u003e\u003ca href=\"https://containrrr.dev/watchtower/\"\u003eWatchtower Docs\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://containrrr.dev/watchtower/notifications/\"\u003eWatchtower Notifications\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://www.smarthomebeginner.com/watchtower-docker-compose-2024/\"\u003eWatchtower Configuration - smarthomebeginner\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://github.com/containrrr/watchtower/blob/main/docker-compose.yml\"\u003eWatchtower Docker Compose Examples\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://containrrr.dev/watchtower/arguments/\"\u003eAll Arguments\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://www.reddit.com/r/selfhosted/comments/18kzbie/watchtower_notifications_via_shoutrrr_howto/\"\u003eA Good Reddit Thread\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://linuxiac.com/watchtower-automatically-update-docker-container-images/\"\u003eA Tutorial\u003c/a\u003e\u003c/li\u003e\n\u003c/ul\u003e\n\u003ch4 id=\"setup\"\u003eSetup\u003c/h4\u003e\n\u003cul\u003e\n\u003cli\u003eDocker Compose:\u003c/li\u003e\n\u003c/ul\u003e\n\u003cpre tabindex=\"0\"\u003e\u003ccode\u003eversion: \u0026#34;3\u0026#34;\nservices:\n  watchtower:\n    image: nickfedor/watchtower\n    container_name: watchtower\n    volumes:\n      - /var/run/docker.sock:/var/run/docker.sock\n    environment:\n#      - WATCHTOWER_LABEL_ENABLE=true \n      - WATCHTOWER_NOTIFICATIONS=shoutrrr\n      - WATCHTOWER_NOTIFICATION_URL=discord:[add discord url]\n      - WATCHTOWER_POLL_INTERVAL=21600\n      - WATCHTOWER_CLEANUP=true\n#    labels:\n#      - \u0026#34;com.centurylinklabs.watchtower.enable=true\u0026#34;\n    command: homepage portainer\n    hostname: watchtower\n    restart: 
unless-stopped\n    deploy: \n      mode: global\n\u003c/code\u003e\u003c/pre\u003e\u003cul\u003e\n\u003cli\u003eDocker Run:\u003c/li\u003e\n\u003c/ul\u003e\n\u003cpre tabindex=\"0\"\u003e\u003ccode\u003edocker run -d --name watchtower --volume /var/run/docker.sock:/var/run/docker.sock containrrr/watchtower [NAMES OF THE CONTAINERS TO UPDATE]\n\u003c/code\u003e\u003c/pre\u003e\u003ch4 id=\"notes\"\u003eNotes\u003c/h4\u003e\n\u003cul\u003e\n\u003cli\u003eWatchtower does not work with Docker swarms; for that use case, see \u003ca href=\"/posts/shepherd/\"\u003eShepherd\u003c/a\u003e.\u003c/li\u003e\n\u003c/ul\u003e","title":"Watchtower"},{"content":"\nLinks dash.cloudflare.com one.dash.cloudflare.com\nInstalling the service Ubuntu curl -L --output cloudflared.deb https://github.com/cloudflare/cloudflared/releases/latest/download/cloudflared-linux-amd64.deb \u0026amp;\u0026amp; sudo dpkg -i cloudflared.deb \u0026amp;\u0026amp; sudo cloudflared service install [TUNNEL KEY] Red Hat curl -L --output cloudflared.rpm https://github.com/cloudflare/cloudflared/releases/latest/download/cloudflared-linux-x86_64.rpm \u0026amp;\u0026amp; sudo yum localinstall -y cloudflared.rpm \u0026amp;\u0026amp; sudo cloudflared service install [TUNNEL KEY] Docker docker run cloudflare/cloudflared:latest tunnel --no-autoupdate run --token [TUNNEL KEY] Docker Compose version: \u0026#34;3.8\u0026#34; services: cloudflared: image: cloudflare/cloudflared:latest restart: unless-stopped command: tunnel run network_mode: host environment: - \u0026#34;TUNNEL_TOKEN=[TUNNEL KEY]\u0026#34; deploy: mode: global placement: constraints: [node.platform.os == linux] Cloudflare as a Docker Sidecar Cloudflare can serve ports from other Docker containers without actually exposing the container ports on the host device. 
See the compose example below:\nversion: \u0026#34;3.8\u0026#34; services: cloudflared: image: cloudflare/cloudflared:latest container_name: cloudflare-tun restart: unless-stopped command: tunnel run networks: - cloudflared environment: - \u0026#34;TUNNEL_TOKEN=[TUNNEL KEY]\u0026#34; deploy: placement: constraints: [node.platform.os == linux] [some service]: image: [repo]/[image]:[tag] container_name: [container name] restart: unless-stopped networks: - cloudflared networks: cloudflared: driver: bridge In the Cloudflare dashboard, expose the hostname of the container through the tunnel: http://[container name]:[port]. Reminder: some services may be running over HTTPS, and this will require a slightly tweaked configuration in the Cloudflare dashboard.\nConfiguring SSH Access Use the following format in .ssh/config to allow hosts to be accessed over a Cloudflare tunnel.\nHost [HOSTNAME] ProxyCommand /usr/local/bin/cloudflared access ssh --hostname %h Access Cloudflare Docs\nApplication - adds a whitelist-based authentication to subdomains under the application; allows for tailored management of access to sites, as well as cookie expiration management: Overview, Policies, Authentication, Settings Access Groups - allows you to build tailored access groups of whitelisted identities Service Auths Tags Networks Tunnels Cloudflare Docs Cloudflare Tunnel provides you with a secure way to connect your resources to Cloudflare without a publicly routable IP address. With Tunnel, you do not send traffic to an external IP — instead, a lightweight daemon in your infrastructure (cloudflared) creates outbound-only connections to Cloudflare’s global network. Cloudflare Tunnel can connect HTTP web servers, SSH servers, remote desktops, and other protocols safely to Cloudflare. 
This way, your origins can serve traffic through Cloudflare without being vulnerable to attacks that bypass Cloudflare.\nRoutes Cloudflare Docs With Cloudflare Zero Trust, you can connect private networks and the services running in those networks to Cloudflare’s global network. This involves installing a connector on the private network, and then setting up routes which define the IP addresses available in that environment. Unlike public hostname routes, private network routes can expose both HTTP and non-HTTP resources.\n","permalink":"https://new.cloud.nobodyhome.dev/posts/cloudflare-service/","summary":"\u003cp\u003e\u003cimg alt=\"250\" loading=\"lazy\" src=\"https://upload.wikimedia.org/wikipedia/commons/4/4b/Cloudflare_Logo.svg\"\u003e\u003c/p\u003e\n\u003ch3 id=\"links\"\u003eLinks\u003c/h3\u003e\n\u003cp\u003e\u003ca href=\"https://dash.cloudflare.com/\"\u003edash.cloudflare.com\u003c/a\u003e\n\u003ca href=\"https://one.dash.cloudflare.com\"\u003eone.dash.cloudflare.com\u003c/a\u003e\u003c/p\u003e\n\u003ch3 id=\"installing-the-service\"\u003eInstalling the service\u003c/h3\u003e\n\u003ch5 id=\"ubuntu\"\u003eUbuntu\u003c/h5\u003e\n\u003cpre tabindex=\"0\"\u003e\u003ccode\u003ecurl -L --output cloudflared.deb https://github.com/cloudflare/cloudflared/releases/latest/download/cloudflared-linux-amd64.deb \u0026amp;\u0026amp; \n\nsudo dpkg -i cloudflared.deb \u0026amp;\u0026amp; \n\nsudo cloudflared service install [TUNNEL KEY]\n\u003c/code\u003e\u003c/pre\u003e\u003ch5 id=\"red-hat\"\u003eRed Hat\u003c/h5\u003e\n\u003cpre tabindex=\"0\"\u003e\u003ccode\u003ecurl -L --output cloudflared.rpm https://github.com/cloudflare/cloudflared/releases/latest/download/cloudflared-linux-x86_64.rpm \u0026amp;\u0026amp; \n\nsudo yum localinstall -y cloudflared.rpm \u0026amp;\u0026amp; \n\nsudo cloudflared service install [TUNNEL KEY]\n\u003c/code\u003e\u003c/pre\u003e\u003ch5 id=\"docker\"\u003eDocker\u003c/h5\u003e\n\u003cpre tabindex=\"0\"\u003e\u003ccode\u003edocker run 
cloudflare/cloudflared:latest tunnel --no-autoupdate run --token [TUNNEL KEY]\n\u003c/code\u003e\u003c/pre\u003e\u003ch5 id=\"docker-compose\"\u003eDocker Compose\u003c/h5\u003e\n\u003cpre tabindex=\"0\"\u003e\u003ccode\u003eversion: \u0026#34;3.8\u0026#34;\n\nservices:\n  cloudflared:\n    image: cloudflare/cloudflared:latest\n    restart: unless-stopped\n    command: tunnel run\n    network_mode: host\n    environment:\n      - \u0026#34;TUNNEL_TOKEN=[TUNNEL KEY]\u0026#34;\n    deploy:\n      mode: global\n      placement:\n        constraints: [node.platform.os == linux]\n\u003c/code\u003e\u003c/pre\u003e\u003ch4 id=\"cloudflare-as-a-docker-sidecar\"\u003eCloudflare as a Docker Sidecar\u003c/h4\u003e\n\u003cp\u003eCloudflare can serve ports from other Docker containers without actually exposing the container ports on the host device. See the compose example below:\u003c/p\u003e","title":"Cloudflare Tunnel"},{"content":"\nReferences Ollama.com\nInstallation curl -fsSL https://ollama.com/install.sh | sh Useful Commands sudo usermod -aG ollama $USER ollama pull llama3 llama2-uncensored codegemma gemma dolphin-mistral Service Configuration [Unit] Description=Ollama Service After=network-online.target [Service] ExecStart=/usr/local/bin/ollama serve User=ollama Group=ollama Restart=always RestartSec=3 Environment=\u0026#34;PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin\u0026#34; Environment=\u0026#34;OLLAMA_HOST=0.0.0.0\u0026#34; [Install] WantedBy=default.target Useful Plugins Obsidian: local gpt Openweb-UI Misc Information Service runs on port 11434 By default the service only listens on localhost ","permalink":"https://new.cloud.nobodyhome.dev/posts/ollama-service/","summary":"\u003cp\u003e\u003cimg alt=\"llama|75\" loading=\"lazy\" src=\"https://ollama.com/public/ollama.png\"\u003e\u003c/p\u003e\n\u003ch3 id=\"references\"\u003eReferences\u003c/h3\u003e\n\u003cp\u003e\u003ca 
href=\"https://ollama.com/download\"\u003eOllama.com\u003c/a\u003e\u003c/p\u003e\n\u003ch3 id=\"installation\"\u003eInstallation\u003c/h3\u003e\n\u003cpre tabindex=\"0\"\u003e\u003ccode\u003ecurl -fsSL https://ollama.com/install.sh | sh\n\u003c/code\u003e\u003c/pre\u003e\u003ch3 id=\"useful-commands\"\u003eUseful Commands\u003c/h3\u003e\n\u003cpre tabindex=\"0\"\u003e\u003ccode\u003esudo usermod -aG ollama $USER \n\u003c/code\u003e\u003c/pre\u003e\u003cpre tabindex=\"0\"\u003e\u003ccode\u003eollama pull llama3 llama2-uncensored codegemma gemma dolphin-mistral\n\u003c/code\u003e\u003c/pre\u003e\u003ch3 id=\"service-configuration\"\u003eService Configuration\u003c/h3\u003e\n\u003cpre tabindex=\"0\"\u003e\u003ccode\u003e[Unit]\nDescription=Ollama Service\nAfter=network-online.target\n\n[Service]\nExecStart=/usr/local/bin/ollama serve\nUser=ollama\nGroup=ollama\nRestart=always\nRestartSec=3\nEnvironment=\u0026#34;PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin\u0026#34;\nEnvironment=\u0026#34;OLLAMA_HOST=0.0.0.0\u0026#34;\n\n[Install]\nWantedBy=default.target\n\u003c/code\u003e\u003c/pre\u003e\u003ch3 id=\"useful-plugins\"\u003eUseful Plugins\u003c/h3\u003e\n\u003cul\u003e\n\u003cli\u003eObsidian: local gpt\u003c/li\u003e\n\u003cli\u003eOpenweb-UI\u003c/li\u003e\n\u003c/ul\u003e\n\u003ch3 id=\"misc-information\"\u003eMisc Information\u003c/h3\u003e\n\u003cul\u003e\n\u003cli\u003eService runs on port 11434\u003c/li\u003e\n\u003cli\u003eBy default the service only listens on localhost\u003c/li\u003e\n\u003c/ul\u003e","title":"Ollama Service"},{"content":"References: Open WebUI Open WebUI Troubleshooting Searxng Integration This is my error\u0026hellip;\nSetup Main docker run -d -p 8080:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main Latest docker run -d -p 8080:8080 --add-host=host.docker.internal:host-gateway -v 
open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:latest Docker Compose Yaml version: \u0026#34;3\u0026#34; services: open-webui: image: ghcr.io/open-webui/open-webui:latest container_name: open-webui volumes: - /home/mechanicus/open-webui:/app/backend/data restart: unless-stopped ports: - \u0026#34;8080:8080\u0026#34; extra_hosts: - \u0026#34;host.docker.internal:host-gateway\u0026#34; Updates Pull the new image (make sure you pick main or latest based on your install) docker pull ghcr.io/open-webui/open-webui:latest Remove the old container docker rm --force open-webui Start the new container by rerunning the setup command docker run -d -p 8080:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:latest (Optional) Let [[Watchtower]] Do it docker run -d --name watchtower --volume /var/run/docker.sock:/var/run/docker.sock containrrr/watchtower open-webui Troubleshooting Resetting the admin password on a local instance - generate a new password hash htpasswd -bnBC 10 \u0026#34;\u0026#34; your-new-password | tr -d \u0026#39;:\\n\u0026#39; Change the password using a new Docker container - replace HASH with the new password hash you just generated. docker run --rm -v open-webui:/data alpine/socat EXEC:\u0026#34;bash -c \u0026#39;apk add sqlite \u0026amp;\u0026amp; echo UPDATE auth SET password=\u0026#39;\\\u0026#39;\u0026#39;HASH\u0026#39;\\\u0026#39;\u0026#39; WHERE email=\u0026#39;\\\u0026#39;\u0026#39;admin@example.com\u0026#39;\\\u0026#39;\u0026#39;; | sqlite3 /data/webui.db\u0026#39;\u0026#34;, STDIO Data needs to be directly in the base folder for a mapped volume (Docker volumes use _data within a volume folder). 
[[Searxng]] needs the following changes to searxng/settings.yml: search: safe_search: 0 autocomplete: \u0026#34;\u0026#34; default_lang: \u0026#34;\u0026#34; formats: - html - json ","permalink":"https://new.cloud.nobodyhome.dev/posts/openweb-ui/","summary":"\u003ch3 id=\"references\"\u003eReferences:\u003c/h3\u003e\n\u003cp\u003e\u003ca href=\"https://docs.openwebui.com/\"\u003eOpen WebUI\u003c/a\u003e\n\u003ca href=\"https://docs.openwebui.com/troubleshooting/\"\u003eOpen WebUI Troubleshooting\u003c/a\u003e\n\u003ca href=\"https://docs.openwebui.com/tutorial/web_search/\"\u003eSearxng Integration\u003c/a\u003e\n\u003ca href=\"https://github.com/open-webui/open-webui/issues/2824\"\u003eThis is my error\u0026hellip;\u003c/a\u003e\u003c/p\u003e\n\u003ch4 id=\"setup\"\u003eSetup\u003c/h4\u003e\n\u003cul\u003e\n\u003cli\u003eMain\u003c/li\u003e\n\u003c/ul\u003e\n\u003cpre tabindex=\"0\"\u003e\u003ccode\u003edocker run -d -p 8080:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main\n\u003c/code\u003e\u003c/pre\u003e\u003cul\u003e\n\u003cli\u003eLatest\u003c/li\u003e\n\u003c/ul\u003e\n\u003cpre tabindex=\"0\"\u003e\u003ccode\u003edocker run -d -p 8080:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:latest\n\u003c/code\u003e\u003c/pre\u003e\u003cul\u003e\n\u003cli\u003eDocker Compose Yaml\u003c/li\u003e\n\u003c/ul\u003e\n\u003cpre tabindex=\"0\"\u003e\u003ccode\u003eversion: \u0026#34;3\u0026#34;\nservices:\n  open-webui:\n    image: ghcr.io/open-webui/open-webui:latest\n    container_name: open-webui\n    volumes:\n      - /home/mechanicus/open-webui:/app/backend/data\n    restart: unless-stopped\n    ports:\n      - \u0026#34;8080:8080\u0026#34;\n    extra_hosts:\n      - \u0026#34;host.docker.internal:host-gateway\u0026#34;\n\u003c/code\u003e\u003c/pre\u003e\u003ch4 
id=\"updates\"\u003eUpdates\u003c/h4\u003e\n\u003col\u003e\n\u003cli\u003ePull the new image (make sure you pick main or latest based on your install)\u003c/li\u003e\n\u003c/ol\u003e\n\u003cpre tabindex=\"0\"\u003e\u003ccode\u003edocker pull ghcr.io/open-webui/open-webui:latest\n\u003c/code\u003e\u003c/pre\u003e\u003col start=\"2\"\u003e\n\u003cli\u003eRemove the old container\u003c/li\u003e\n\u003c/ol\u003e\n\u003cpre tabindex=\"0\"\u003e\u003ccode\u003edocker rm --force open-webui\n\u003c/code\u003e\u003c/pre\u003e\u003col start=\"3\"\u003e\n\u003cli\u003eStart the new container by rerunning the setup command\u003c/li\u003e\n\u003c/ol\u003e\n\u003cpre tabindex=\"0\"\u003e\u003ccode\u003edocker run -d -p 8080:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:latest\n\u003c/code\u003e\u003c/pre\u003e\u003col start=\"4\"\u003e\n\u003cli\u003e(Optional) Let [[Watchtower]] Do it\u003c/li\u003e\n\u003c/ol\u003e\n\u003cpre tabindex=\"0\"\u003e\u003ccode\u003edocker run -d --name watchtower --volume /var/run/docker.sock:/var/run/docker.sock containrrr/watchtower open-webui\n\u003c/code\u003e\u003c/pre\u003e\u003ch4 id=\"troubleshooting\"\u003eTroubleshooting\u003c/h4\u003e\n\u003col\u003e\n\u003cli\u003eResetting the admin password on a local instance - generate a new password hash\u003c/li\u003e\n\u003c/ol\u003e\n\u003cpre tabindex=\"0\"\u003e\u003ccode\u003ehtpasswd -bnBC 10 \u0026#34;\u0026#34; your-new-password | tr -d \u0026#39;:\\n\u0026#39;\n\u003c/code\u003e\u003c/pre\u003e\u003col start=\"2\"\u003e\n\u003cli\u003eChange the password using a new docker container - replace HASH with the new password hash you just generated.\u003c/li\u003e\n\u003c/ol\u003e\n\u003cpre tabindex=\"0\"\u003e\u003ccode\u003edocker run --rm -v open-webui:/data alpine/socat EXEC:\u0026#34;bash -c \u0026#39;apk add sqlite \u0026amp;\u0026amp; echo UPDATE auth SET 
password=\u0026#39;\\\u0026#39;\u0026#39;HASH\u0026#39;\\\u0026#39;\u0026#39; WHERE email=\u0026#39;\\\u0026#39;\u0026#39;admin@example.com\u0026#39;\\\u0026#39;\u0026#39;; | sqlite3 /data/webui.db\u0026#39;\u0026#34;, STDIO\n\u003c/code\u003e\u003c/pre\u003e\u003col start=\"3\"\u003e\n\u003cli\u003eData needs to be directly in the base folder for a mapped volume (Docker volumes use  \u003ccode\u003e_data\u003c/code\u003e within a volume folder).\u003c/li\u003e\n\u003cli\u003e[[Searxng]] needs the following changes to \u003ccode\u003esearxng/settings.yml\u003c/code\u003e:\u003c/li\u003e\n\u003c/ol\u003e\n\u003cpre tabindex=\"0\"\u003e\u003ccode\u003esearch:\n  safe_search: 0\n  autocomplete: \u0026#34;\u0026#34;\n  default_lang: \u0026#34;\u0026#34;\n  formats:\n    - html\n    - json\n\u003c/code\u003e\u003c/pre\u003e","title":"Openweb-UI"},{"content":"Description PCSCD serves as middleware to access a smart card using PC/SC. Install this program to let applications communicate with smart card readers.\nInstallation sudo apt install pcscd -y Enable Socket sudo systemctl enable --now pcscd.socket Make Sure It\u0026rsquo;s Working systemctl status pcscd.service ","permalink":"https://new.cloud.nobodyhome.dev/posts/pcscd/","summary":"\u003ch4 id=\"description\"\u003eDescription\u003c/h4\u003e\n\u003cp\u003ePCSCD serves as middleware to access a smart card using PC/SC. Install this program to let applications communicate with smart card readers.\u003c/p\u003e\n\u003ch4 id=\"installation\"\u003eInstallation\u003c/h4\u003e\n\u003cpre tabindex=\"0\"\u003e\u003ccode\u003esudo apt install pcscd -y\n\u003c/code\u003e\u003c/pre\u003e\u003ch4 id=\"enable-socket\"\u003eEnable Socket\u003c/h4\u003e\n\u003cpre tabindex=\"0\"\u003e\u003ccode\u003esudo systemctl enable --now pcscd.socket\n\u003c/code\u003e\u003c/pre\u003e\u003ch4 id=\"make-sure-its-working\"\u003eMake Sure It\u0026rsquo;s Working\u003c/h4\u003e\n\u003cpre tabindex=\"0\"\u003e\u003ccode\u003esystemctl status pcscd.service\n\u003c/code\u003e\u003c/pre\u003e","title":"Smart Cards on Linux"}]