<?xml version="1.0" encoding="UTF-8"?><rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>LinuxPizza</title>
    <link>https://blogs.linux.pizza/</link>
    <description>Personal notes and occasional posts - 100% human, 0% AI generated</description>
    <pubDate>Tue, 14 Apr 2026 08:40:09 +0000</pubDate>
    <item>
      <title>How to issue a 7-day certificate with Let's Encrypt and Certbot</title>
      <link>https://blogs.linux.pizza/how-to-issue-7-days-certificate-with-lets-encrypt-and-certbot</link>
      <description>&lt;![CDATA[It is actually pretty simple, for example with NGINX:&#xA;certbot --nginx --required-profile shortlived&#xA;&#xA;As you can see, use the option --required-profile shortlived. It can also be used with DNS-validation, the Apache plugin and so on.&#xA;Example, wildcard cert with the Bunny.Net plugin with ECC-certificates:&#xA;certbot certonly --key-type ecdsa --required-profile shortlived --authenticator dns-bunny --dns-bunny-credentials /var/lib/private/bunny.ini -d *.linux.pizza -d linux.pizza&#xA;&#xA;Have fun!&#xA;&#xA;#linux #certbot #letsencrypt]]&gt;</description>
      <content:encoded><![CDATA[<p>It is actually pretty simple, for example with NGINX:</p>

<pre><code>certbot --nginx --required-profile shortlived
</code></pre>

<p>As you can see, the key option is <code>--required-profile shortlived</code>.
It can also be combined with DNS validation, the Apache plugin and so on.
For example, a wildcard cert via the Bunny.Net plugin with ECC certificates:</p>

<pre><code>certbot certonly --key-type ecdsa --required-profile shortlived --authenticator dns-bunny --dns-bunny-credentials /var/lib/private/bunny.ini -d *.linux.pizza -d linux.pizza
</code></pre>
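
<p>Since these certs only live for a week, it is worth keeping an eye on how close to expiry they are. A small sketch using <code>openssl x509 -checkend</code>, demonstrated on a throwaway self-signed 7-day cert (paths and subject are made up for the demo):</p>

<pre><code># Create a throwaway 7-day self-signed cert to demo against
openssl req -x509 -newkey rsa:2048 -keyout /tmp/demo.key -out /tmp/demo.crt -days 7 -nodes -subj &#34;/CN=demo&#34; 2&gt;/dev/null

# -checkend N exits 0 if the cert is still valid N seconds from now
openssl x509 -in /tmp/demo.crt -noout -checkend 518400 &amp;&amp; echo &#34;more than 6 days left&#34;    # 518400 s = 6 days
openssl x509 -in /tmp/demo.crt -noout -checkend 691200 || echo &#34;expires within 8 days&#34;    # 691200 s = 8 days
</code></pre>

<p>The same check against the real cert under <code>/etc/letsencrypt/live/</code> makes a handy monitoring one-liner.</p>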

<p>Have fun!</p>

<p><a href="https://blogs.linux.pizza/tag:linux" class="hashtag"><span>#</span><span class="p-category">linux</span></a> <a href="https://blogs.linux.pizza/tag:certbot" class="hashtag"><span>#</span><span class="p-category">certbot</span></a> <a href="https://blogs.linux.pizza/tag:letsencrypt" class="hashtag"><span>#</span><span class="p-category">letsencrypt</span></a></p>
]]></content:encoded>
      <guid>https://blogs.linux.pizza/how-to-issue-7-days-certificate-with-lets-encrypt-and-certbot</guid>
      <pubDate>Tue, 03 Feb 2026 07:30:55 +0000</pubDate>
    </item>
    <item>
      <title>Secure your API with Client-certificate authentication in NGINX</title>
      <link>https://blogs.linux.pizza/secure-your-api-with-client-certificate-authenatication-in-nginx</link>
      <description>&lt;![CDATA[After a few hours trying to make it work with my current CA, where the Root is stored offline, AIA, OCSP, CRL and all that stuff is done by the book - I gave up.&#xA;Somehow, the Open Source variant of Nginx does not really like my OCSP setup, no idea why and I have no idea how to troubleshoot that.&#xA;&#xA;Solution? KISS-principle!&#xA;&#xA;I&#39;ll write this down, quick and dirty. But hopefully it helps someone.&#xA;&#xA;Lets start with create the private key for the CA that we will create:&#xA;&#xA;openssl genpkey -algorithm RSA -out CAROOT.key -aes256&#xA;With this command, we have created a private key with AES256. You will be prompted to give a password - write that down.&#xA;And the following command will create a certificate from the private key, valid for 10 years.&#xA;&#xA;openssl req -x509 -new -nodes -key CAROOT.key -sha256 -days 3650 -out CAROOT.crt&#xA;Fill in the information that the above command wants of you, like country-code, and so on.&#xA;After that, your CA is done. The crude, ugly and honestly boring CA. But it&#39;ll work for this usecase.&#xA;&#xA;Let&#39;s create the client-certificate!&#xA;&#xA;First, will start by creating the private.key, and the .csr:&#xA;openssl genpkey -algorithm RSA -out client-cert.key&#xA;openssl req -new -key client.key -out client-cert.csr&#xA;And again, fill out the information wanted by openssl that will populate the .csr. Make it looks pretty.&#xA;Ideally, the commands shall be run on the client only, so the private-key never leaves the client. 
The .csr is what the CA will need to sign and create a valid certificate.&#xA;&#xA;Bring the .csr to the CA, and sign it:&#xA;&#xA;openssl x509 -req -in client-cert.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out client-cert.crt -days 365 -sha256&#xA;This will give you a signed certificate for your client named &#34;client-cert.crt&#34; - you may bring that to the client-machine and install it.&#xA;&#xA;Firefox wants a .pfx:&#xA;&#xA;In order to import the certificate into Firefox, you&#39;ll need to convert it to p12/pfx format:&#xA;openssl pkcs12 -export -out client-cert.pfx -inkey client-cert.key -in client-cert.crt -certfile CAROOT.crt&#xA;Please note, that you&#39;ll need the CAROOT.crt file too that you created.&#xA;&#xA;Configure NGINX to do client-certificate authentication&#xA;&#xA;Navigate to the virtualhost you want to enable client-certificate authentication on, and add the following:&#xA;&#xA;    sslclientcertificate /etc/ssl/private/CAROOT.crt;&#xA;    sslverifyclient on;&#xA;    sslverifydepth 2;&#xA;Please note, that you have to place the CA_ROOT.crt file in &#xA;Restart NGINX and try to visit the site. You&#39;ll probably be asked for permission to use client-certificate authentication.&#xA;&#xA;#linux #openssl #nginx #pki&#xA;&#xA;]]&gt;</description>
      <content:encoded><![CDATA[<p>After a few hours of trying to make it work with my current CA – where the Root is stored offline and AIA, OCSP, CRL and all that stuff is done by the book – I gave up.
Somehow, the open-source variant of Nginx does not really like my OCSP setup; no idea why, and no idea how to troubleshoot it.</p>

<h4 id="solution-kiss-principle">Solution? KISS-principle!</h4>

<p>I&#39;ll write this down, quick and dirty. But hopefully it helps someone.</p>

<p>Let&#39;s start by creating the private key for the CA we are about to build:</p>

<pre><code>openssl genpkey -algorithm RSA -out CA_ROOT.key -aes256
</code></pre>

<p>With this command, we have created a private key encrypted with AES-256. You will be prompted for a passphrase – write that down.
The following command will then create a certificate from the private key, valid for 10 years:</p>

<pre><code>openssl req -x509 -new -nodes -key CA_ROOT.key -sha256 -days 3650 -out CA_ROOT.crt
</code></pre>

<p>Fill in the information that the above command asks of you, like country code and so on.
After that, your CA is done. A crude, ugly and honestly boring CA – but it&#39;ll work for this use case.</p>

<h4 id="let-s-create-the-client-certificate">Let&#39;s create the client-certificate!</h4>

<p>First, we&#39;ll create the private key and the CSR:</p>

<pre><code>openssl genpkey -algorithm RSA -out client-cert.key
openssl req -new -key client-cert.key -out client-cert.csr
</code></pre>

<p>And again, fill out the information openssl asks for – it will populate the CSR. Make it look pretty.
Ideally, these commands should be run on the client only, so the private key never leaves the client. The .csr is what the CA needs in order to sign and create a valid certificate.</p>

<p>Bring the .csr to the CA, and sign it:</p>

<pre><code>openssl x509 -req -in client-cert.csr -CA CA_ROOT.crt -CAkey CA_ROOT.key -CAcreateserial -out client-cert.crt -days 365 -sha256
</code></pre>

<p>This will give you a signed certificate for your client named “client-cert.crt” – you may bring that to the client-machine and install it.</p>
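
<p>For reference, here is the whole flow above condensed into a non-interactive script. The subject strings are placeholders, and the CA key is left unencrypted purely to keep the sketch short – in real use, stick to the encrypted key from the steps above:</p>

<pre><code># CA key and self-signed root (10 years); -subj instead of the interactive prompts
openssl genpkey -algorithm RSA -out CA_ROOT.key
openssl req -x509 -new -key CA_ROOT.key -sha256 -days 3650 -subj &#34;/C=SE/CN=Boring Test CA&#34; -out CA_ROOT.crt

# Client key and CSR
openssl genpkey -algorithm RSA -out client-cert.key
openssl req -new -key client-cert.key -subj &#34;/CN=client1&#34; -out client-cert.csr

# Sign the CSR with the CA, then confirm the chain
openssl x509 -req -in client-cert.csr -CA CA_ROOT.crt -CAkey CA_ROOT.key -CAcreateserial -out client-cert.crt -days 365 -sha256
openssl verify -CAfile CA_ROOT.crt client-cert.crt    # prints: client-cert.crt: OK
</code></pre>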

<h4 id="firefox-wants-a-pfx">Firefox wants a .pfx:</h4>

<p>In order to import the certificate into Firefox, you&#39;ll need to convert it to p12/pfx format:</p>

<pre><code>openssl pkcs12 -export -out client-cert.pfx -inkey client-cert.key -in client-cert.crt -certfile CA_ROOT.crt
</code></pre>

<p>Please note that you&#39;ll also need the CA_ROOT.crt file you created earlier.</p>

<h4 id="configure-nginx-to-do-client-certificate-authentication">Configure NGINX to do client-certificate authentication</h4>

<p>Navigate to the virtualhost you want to enable client-certificate authentication on, and add the following:</p>

<pre><code>    ssl_client_certificate /etc/ssl/private/CA_ROOT.crt;
    ssl_verify_client on;
    ssl_verify_depth 2;
</code></pre>

<p>Please note that you have to place the CA_ROOT.crt file in <code>/etc/ssl/private/</code>.</p>

<p>Restart NGINX and try to visit the site. You&#39;ll probably be asked for permission to use client-certificate authentication.</p>
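
<p>If only part of the site should require a client certificate, one variant – a sketch, where the <code>/api/</code> location is just an example – is to make verification optional and gate on the <code>$ssl_client_verify</code> variable:</p>

<pre><code>ssl_client_certificate /etc/ssl/private/CA_ROOT.crt;
ssl_verify_client optional;

location /api/ {
    if ($ssl_client_verify != SUCCESS) {
        return 403;
    }
}
</code></pre>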

<p><a href="https://blogs.linux.pizza/tag:linux" class="hashtag"><span>#</span><span class="p-category">linux</span></a> <a href="https://blogs.linux.pizza/tag:openssl" class="hashtag"><span>#</span><span class="p-category">openssl</span></a> <a href="https://blogs.linux.pizza/tag:nginx" class="hashtag"><span>#</span><span class="p-category">nginx</span></a> <a href="https://blogs.linux.pizza/tag:pki" class="hashtag"><span>#</span><span class="p-category">pki</span></a></p>
]]></content:encoded>
      <guid>https://blogs.linux.pizza/secure-your-api-with-client-certificate-authenatication-in-nginx</guid>
      <pubDate>Sat, 15 Nov 2025 20:32:09 +0000</pubDate>
    </item>
    <item>
      <title>Linux.Pizza Matrix-server is (re)launching</title>
      <link>https://blogs.linux.pizza/linux-pizza-matrix-server-is-re-launching</link>
      <description>&lt;![CDATA[After 5 years, the Linux.Pizza Matrix-server is relauching. Last time, we housed over 3k active accounts.&#xA;However, 3k active accounts is not something that we aim to achieve this time, but rather - a complement to your social.linux.pizza Mastodon account.&#xA;&#xA;We achieve this by just enabling social.linux.pizza as a OIDC-provider on the matrix-server - the same functionality that already is being used when you authenticate your mobile application.&#xA;&#xA;In order to login with your social.linux.pizza account. Just used the Matrix-client you prefer (Element(X), SchlidiChat/SchlidiChat Next, Cinny or even Thunderbird) - set &#34;synapse.linux.pizza&#34; as your &#34;Homeserver&#34;, and the option to login with social.linux.pizza should appear.&#xA;&#xA;Image showing the login-process to the Linux.Pizza Matrix-server&#xA;&#xA;Image showing the login-process to the Linux.Pizza Matrix-server&#xA;&#xA;Image showing the login-process to the Linux.Pizza Matrix-server&#xA;&#xA;Image showing the login-process to the Linux.Pizza Matrix-server&#xA;&#xA;Worth noting, is that this service will launch as a Beta-service, so every tester is welcome :)]]&gt;</description>
      <content:encoded><![CDATA[<h2 id="after-5-years-the-linux-pizza-matrix-server-is-relauching-last-time-we-housed-over-3k-active-accounts">After 5 years, the Linux.Pizza Matrix-server is relaunching. Last time, we housed over 3k active accounts.</h2>

<h3 id="however-3k-active-accounts-is-not-something-that-we-aim-to-achieve-this-time-but-rather-a-complement-to-your-social-linux-pizza-mastodon-account">However, 3k active accounts is not the aim this time; rather, a complement to your social.linux.pizza Mastodon account.</h3>

<p>We achieve this by simply enabling social.linux.pizza as an OIDC provider on the Matrix-server – the same functionality that is already used when you authenticate your mobile application.</p>

<p>To log in with your social.linux.pizza account, just use the Matrix client you prefer (Element(X), SchildiChat/SchildiChat Next, Cinny or even Thunderbird) – set “synapse.linux.pizza” as your “Homeserver”, and the option to log in with social.linux.pizza should appear.</p>

<p><img src="https://pictures.blogs.linux.pizza/matrix/step1.png" alt="Image showing the login-process to the Linux.Pizza Matrix-server" title="Picture1"></p>

<p><img src="https://pictures.blogs.linux.pizza/matrix/step2.png" alt="Image showing the login-process to the Linux.Pizza Matrix-server" title="Picture2"></p>

<p><img src="https://pictures.blogs.linux.pizza/matrix/step3.png" alt="Image showing the login-process to the Linux.Pizza Matrix-server" title="Picture3"></p>

<p><img src="https://pictures.blogs.linux.pizza/matrix/step4.png" alt="Image showing the login-process to the Linux.Pizza Matrix-server" title="Picture4"></p>

<p>Worth noting is that this service will launch as a beta, so every tester is welcome :)</p>
]]></content:encoded>
      <guid>https://blogs.linux.pizza/linux-pizza-matrix-server-is-re-launching</guid>
      <pubDate>Sat, 04 Jan 2025 23:17:35 +0000</pubDate>
    </item>
    <item>
      <title>Enable SNMP on Cisco SG350XG-2F10</title>
      <link>https://blogs.linux.pizza/enable-snmp-on-cisco-sg350xg-2f10</link>
      <description>&lt;![CDATA[Writing this down, so people and myself can easily find this solution&#xA;&#xA;The Cisco docs is incomplete, this is the correct way of enabling SNMP on the SG350 series:&#xA;&#xA;configure term&#xA;snmp-server community public RO&#xA;snmp-server community private RW&#xA;snmp-server server&#xA;snmp-server location hackerspace&#xA;&#xA;Thanks to @fedops@fosstodon.org for telling me about the &#34;snmp-server server&#34; step.&#xA;&#xA;#cisco #networking #switching #snmp #observium]]&gt;</description>
      <content:encoded><![CDATA[<h3 id="writing-this-down-so-people-and-myself-can-easily-find-this-solution">Writing this down so people, myself included, can easily find this solution</h3>

<p>The Cisco docs are incomplete; this is the correct way of enabling SNMP on the SG350 series:</p>

<pre><code>configure term
snmp-server community public RO
snmp-server community private RW
snmp-server server
snmp-server location hackerspace
</code></pre>
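
<p>To sanity-check the result from a Linux box, net-snmp&#39;s snmpwalk should now get answers – the IP address here is a placeholder for your switch:</p>

<pre><code># Walk the system subtree; expect sysDescr, sysName, the &#34;hackerspace&#34; location, etc.
snmpwalk -v2c -c public 192.0.2.10 system
</code></pre>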

<p>Thanks to <a href="https://blogs.linux.pizza/@/fedops@fosstodon.org" class="u-url mention">@<span>fedops@fosstodon.org</span></a> for telling me about the “snmp-server server” step.</p>

<p><a href="https://blogs.linux.pizza/tag:cisco" class="hashtag"><span>#</span><span class="p-category">cisco</span></a> <a href="https://blogs.linux.pizza/tag:networking" class="hashtag"><span>#</span><span class="p-category">networking</span></a> <a href="https://blogs.linux.pizza/tag:switching" class="hashtag"><span>#</span><span class="p-category">switching</span></a> <a href="https://blogs.linux.pizza/tag:snmp" class="hashtag"><span>#</span><span class="p-category">snmp</span></a> <a href="https://blogs.linux.pizza/tag:observium" class="hashtag"><span>#</span><span class="p-category">observium</span></a></p>
]]></content:encoded>
      <guid>https://blogs.linux.pizza/enable-snmp-on-cisco-sg350xg-2f10</guid>
      <pubDate>Tue, 09 Apr 2024 07:27:38 +0000</pubDate>
    </item>
    <item>
      <title>Replace the default certificate on a Unifi Dream Router with your own</title>
      <link>https://blogs.linux.pizza/replace-the-default-certificate-on-a-unifi-dream-router-with-your-own</link>
      <description>&lt;![CDATA[I don&#39;t claim responsibility for anything done to your router. This short TODO is written for myself - don&#39;t follow it if you are not familiar with certificates and PKI.&#xA;&#xA;1. SSH into your machine&#xA;2. Navigate to /data/unifi-core/config&#xA;3. Replace unifi-core.key with your private key&#xA;4. Replace unifi-core.crt with your TLS-certificate&#xA;5. Restart Unifi Core:&#xA;systemctl restart unifi-core&#xA;&#xA;Done!&#xA;A screenshot, showing a valid certificate on udr.selea.se, located on a Unifi Dream Router&#xA;&#xA;#linux #pki #certificates #unifi]]&gt;</description>
      <content:encoded><![CDATA[<h3 id="i-dont-claim-responsibility-for-anything-being-done-on-your-router-this-short-todo-is-written-for-myself-dont-follow-if-you-are-not-familiar-with-certificates-and-pki">I don&#39;t claim responsibility for anything done to your router. This short TODO is written for myself – don&#39;t follow it if you are not familiar with certificates and PKI.</h3>

<p>1. SSH into your machine
2. Navigate to <code>/data/unifi-core/config</code>
3. Replace <code>unifi-core.key</code> with your private key
4. Replace <code>unifi-core.crt</code> with your TLS-certificate
5. Restart Unifi Core:</p>

<pre><code>systemctl restart unifi-core
</code></pre>
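
<p>To verify that the router now serves your certificate, something like this from another machine works (replace the hostname with your own):</p>

<pre><code># Show the subject and validity of whatever certificate the UDR presents
openssl s_client -connect udr.selea.se:443 -servername udr.selea.se &lt;/dev/null 2&gt;/dev/null | openssl x509 -noout -subject -dates
</code></pre>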

<p>Done!
<img src="https://pictures.blogs.linux.pizza/misc/udr.png" alt="A screenshot, showing a valid certificate on udr.selea.se, located on a Unifi Dream Router" title="UDR certificate"></p>

<p><a href="https://blogs.linux.pizza/tag:linux" class="hashtag"><span>#</span><span class="p-category">linux</span></a> <a href="https://blogs.linux.pizza/tag:pki" class="hashtag"><span>#</span><span class="p-category">pki</span></a> <a href="https://blogs.linux.pizza/tag:certificates" class="hashtag"><span>#</span><span class="p-category">certificates</span></a> <a href="https://blogs.linux.pizza/tag:unifi" class="hashtag"><span>#</span><span class="p-category">unifi</span></a></p>
]]></content:encoded>
      <guid>https://blogs.linux.pizza/replace-the-default-certificate-on-a-unifi-dream-router-with-your-own</guid>
      <pubDate>Sun, 24 Mar 2024 15:51:35 +0000</pubDate>
    </item>
    <item>
      <title>Random stuff cheat-sheet</title>
      <link>https://blogs.linux.pizza/random-stuff-cheat-sheet</link>
      <description>&lt;![CDATA[LVM stuff&#xA;&#xA;WARNING: PV /dev/sda2 in VG vg0 is using an old PV header, modify the VG to update.&#xA;Update the metadata with the vgck command - where the &#34;vg0&#34; is your own pool.&#xA;vgck --updatemetadata vg0&#xA;curl stuff&#xA;Curl a specific IP with a another host-header&#xA;curl -H &#34;Host: subdomain.example.com&#34; http://172.243.6.400/&#xA;git stuff&#xA;tell git.exe to use the built-in CA-store in Windows&#xA;git config --global http.sslBackend schannel&#xA;random stuff&#xA;See which process is using a file&#xA;fuser file&#xA;Import RootCert into Java-keystore example&#xA;sudo /usr/lib/java/jdk8u292-b10-jre/bin/keytool -import -alias some-rootcert -keystore /usr/lib/java/jdk8u292-b10-jre/lib/security/cacerts -file /usr/share/ca-certificates/extra/someRoot.crt`&#xA;&#xA;Apache2 configs example&#xA;Enable AD-authentication for web-resources&#xA;Location /&#xA;   AuthName &#34;AD authentication&#34;&#xA;   AuthBasicProvider ldap&#xA;   AuthType Basic&#xA;   AuthLDAPGroupAttribute member&#xA;   AuthLDAPGroupAttributeIsDN On&#xA;   AuthLDAPURL ldap://IP:389/OU=Users,OU=pizza,DC=linux,DC=pizza? 
&#xA;   sAMAccountName?sub?(objectClass=)&#xA;   AuthLDAPBindDN cn=tomcat7,ou=ServiceAccounts,ou=Users,OU=pizza,dc=linux,dc=pizza&#xA;  AuthLDAPBindPassword &#34;exec:/bin/cat /etc/apache2/ldap-password.conf&#34;&#xA;  Require ldap-group &#xA;  CN=somegroup,OU=Groups,OU=pizza,DC=linux,DC=pizza&#xA;  ProxyPass &#34;http://localhost:5601/&#34;&#xA;  ProxyPassReverse &#34;http://localhost:5601/&#34;&#xA;&#xA;/Location&#xA;&#xA;Insert Matomo tracking script in Apache using modsubstitute&#xA;AddOutputFilterByType SUBSTITUTE text/html&#xA;Substitute &#34;s-/head-script type=\&#34;text/javascript\&#34;var paq = paq || [];paq.push([&#39;trackPageView&#39;]);paq.push([&#39;enableLinkTracking&#39;]);(function() {var u=\&#34;https://matomo.example.com/\&#34;;paq.push([&#39;setTrackerUrl&#39;, u+&#39;matomo.php&#39;]);paq.push([&#39;setSiteId&#39;, &#39;1&#39;]);var d=document, g=d.createElement(&#39;script&#39;), s=d.getElementsByTagName(&#39;script&#39;)[0];g.type=&#39;text/javascript&#39;; g.async=true; g.defer=true; g.src=u+&#39;matomo.js&#39;; s.parentNode.insertBefore(g,s);})();/script/head-n&#34;&#xA;Load balance backend-servers&#xA;Proxy balancer://k3singress&#xA;&#x9;BalancerMember http://x.x.x.1:80&#xA;&#x9;BalancerMember http://x.x.x.2:80&#xA;&#x9;BalancerMember http://x.x.x.3:80&#xA;&#x9;BalancerMember http://x.x.x.4:80&#xA;&#x9;ProxySet lbmethod=bytraffic&#xA;&#x9;ProxySet connectiontimeout=5 timeout=30&#xA;&#x9;SetEnv force-proxy-request-1.0 1&#xA;&#x9;SetEnv proxy-nokeepalive 1&#xA;/Proxy&#xA;       ProxyPass &#34;/&#34; &#34;balancer://k3singress/&#34;&#xA;       ProxyPassReverse &#34;/&#34; &#34;balancer://k3singress/&#34;&#xA;       ProxyVia Full&#xA;       ProxyRequests On&#xA;       ProxyPreserveHost On&#xA;Basic Apache-config for PHP-FPM&#xA;VirtualHost :80&#xA;  ServerName www.example.com&#xA;  DocumentRoot /srv/www.example.com/htdocs&#xA;  Directory /srv/www.example.com/htdocs&#xA;    AllowOverride All&#xA;    Require all granted&#xA;    DirectoryIndex 
index.html index.htm index.php&#xA;    FilesMatch &#34;\.php$&#34;&#xA;      SetHandler proxy:unix:/run/php/www.example.com.sock|fcgi://localhost&#xA;    /FilesMatch&#xA;  /Directory&#xA;  SetEnvIf x-forwarded-proto https HTTPS=on&#xA;/VirtualHost&#xA;Basic PHP-fpm pool&#xA;[www.example.com]&#xA;user = USER&#xA;group = GROUP&#xA;&#xA;listen = /var/run/php/$pool.sock&#xA;&#xA;listen.owner = www-data&#xA;listen.group = www-data&#xA;&#xA;pm = ondemand&#xA;pm.processidletimeout = 10&#xA;pm.maxchildren = 1&#xA;&#xA;chdir = /&#xA;&#xA;phpadminvalue[sendmailpath] = /usr/sbin/sendmail -t -i -f no-reply@ftp.selea.se&#xA;phpadminvalue[mail.log] = /srv/ftp.selea.se/log/mail.log&#xA;phpadminvalue[openbasedir] = /srv/ftp.selea.se:/tmp&#xA;phpadminvalue[memorylimit] = 64M&#xA;phpadminvalue[uploadmaxfilesize] = 64M&#xA;phpadminvalue[postmaxsize] = 64M&#xA;phpadminvalue[maxexecutiontime] = 180&#xA;phpadminvalue[maxinputvars] = 1000&#xA;&#xA;phpadminvalue[disablefunctions] = passthru,exec,shellexec,system,procopen,popen,curlexec,curlmultiexec,parseinifile,showsource,mail&#xA;Netplan - use device MAC instead of /etc/machine-id for DHCP&#xA;network:&#xA;  ethernets:&#xA;    eth0:&#xA;      dhcp4: true&#xA;      dhcp-identifier: mac&#xA;  version: 2&#xA;HPs apt repo for various utilities for proliant machines &#xA;deb http://downloads.linux.hpe.com/SDR/repo/mcp buster/current non-free&#xA;psql stuff&#xA;CREATE DATABASE yourdbname;&#xA;CREATE USER youruser WITH ENCRYPTED PASSWORD &#39;yourpass&#39;;&#xA;GRANT ALL PRIVILEGES ON DATABASE yourdbname TO youruser;&#xA;&#xA;Get entity for AD/SMB based user so you can put it in getent passwd USERNAME&#xA;Nicely shutdown NetApp cluster&#xA;system node autosupport invoke -node  -type all -message &#34;MAINT=48h Power Maintenance&#34;&#xA;system node halt -node  -skip-lif-migration-before-shutdown true -ignore-quorum-warnings true -inhibit-takeover true&#xA;Allow a process to listen on ports 0-1000 in systemd.service 
file&#xA;[Service]&#xA;AmbientCapabilities=CAPNETBINDSERVICE&#xA;&#xA;#linux #kubernetes #netplan #php-fpm #apache #LVM&#xA;&#xA;]]&gt;</description>
      <content:encoded><![CDATA[<h4 id="lvm-stuff">LVM stuff</h4>

<pre><code>WARNING: PV /dev/sda2 in VG vg0 is using an old PV header, modify the VG to update.
</code></pre>

<p>Update the metadata with the vgck command – where “vg0” is your own volume group.</p>

<pre><code>vgck --updatemetadata vg0
</code></pre>

<h4 id="curl-stuff">curl stuff</h4>

<p>Curl a specific IP with another host-header</p>

<pre><code>curl -H &#34;Host: subdomain.example.com&#34; http://172.243.6.400/
</code></pre>

<h4 id="git-stuff">git stuff</h4>

<p>Tell git.exe to use the built-in CA-store in Windows</p>

<pre><code>git config --global http.sslBackend schannel
</code></pre>

<h4 id="random-stuff">random stuff</h4>

<p>See which process is using a file</p>

<pre><code>fuser file
</code></pre>

<h4 id="import-rootcert-into-java-keystore-example">Import RootCert into Java-keystore example</h4>

<pre><code>sudo /usr/lib/java/jdk8u292-b10-jre/bin/keytool -import -alias some-rootcert -keystore /usr/lib/java/jdk8u292-b10-jre/lib/security/cacerts -file /usr/share/ca-certificates/extra/someRoot.crt
</code></pre>

<h2 id="apache2-configs-example">Apache2 config examples</h2>

<h4 id="enable-ad-authentication-for-web-resources">Enable AD-authentication for web-resources</h4>

<pre><code>&lt;Location /&gt;
   AuthName &#34;AD authentication&#34;
   AuthBasicProvider ldap
   AuthType Basic
   AuthLDAPGroupAttribute member
   AuthLDAPGroupAttributeIsDN On
   AuthLDAPURL ldap://IP:389/OU=Users,OU=pizza,DC=linux,DC=pizza?sAMAccountName?sub?(objectClass=*)
   AuthLDAPBindDN cn=tomcat7,ou=ServiceAccounts,ou=Users,OU=pizza,dc=linux,dc=pizza
  AuthLDAPBindPassword &#34;exec:/bin/cat /etc/apache2/ldap-password.conf&#34;
  Require ldap-group CN=some_group,OU=Groups,OU=pizza,DC=linux,DC=pizza
  ProxyPass &#34;http://localhost:5601/&#34;
  ProxyPassReverse &#34;http://localhost:5601/&#34;

&lt;/Location&gt;

</code></pre>

<h4 id="insert-matomo-tracking-script-in-apache-using-mod-substitute">Insert Matomo tracking script in Apache using mod_substitute</h4>

<pre><code>AddOutputFilterByType SUBSTITUTE text/html
Substitute &#34;s-&lt;/head&gt;-&lt;script type=\&#34;text/javascript\&#34;&gt;var _paq = _paq || [];_paq.push([&#39;trackPageView&#39;]);_paq.push([&#39;enableLinkTracking&#39;]);(function() {var u=\&#34;https://matomo.example.com/\&#34;;_paq.push([&#39;setTrackerUrl&#39;, u+&#39;matomo.php&#39;]);_paq.push([&#39;setSiteId&#39;, &#39;1&#39;]);var d=document, g=d.createElement(&#39;script&#39;), s=d.getElementsByTagName(&#39;script&#39;)[0];g.type=&#39;text/javascript&#39;; g.async=true; g.defer=true; g.src=u+&#39;matomo.js&#39;; s.parentNode.insertBefore(g,s);})();&lt;/script&gt;&lt;/head&gt;-n&#34;
</code></pre>

<h4 id="load-balance-backend-servers">Load balance backend-servers</h4>

<pre><code>&lt;Proxy balancer://k3singress&gt;
	BalancerMember http://x.x.x.1:80
	BalancerMember http://x.x.x.2:80
	BalancerMember http://x.x.x.3:80
	BalancerMember http://x.x.x.4:80
	ProxySet lbmethod=bytraffic
	ProxySet connectiontimeout=5 timeout=30
	SetEnv force-proxy-request-1.0 1
	SetEnv proxy-nokeepalive 1
&lt;/Proxy&gt;
       ProxyPass &#34;/&#34; &#34;balancer://k3singress/&#34;
       ProxyPassReverse &#34;/&#34; &#34;balancer://k3singress/&#34;
       ProxyVia Full
       ProxyRequests Off
       ProxyPreserveHost On
</code></pre>

<h4 id="basic-apache-config-for-php-fpm">Basic Apache-config for PHP-FPM</h4>

<pre><code>&lt;VirtualHost *:80&gt;
  ServerName www.example.com
  DocumentRoot /srv/www.example.com/htdocs
  &lt;Directory /srv/www.example.com/htdocs&gt;
    AllowOverride All
    Require all granted
    DirectoryIndex index.html index.htm index.php
    &lt;FilesMatch &#34;\.php$&#34;&gt;
      SetHandler proxy:unix:/run/php/www.example.com.sock|fcgi://localhost
    &lt;/FilesMatch&gt;
  &lt;/Directory&gt;
  SetEnvIf x-forwarded-proto https HTTPS=on
&lt;/VirtualHost&gt;
</code></pre>

<h4 id="basic-php-fpm-pool">Basic PHP-fpm pool</h4>

<pre><code>[www.example.com]
user = USER
group = GROUP

listen = /var/run/php/$pool.sock

listen.owner = www-data
listen.group = www-data

pm = ondemand
pm.process_idle_timeout = 10
pm.max_children = 1

chdir = /

php_admin_value[sendmail_path] = /usr/sbin/sendmail -t -i -f no-reply@ftp.selea.se
php_admin_value[mail.log] = /srv/ftp.selea.se/log/mail.log
php_admin_value[open_basedir] = /srv/ftp.selea.se:/tmp
php_admin_value[memory_limit] = 64M
php_admin_value[upload_max_filesize] = 64M
php_admin_value[post_max_size] = 64M
php_admin_value[max_execution_time] = 180
php_admin_value[max_input_vars] = 1000

php_admin_value[disable_functions] = passthru,exec,shell_exec,system,proc_open,popen,curl_exec,curl_multi_exec,parse_ini_file,show_source,mail
</code></pre>

<h3 id="netplan-use-device-mac-instead-of-etc-machine-id-for-dhcp">Netplan – use device MAC instead of /etc/machine-id for DHCP</h3>

<pre><code>network:
  ethernets:
    eth0:
      dhcp4: true
      dhcp-identifier: mac
  version: 2
</code></pre>

<h4 id="hps-apt-repo-for-various-utilities-for-proliant-machines">HP&#39;s apt repo for various utilities for ProLiant machines</h4>

<pre><code>deb http://downloads.linux.hpe.com/SDR/repo/mcp buster/current non-free
</code></pre>

<h4 id="psql-stuff">psql stuff</h4>

<pre><code>CREATE DATABASE yourdbname;
CREATE USER youruser WITH ENCRYPTED PASSWORD &#39;yourpass&#39;;
GRANT ALL PRIVILEGES ON DATABASE yourdbname TO youruser;
</code></pre>

<p>Get the passwd entry for an AD/SMB-based user so you can put it in <code>/etc/passwd</code>:</p>

<pre><code>getent passwd USERNAME
</code></pre>
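
<p>Since getent goes through NSS, the same command works for local and directory-backed users alike – quick local example:</p>

<pre><code># The first field of the passwd entry is the login name; prints: root
getent passwd root | cut -d: -f1
</code></pre>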

<p>Nicely shut down a NetApp cluster</p>

<pre><code>system node autosupport invoke -node * -type all -message &#34;MAINT=48h Power Maintenance&#34;
system node halt -node * -skip-lif-migration-before-shutdown true -ignore-quorum-warnings true -inhibit-takeover true
</code></pre>

<p>Allow a process to listen on privileged ports (below 1024) in a systemd.service file</p>

<pre><code>[Service]
AmbientCapabilities=CAP_NET_BIND_SERVICE
</code></pre>

<p><a href="https://blogs.linux.pizza/tag:linux" class="hashtag"><span>#</span><span class="p-category">linux</span></a> <a href="https://blogs.linux.pizza/tag:kubernetes" class="hashtag"><span>#</span><span class="p-category">kubernetes</span></a> <a href="https://blogs.linux.pizza/tag:netplan" class="hashtag"><span>#</span><span class="p-category">netplan</span></a> <a href="https://blogs.linux.pizza/tag:php" class="hashtag"><span>#</span><span class="p-category">php</span></a>-fpm <a href="https://blogs.linux.pizza/tag:apache" class="hashtag"><span>#</span><span class="p-category">apache</span></a> <a href="https://blogs.linux.pizza/tag:LVM" class="hashtag"><span>#</span><span class="p-category">LVM</span></a></p>
]]></content:encoded>
      <guid>https://blogs.linux.pizza/random-stuff-cheat-sheet</guid>
      <pubDate>Fri, 30 Jun 2023 07:53:42 +0000</pubDate>
    </item>
    <item>
      <title>Where the /¤&#34;# is /var/log/syslog in Debian 12?????</title>
      <link>https://blogs.linux.pizza/where-the-is-var-log-syslog-in-debian-12</link>
      <description>&lt;![CDATA[Imagine my surprise when I could not tail the syslog anymore..&#xA;&#xA;Debian 12 has moved the syslog to journalctl. So just run journalctl -f.&#xA;If you want to check the logs from, for example, apache:&#xA;journalctl -u apache2.service&#xA;&#xA;If you want to format the logs as json, just append -o json-pretty&#xA;&#xA;#linux #debian #logging ]]&gt;</description>
      <content:encoded><![CDATA[<h4 id="imagine-my-suprise-when-i-could-not-tail-the-syslog-anymore">Imagine my surprise when I could not tail the syslog anymore..</h4>

<p>Debian 12 has moved the syslog to journalctl. So just run <code>journalctl -f</code> and you will be greeted with the logs running through the screen :)</p>

<p>If you want to check the logs from, for example, apache:</p>

<p><code>journalctl -u apache2.service</code></p>

<p>If you want to format the logs as json, just append <code>-o json-pretty</code>.</p>
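
<p>The flags combine, so following the apache logs as pretty-printed JSON becomes:</p>

<pre><code>journalctl -u apache2.service -f -o json-pretty
</code></pre>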

<p><a href="https://blogs.linux.pizza/tag:linux" class="hashtag"><span>#</span><span class="p-category">linux</span></a> <a href="https://blogs.linux.pizza/tag:debian" class="hashtag"><span>#</span><span class="p-category">debian</span></a> <a href="https://blogs.linux.pizza/tag:logging" class="hashtag"><span>#</span><span class="p-category">logging</span></a></p>
]]></content:encoded>
      <guid>https://blogs.linux.pizza/where-the-is-var-log-syslog-in-debian-12</guid>
      <pubDate>Fri, 16 Jun 2023 09:59:02 +0000</pubDate>
    </item>
    <item>
      <title>A note regarding mirror.linux.pizza</title>
      <link>https://blogs.linux.pizza/a-note-regarding-mirror-linux-pizza</link>
      <description>&lt;![CDATA[8 years ago, I saw a post somewhere about a pretty small niche distro that was looking for a mirror for its packages. That got me thinking about the possibility to provide a public mirror for Linux packages for various distros.&#xA;&#xA;It started back then in my home office, with redundant ISPs and the two HP Microservers and the Supermicro box that I had running.&#xA;My ambitions did not stop, and I applied to be an official mirror for Debian, Ubuntu, Parabola, Linux-Libre and more in the weeks after.&#xA;&#xA;One year after that, I got access to a nice environment that my friends had. With 100TB of storage and unlimited bandwidth - I moved the mirror there, and it has been living there ever since.&#xA;&#xA;Fast forward a couple of years...&#xA;&#xA;The small distros that mirror.linux.pizza was the sole mirror for have disappeared, and the other projects such as Parabola, EndeavourOS and PureOS where I was the first one to start mirroring them - have gained plenty of other mirrors to help out. &#xA;&#xA;I&#39;ve decided to shut mirror.linux.pizza down, the reason is financial and I want to focus my effort on the community that is social.linux.pizza instead.&#xA;&#xA;I&#39;ve already notified the different projects about the shutdown, and I will take steps to ensure that systems do not break after the mirror goes offline, such as HTTP-redirects to other mirrors in the Nordics.&#xA;&#xA;I&#39;ve also reached out to the hosting providers that have been using the mirror exclusively to notify them about the upcoming change, so they can prepare for that as well.&#xA;&#xA;I am thankful that I have been able to give something back to the community by hosting this mirror - around 100k unique IP-addresses connect to it every day. So it did definitely help out! &#xA;&#xA;#linux #mirror #mirrorlinuxpizza #sunset #debian #ubuntu #pureos]]&gt;</description>
<content:encoded><![CDATA[<p>8 years ago, I saw a post somewhere about a pretty small niche distro that was looking for a mirror for its packages. That got me thinking about the possibility of providing a public mirror for Linux packages for various distros.</p>

<p>It started back then in my home office, with redundant ISPs and the two HP MicroServers and the Supermicro box that I had running.
My ambitions did not stop there, and in the weeks after I applied to be an official mirror for Debian, Ubuntu, Parabola, Linux-Libre and more.</p>

<p>One year after that, I got access to a nice environment that my friends had. With 100 TB of storage and unlimited bandwidth, I moved the mirror there, and it has been living there ever since.</p>

<p>Fast forward a couple of years...</p>

<p>The small distros that mirror.linux.pizza was the sole mirror for have disappeared, and the other projects such as Parabola, EndeavourOS and PureOS, where I was the first one to start mirroring them, have gained plenty of other mirrors to help out.</p>

<p>I&#39;ve decided to shut mirror.linux.pizza down. The reason is financial, and I want to focus my effort on the community that is social.linux.pizza instead.</p>

<p>I&#39;ve already notified the different projects about the shutdown, and I will take steps to ensure that systems do not break after the mirror goes offline, such as HTTP redirects to other mirrors in the Nordics.</p>

<p>I&#39;ve also reached out to the hosting providers that have been using the mirror exclusively to notify them about the upcoming change, so they can prepare for that as well.</p>

<p>I am thankful that I have been able to give something back to the community by hosting this mirror; around 100k unique IP addresses connect to it every day. So it definitely did help out!</p>

<p><a href="https://blogs.linux.pizza/tag:linux" class="hashtag"><span>#</span><span class="p-category">linux</span></a> <a href="https://blogs.linux.pizza/tag:mirror" class="hashtag"><span>#</span><span class="p-category">mirror</span></a> <a href="https://blogs.linux.pizza/tag:mirrorlinuxpizza" class="hashtag"><span>#</span><span class="p-category">mirrorlinuxpizza</span></a> <a href="https://blogs.linux.pizza/tag:sunset" class="hashtag"><span>#</span><span class="p-category">sunset</span></a> <a href="https://blogs.linux.pizza/tag:debian" class="hashtag"><span>#</span><span class="p-category">debian</span></a> <a href="https://blogs.linux.pizza/tag:ubuntu" class="hashtag"><span>#</span><span class="p-category">ubuntu</span></a> <a href="https://blogs.linux.pizza/tag:pureos" class="hashtag"><span>#</span><span class="p-category">pureos</span></a></p>
]]></content:encoded>
      <guid>https://blogs.linux.pizza/a-note-regarding-mirror-linux-pizza</guid>
      <pubDate>Mon, 27 Mar 2023 16:33:51 +0000</pubDate>
    </item>
    <item>
      <title>Kubectl cheat-sheet</title>
      <link>https://blogs.linux.pizza/kubectl-cheat-sheet</link>
      <description>&lt;![CDATA[## Just some random #kubectl commands for myself. I have tested these on 1.20  1.25&#xA;&#xA;Get all ingress logs (if your ingress is nginx)&#xA;kubectl logs -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx&#xA;Get all logs from Deployment&#xA;kubectl logs deployment/deployment -n namespace --watch&#xA;Why is the pod stuck in &#34;ContainerCreating&#34;?&#xA;kubectl get events --sort-by=.metadata.creationTimestamp --watch&#xA;Restart your deployment, nice and clean&#xA;kubectl rollout restart deployment/deployment -n namespace&#xA;Check which namespaces are using the most disk space&#xA;kubectl get namespace --no-headers | xargs -I {} sh -c &#39;echo {}; kubectl get pods -n {} --no-headers | xargs -I {} sh -c &#34;kubectl logs {} -n {} | wc -c&#34;&#39; | awk &#39;{print $1&#34; &#34;($2/1024/1024)&#34; MB&#34;}&#39; | sort -k2 -n -r | head&#xA;Check if any pods are using a lot of disk space&#xA;kubectl get pods --all-namespaces -o json | jq &#39;.items[].spec.containers[].resources.requests.storage&#39; | grep -v null&#xA;Check the Kubernetes event logs for any disk-related errors&#xA;&#xA;kubectl get events --field-selector involvedObject.kind=Node,reason=OutOfDisk&#xA;&#xA;I&#39;ll add more when I find more useful stuff&#xA;&#xA;#linux #k8s #kubernetes #kubectl #ingress #nginx #deployment #logs]]&gt;</description>
<content:encoded><![CDATA[<h2 id="just-some-random-kubectl-commands-for-myself-i-have-tested-these-on-1-20-1-25">Just some random <a href="https://blogs.linux.pizza/tag:kubectl" class="hashtag"><span>#</span><span class="p-category">kubectl</span></a> commands for myself. I have tested these on 1.20 to 1.25</h2>

<h4 id="get-all-ingress-logs-if-your-ingress-is-nginx">Get all ingress logs (if your ingress is nginx)</h4>

<pre><code>kubectl logs -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx
</code></pre>

<h4 id="get-all-logs-from-deployment">Get all logs from Deployment</h4>

<pre><code>kubectl logs deployment/&lt;deployment&gt; -n &lt;namespace&gt; --watch
</code></pre>

<h4 id="why-is-the-pod-stuck-in-containercreating">Why is the pod stuck in “ContainerCreating”?</h4>

<pre><code>kubectl get events --sort-by=.metadata.creationTimestamp --watch
</code></pre>

<h4 id="restart-your-deployment-nice-and-clean">Restart your deployment, nice and clean</h4>

<pre><code>kubectl rollout restart deployment/&lt;deployment&gt; -n &lt;namespace&gt;
</code></pre>

<h4 id="check-which-namespaces-are-using-the-most-disk-space">Check which namespaces&#39; pod logs are using the most disk space</h4>

<pre><code>kubectl get namespace --no-headers | awk &#39;{print $1}&#39; | xargs -I NS sh -c &#39;kubectl get pods -n NS --no-headers 2&gt;/dev/null | awk &#34;{print \$1}&#34; | xargs -r -I POD sh -c &#34;kubectl logs POD -n NS 2&gt;/dev/null | wc -c&#34; | awk -v ns=NS &#34;{s+=\$1} END {print ns, s/1024/1024 \&#34; MB\&#34;}&#34;&#39; | sort -k2 -n -r | head
</code></pre>

<h4 id="check-if-any-pods-are-using-a-lot-of-disk-space">Check if any pods are using a lot of disk space</h4>

<pre><code>kubectl get pods --all-namespaces -o json | jq &#39;.items[].spec.containers[].resources.requests[&#34;ephemeral-storage&#34;]&#39; | grep -v null
</code></pre>

<h4 id="check-the-kubernetes-event-logs-for-any-disk-related-errors">Check the Kubernetes event logs for any disk-related errors</h4>

<pre><code>kubectl get events --field-selector involvedObject.kind=Node,reason=OutOfDisk
</code></pre>
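
<p>Heads up: the OutOfDisk node condition was removed from Kubernetes a while back, so on the versions above disk trouble will more likely show up as DiskPressure instead. The same kind of query for that (as far as I know, NodeHasDiskPressure is the reason the kubelet emits):</p>

<pre><code>kubectl get events --field-selector involvedObject.kind=Node,reason=NodeHasDiskPressure
</code></pre>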

<p>I&#39;ll add more when I find more useful stuff.</p>

<p><a href="https://blogs.linux.pizza/tag:linux" class="hashtag"><span>#</span><span class="p-category">linux</span></a> <a href="https://blogs.linux.pizza/tag:k8s" class="hashtag"><span>#</span><span class="p-category">k8s</span></a> <a href="https://blogs.linux.pizza/tag:kubernetes" class="hashtag"><span>#</span><span class="p-category">kubernetes</span></a> <a href="https://blogs.linux.pizza/tag:kubectl" class="hashtag"><span>#</span><span class="p-category">kubectl</span></a> <a href="https://blogs.linux.pizza/tag:ingress" class="hashtag"><span>#</span><span class="p-category">ingress</span></a> <a href="https://blogs.linux.pizza/tag:nginx" class="hashtag"><span>#</span><span class="p-category">nginx</span></a> <a href="https://blogs.linux.pizza/tag:deployment" class="hashtag"><span>#</span><span class="p-category">deployment</span></a> <a href="https://blogs.linux.pizza/tag:logs" class="hashtag"><span>#</span><span class="p-category">logs</span></a></p>
]]></content:encoded>
      <guid>https://blogs.linux.pizza/kubectl-cheat-sheet</guid>
      <pubDate>Tue, 28 Feb 2023 08:04:47 +0000</pubDate>
    </item>
    <item>
      <title>Download all files in a remote catalogue over SFTP with lftp</title>
      <link>https://blogs.linux.pizza/download-all-files-in-a-remote-catalogue-over-sftp-with-lftp</link>
      <description>&lt;![CDATA[Hopefully this will save some of you a lot of time and energy, and maybe even your day.&#xA;&#xA;I recently had trouble getting a job to work. The short story is:&#xA;&#xA;Download all files in a remote catalogue, over SFTP, at certain times.&#xA;&#xA;I had a working solution with curl, but when the naming of the files changed (such as whitespaces) - the function broke.&#xA;&#xA;lftp to the rescue&#xA;&#xA;After having spent a couple of hours trying to grasp lftp via the manpage, I came up with a solution:&#xA;lftp -c &#39;&#xA;open sftp://USER:PASSWORD@remoteserver.example.com:22&#xA;mirror --verbose --use-pget-n=8 -c /remote/catalogue/ /local/catalogue/&#xA;&#39;&#xA;And if you want to remove the source-files after download:&#xA;lftp -c &#39;&#xA;open sftp://USER:PASSWORD@remoteserver.example.com:22&#xA;mirror --Remove-source-files --verbose --use-pget-n=8 -c /remote/catalogue/ /local/catalogue/&#xA;&#39;&#xA;&#xA;This downloads all files in the specified remote catalogue to the specified local one, then exits.&#xA;&#xA;#linux #bash #sftp #lftp]]&gt;</description>
<content:encoded><![CDATA[<h3 id="hopefully-this-will-save-some-of-you-alot-of-time-energy-and-save-you-day">Hopefully this will save some of you a lot of time and energy, and maybe even your day.</h3>

<p>I recently had trouble getting a job to work. The short story is:</p>

<p>Download all files in a remote catalogue, over SFTP, at certain times.</p>

<p>I had a working solution with curl, but when the naming of the files changed (whitespace appeared, for example), it broke.</p>

<h3 id="lftp-the-saver">lftp to the rescue</h3>

<p>After having spent a couple of hours trying to grasp lftp via the man page, I came up with a solution:</p>

<pre><code>lftp -c &#39;
open sftp://USER:PASSWORD@remoteserver.example.com:22
mirror --verbose --use-pget-n=8 -c /remote/catalogue/ /local/catalogue/
&#39;
</code></pre>

<p>And if you want to remove the source-files after download:</p>

<pre><code>lftp -c &#39;
open sftp://USER:PASSWORD@remoteserver.example.com:22
mirror --Remove-source-files --verbose --use-pget-n=8 -c /remote/catalogue/ /local/catalogue/
&#39;
</code></pre>

<p>This downloads all files in the specified remote catalogue to the specified local one, then exits.</p>
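
<p>A small caveat: putting USER:PASSWORD in the URL leaks the password into your shell history and the process list. As far as I know, lftp can pick the credentials up from <code>~/.netrc</code> instead. Create it (and <code>chmod 600</code> it) with the same hostname and placeholders as above:</p>

<pre><code>machine remoteserver.example.com
login USER
password PASSWORD
</code></pre>

<p>With that in place, you can shorten the open line to <code>open sftp://remoteserver.example.com:22</code>.</p>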

<p><a href="https://blogs.linux.pizza/tag:linux" class="hashtag"><span>#</span><span class="p-category">linux</span></a> <a href="https://blogs.linux.pizza/tag:bash" class="hashtag"><span>#</span><span class="p-category">bash</span></a> <a href="https://blogs.linux.pizza/tag:sftp" class="hashtag"><span>#</span><span class="p-category">sftp</span></a> <a href="https://blogs.linux.pizza/tag:lftp" class="hashtag"><span>#</span><span class="p-category">lftp</span></a></p>
]]></content:encoded>
      <guid>https://blogs.linux.pizza/download-all-files-in-a-remote-catalogue-over-sftp-with-lftp</guid>
      <pubDate>Wed, 11 Jan 2023 08:58:22 +0000</pubDate>
    </item>
  </channel>
</rss>