LinuxPizza

Personal notes and occasional posts – 100% human, 0% AI generated

After a few hours of trying to make it work with my current CA – where the root is stored offline and AIA, OCSP, CRL and all that stuff is done by the book – I gave up. Somehow, the open source variant of Nginx does not really like my OCSP setup, no idea why, and I have no idea how to troubleshoot it.

Solution? KISS-principle!

I'll write this down, quick and dirty. But hopefully it helps someone.

Let's start by creating the private key for the CA:

openssl genpkey -algorithm RSA -out CA_ROOT.key -aes256

With this command, we have created a private key encrypted with AES-256. You will be prompted for a password – write that down. The following command will then create a certificate from the private key, valid for 10 years.

openssl req -x509 -new -nodes -key CA_ROOT.key -sha256 -days 3650 -out CA_ROOT.crt

Fill in the information that the above command asks of you, like country code and so on. After that, your CA is done. A crude, ugly and honestly boring CA, but it'll work for this use case.

Let's create the client-certificate!

First, we'll create the private key and the CSR:

openssl genpkey -algorithm RSA -out client-cert.key
openssl req -new -key client-cert.key -out client-cert.csr

And again, fill out the information openssl asks for – it will populate the CSR. Make it look pretty. Ideally, these commands should be run on the client only, so the private key never leaves it. The CSR is what the CA needs in order to sign and create a valid certificate.

Bring the .csr to the CA, and sign it:

openssl x509 -req -in client-cert.csr -CA CA_ROOT.crt -CAkey CA_ROOT.key -CAcreateserial -out client-cert.crt -days 365 -sha256

This will give you a signed certificate for your client named “client-cert.crt” – bring that to the client machine and install it.

Firefox wants a .pfx:

In order to import the certificate into Firefox, you'll need to convert it to p12/pfx format:

openssl pkcs12 -export -out client-cert.pfx -inkey client-cert.key -in client-cert.crt -certfile CA_ROOT.crt

Please note that you'll also need the CA_ROOT.crt file you created earlier.
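
The whole flow above can be sanity-checked end to end in a throwaway directory before you touch a real machine. This is just a sketch: it skips the -aes256 passphrase so it runs unattended, and the subject names are made up.

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"

# CA: private key + self-signed root cert (no -aes256 so it runs unattended)
openssl genpkey -algorithm RSA -out CA_ROOT.key
openssl req -x509 -new -key CA_ROOT.key -sha256 -days 3650 \
  -subj "/CN=Throwaway Root CA" -out CA_ROOT.crt

# Client: private key + CSR
openssl genpkey -algorithm RSA -out client-cert.key
openssl req -new -key client-cert.key -subj "/CN=throwaway-client" -out client-cert.csr

# CA signs the CSR
openssl x509 -req -in client-cert.csr -CA CA_ROOT.crt -CAkey CA_ROOT.key \
  -CAcreateserial -out client-cert.crt -days 365 -sha256

# The signed certificate should verify against the root
openssl verify -CAfile CA_ROOT.crt client-cert.crt
```

The last command is the useful part – if it does not report OK, the cert was not signed by the CA you think it was.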

Configure NGINX to do client-certificate authentication

Navigate to the virtualhost you want to enable client-certificate authentication on, and add the following:

    ssl_client_certificate /etc/ssl/private/CA_ROOT.crt;
    ssl_verify_client on;
    ssl_verify_depth 2;

Please note that you have to place the CA_ROOT.crt file in /etc/ssl/private/.

Restart NGINX and try to visit the site. Your browser should prompt you to pick a client certificate.
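
If the vhost proxies to a backend, you can also tell the backend who the verified client was. A small sketch – the header name and upstream address are my own inventions, adapt to your setup:

```nginx
location / {
    proxy_pass http://127.0.0.1:8080;
    # $ssl_client_s_dn holds the subject DN of the verified client certificate
    proxy_set_header X-Client-DN $ssl_client_s_dn;
}
```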

#linux #openssl #nginx #pki

After 5 years, the Linux.Pizza Matrix server is relaunching. Last time, we housed over 3k active accounts.

However, 3k active accounts is not something we aim for this time – rather, the server is meant as a complement to your social.linux.pizza Mastodon account.

We achieve this by simply enabling social.linux.pizza as an OIDC provider on the Matrix server – the same functionality that is already used when you authenticate your mobile application.

To log in with your social.linux.pizza account, just use the Matrix client you prefer (Element(X), SchildiChat/SchildiChat Next, Cinny or even Thunderbird) – set “synapse.linux.pizza” as your “Homeserver”, and the option to log in with social.linux.pizza should appear.

Image showing the login-process to the Linux.Pizza Matrix-server

Worth noting is that this service will launch as a beta, so every tester is welcome :)

Writing this down, so people and myself can easily find this solution

The Cisco docs are incomplete; this is the correct way of enabling SNMP on the SG350 series:

configure term
snmp-server community public RO
snmp-server community private RW
snmp-server server
snmp-server location hackerspace

Thanks to @fedops@fosstodon.org for telling me about the “snmp-server server” step.

#cisco #networking #switching #snmp #observium

I don't claim responsibility for anything done to your router. This short TODO is written for myself – don't follow it if you are not familiar with certificates and PKI.

1. SSH into your machine
2. Navigate to /data/unifi-core/config
3. Replace unifi-core.key with your private key
4. Replace unifi-core.crt with your TLS certificate
5. Restart Unifi Core:

systemctl restart unifi-core

Done!

A screenshot showing a valid certificate on udr.selea.se, located on a Unifi Dream Router
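
A common pitfall here is installing a certificate that doesn't belong to the private key. You can compare the modulus of both files before restarting. This sketch generates a throwaway pair just to demonstrate the check – on the router you'd point the last two commands at the real files in /data/unifi-core/config:

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"

# Throwaway stand-ins for unifi-core.key / unifi-core.crt
openssl req -x509 -newkey rsa:2048 -nodes -keyout unifi-core.key \
  -out unifi-core.crt -days 1 -subj "/CN=udr.example"

# These two digests must be identical, otherwise the key/cert pair doesn't match
openssl x509 -noout -modulus -in unifi-core.crt | openssl md5
openssl rsa  -noout -modulus -in unifi-core.key  | openssl md5
```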

#linux #pki #certificates #unifi

LVM stuff

WARNING: PV /dev/sda2 in VG vg0 is using an old PV header, modify the VG to update.

Update the metadata with the vgck command – where “vg0” is your own volume group:

vgck --updatemetadata vg0

curl stuff

Curl a specific IP with another host-header

curl -H "Host: subdomain.example.com" http://172.243.6.40/

git stuff

tell git.exe to use the built-in CA-store in Windows

git config --global http.sslBackend schannel

random stuff

See which process is using a file

fuser file
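
A quick self-contained demo of what that looks like, assuming psmisc's fuser is installed (the temp file and the tail process are just props):

```shell
# Hold a temp file open with tail, then ask fuser who has it
tmp=$(mktemp)
tail -f "$tmp" & holder=$!
sleep 1
fuser "$tmp"      # prints the PID(s) currently holding the file open
kill "$holder"
rm -f "$tmp"
```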

Import RootCert into Java-keystore example

sudo /usr/lib/java/jdk8u292-b10-jre/bin/keytool -import -alias some-rootcert -keystore /usr/lib/java/jdk8u292-b10-jre/lib/security/cacerts -file /usr/share/ca-certificates/extra/someRoot.crt

Apache2 configs example

Enable AD-authentication for web-resources

<Location />
  AuthName "AD authentication"
  AuthBasicProvider ldap
  AuthType Basic
  AuthLDAPGroupAttribute member
  AuthLDAPGroupAttributeIsDN On
  AuthLDAPURL ldap://IP:389/OU=Users,OU=pizza,DC=linux,DC=pizza?sAMAccountName?sub?(objectClass=*)
  AuthLDAPBindDN cn=tomcat7,ou=ServiceAccounts,ou=Users,OU=pizza,dc=linux,dc=pizza
  AuthLDAPBindPassword "exec:/bin/cat /etc/apache2/ldap-password.conf"
  Require ldap-group CN=some_group,OU=Groups,OU=pizza,DC=linux,DC=pizza
  ProxyPass "http://localhost:5601/"
  ProxyPassReverse "http://localhost:5601/"
</Location>

Insert Matomo tracking script in Apache using mod_substitute

AddOutputFilterByType SUBSTITUTE text/html
Substitute "s-</head>-<script type=\"text/javascript\">var _paq = _paq || [];_paq.push(['trackPageView']);_paq.push(['enableLinkTracking']);(function() {var u=\"https://matomo.example.com/\";_paq.push(['setTrackerUrl', u+'matomo.php']);_paq.push(['setSiteId', '1']);var d=document, g=d.createElement('script'), s=d.getElementsByTagName('script')[0];g.type='text/javascript'; g.async=true; g.defer=true; g.src=u+'matomo.js'; s.parentNode.insertBefore(g,s);})();</script></head>-n"

Load balance backend-servers

<Proxy balancer://k3singress>
	BalancerMember http://x.x.x.1:80
	BalancerMember http://x.x.x.2:80
	BalancerMember http://x.x.x.3:80
	BalancerMember http://x.x.x.4:80
	ProxySet lbmethod=bytraffic
	ProxySet connectiontimeout=5 timeout=30
	SetEnv force-proxy-request-1.0 1
	SetEnv proxy-nokeepalive 1
</Proxy>
       ProxyPass "/" "balancer://k3singress/"
       ProxyPassReverse "/" "balancer://k3singress/"
       ProxyVia Full
       ProxyRequests Off
       ProxyPreserveHost On

Basic Apache-config for PHP-FPM

<VirtualHost *:80>
  ServerName www.example.com
  DocumentRoot /srv/www.example.com/htdocs
  <Directory /srv/www.example.com/htdocs>
    AllowOverride All
    Require all granted
    DirectoryIndex index.html index.htm index.php
    <FilesMatch "\.php$">
      SetHandler proxy:unix:/run/php/www.example.com.sock|fcgi://localhost
    </FilesMatch>
  </Directory>
  SetEnvIf x-forwarded-proto https HTTPS=on
</VirtualHost>

Basic PHP-fpm pool

[www.example.com]
user = USER
group = GROUP

listen = /var/run/php/$pool.sock

listen.owner = www-data
listen.group = www-data

pm = ondemand
pm.process_idle_timeout = 10
pm.max_children = 1

chdir = /

php_admin_value[sendmail_path] = /usr/sbin/sendmail -t -i -f no-reply@ftp.selea.se
php_admin_value[mail.log] = /srv/ftp.selea.se/log/mail.log
php_admin_value[open_basedir] = /srv/ftp.selea.se:/tmp
php_admin_value[memory_limit] = 64M
php_admin_value[upload_max_filesize] = 64M
php_admin_value[post_max_size] = 64M
php_admin_value[max_execution_time] = 180
php_admin_value[max_input_vars] = 1000

php_admin_value[disable_functions] = passthru,exec,shell_exec,system,proc_open,popen,curl_exec,curl_multi_exec,parse_ini_file,show_source,mail

Netplan – use device MAC instead of /etc/machine-id for DHCP

network:
  ethernets:
    eth0:
      dhcp4: true
      dhcp-identifier: mac
  version: 2

HP's apt repo with various utilities for ProLiant machines

deb http://downloads.linux.hpe.com/SDR/repo/mcp buster/current non-free

psql stuff

CREATE DATABASE yourdbname;
CREATE USER youruser WITH ENCRYPTED PASSWORD 'yourpass';
GRANT ALL PRIVILEGES ON DATABASE yourdbname TO youruser;
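
Worth knowing: on PostgreSQL 15 and later the public schema is no longer writable by everyone, so the user also needs a grant on the schema itself (run this connected to the new database):

```sql
-- PostgreSQL 15+: the database grant alone is not enough anymore
\c yourdbname
GRANT ALL ON SCHEMA public TO youruser;
```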

Get the entry for an AD/SMB-based user so you can put it in /etc/passwd:

getent passwd USERNAME

#linux #kubernetes #netplan #php-fpm #apache #LVM

Imagine my surprise when I could not tail the syslog anymore..

Debian 12 has moved the syslog into the systemd journal. So just run journalctl -f and you will be greeted with the logs running through the screen :)

If you want to check the logs from for example apache:

journalctl -u apache2.service

If you want to format the logs as JSON, just append -o json-pretty

#linux #debian #logging

8 years ago, I saw a post somewhere about a pretty small niche distro that was looking for a mirror for its packages. That got me thinking about the possibility of providing a public mirror of Linux packages for various distros.

It started back then in my home office, with redundant ISPs and the two HP Microservers and the Supermicro box that I had running. My ambitions did not stop there, and in the weeks after I applied to become an official mirror for Debian, Ubuntu, Parabola, Linux-Libre and more.

One year after that, I got access to a nice environment that my friends had. With 100TB of storage and unlimited bandwidth – I moved the mirror there, and it has been living there ever since.

Fast forward a couple of years...

The small distros that mirror.linux.pizza was the sole mirror for have disappeared, and the other projects such as Parabola, EndeavourOS and PureOS – where I was the first to start mirroring them – have gotten plenty more mirrors to help out.

I've decided to shut mirror.linux.pizza down. The reason is financial, and I want to focus my effort on the community that is social.linux.pizza instead.

I've already notified the different projects about the shutdown, and I will take steps to ensure that systems do not break after the mirror goes offline, such as HTTP redirects to other mirrors in the Nordics.

I've also reached out to the hosting providers that have been using the mirror exclusively to notify them about the upcoming change, so they can prepare for it as well.

I am thankful that I have been able to give something back to the community by hosting this mirror – around 100k unique IP-addresses connect to it every day. So it did definitely help out!

#linux #mirror #mirrorlinuxpizza #sunset #debian #ubuntu #pureos

Just some random #kubectl commands for myself. I have tested these on 1.20 through 1.25.

Get all ingress logs (if your ingress is nginx)

kubectl logs -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx

Get all logs from Deployment

kubectl logs deployment/<deployment> -n <namespace> -f

Why is the pod stuck in “ContainerCreating”?

kubectl get events --sort-by=.metadata.creationTimestamp --watch

Restart your deployment, nice and clean

kubectl rollout restart deployment/<deployment> -n <namespace>

Roughly check which namespaces' pod logs take the most space

for ns in $(kubectl get namespaces --no-headers | awk '{print $1}'); do
  size=0
  for pod in $(kubectl get pods -n "$ns" --no-headers | awk '{print $1}'); do
    size=$((size + $(kubectl logs "$pod" -n "$ns" 2>/dev/null | wc -c)))
  done
  echo "$ns $((size / 1024 / 1024)) MB"
done | sort -k2 -n -r | head

Check if any pods request a lot of ephemeral storage, and look for disk-pressure events on nodes

kubectl get pods --all-namespaces -o json | jq '.items[].spec.containers[].resources.requests["ephemeral-storage"]' | grep -v null
kubectl get events --field-selector involvedObject.kind=Node,reason=NodeHasDiskPressure

I'll add more when I find more useful stuff

#linux #k8s #kubernetes #kubectl #ingress #nginx #deployment #logs

Hopefully this will save some of you a lot of time and energy, and save your day.

I recently had trouble getting a job to work. The short story is:

Download all files in a remote directory, over SFTP, at certain times.

I had a working solution with curl, but when the naming of the files changed (such as containing whitespace) – the function broke.

lftp – the savior

After spending a couple of hours trying to grasp lftp via the manpage, I came up with a solution:

lftp -c '
open sftp://USER:PASSWORD@remoteserver.example.com:22
mirror --verbose --use-pget-n=8 -c /remote/catalogue/ /local/catalogue/
'

And if you want to remove the source-files after download:

lftp -c '
open sftp://USER:PASSWORD@remoteserver.example.com:22
mirror --Remove-source-files --verbose --use-pget-n=8 -c /remote/catalogue/ /local/catalogue/
'

This downloads all files in the specified remote directory to the specified local one, then exits.

#linux #bash #sftp #lftp

Here is a post about Windows for a change.

If you want to check whether you can query an NTP server from your Windows machine, just use the following:

w32tm /stripchart /computer:computername

For example:

w32tm /stripchart /computer:ntp.netnod.se

If everything works, you'll see something like this:

Tracking ntp.netnod.se [194.58.200.20:123].
The current time is 2022-12-06 14:06:13.
14:06:13, d:+00.0260863s o:+00.0277480s  [      *      ]

Have a pleasant Tuesday

#windows #ntp