LinuxPizza

Personal notes and occasional posts

I am a fan of jalapeños and chilies in general, and this year I had some luck with the weather so my only jalapeño plant did pretty well. So today, we are going to pickle the jalapeños.

Freshly picked jalapeños

What you will need

  • Garlic (a couple of smashed cloves, or powder is fine)
  • 0.45 dl sugar
  • 15 ml salt
  • 3 dl white vinegar
  • 3 dl water
  • ~400 grams of fresh jalapeños (or other chilies)

The procedure

I decided to slice the jalapeños together with three quite big cloves of garlic: Sliced Jalapeños

Then, mix the water, vinegar, salt and sugar in a pot. Let the sugar and salt dissolve and wait until the mix starts to boil a little.

a soon boiling pot

Then, just add the jalapeños and garlic. Let it simmer for 5 minutes. Jalapeños and garlic in a pot

Lastly, put it into your glass-container of choice! Jalapeños and garlic in a container

This should last a couple of months, and goes well with tacos, pizza or – if you are like me – on EVERYTHING!

Done and easy! Everyone can do this, and it works with almost anything. I also did this with unripe tomatoes, and they tasted very good too! Unripe tomatoes

ModSecurity is an open-source Web Application Firewall (WAF) for modern web servers such as Apache and Nginx. In this short guide we are going to install ModSecurity for Apache on Debian 10, enable it, and add additional rules.

Installation of the Modsecurity module

The installation is very simple:

root@debian:~# apt install libapache2-mod-security2 -y

Great, now we just have to activate the module. It is currently running in “Detection Mode”, which means that it will only log attempts and not perform any blocking. This can be useful for testing.

cd /etc/modsecurity/
mv modsecurity.conf-recommended modsecurity.conf
sed -i -e s/"SecRuleEngine DetectionOnly"/"SecRuleEngine On"/g modsecurity.conf

That's about it! If you want to run mod_security2 with the recommended ruleset, including the OWASP Top 10 rules – you are now done! You only need to restart Apache:

systemctl restart apache2

Done! Simple and easy!
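
If you want to convince yourself that the sed one-liner above really flips the engine flag before pointing it at the live config, you can replay it on a scratch copy (the mktemp path is just for the demo):

```shell
# Replay the substitution on a throwaway file instead of the real config
conf=$(mktemp)
printf 'SecRuleEngine DetectionOnly\n' > "$conf"
sed -i -e 's/SecRuleEngine DetectionOnly/SecRuleEngine On/g' "$conf"
grep 'SecRuleEngine' "$conf"   # prints: SecRuleEngine On
rm -f "$conf"
```

On the real box you can then confirm the module is actually loaded with `apachectl -M | grep security2` (assuming the standard Debian apache2 tooling).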

Copy the sshd_config file to a separate file:

cp /etc/ssh/sshd_config /etc/ssh/sshd_vhost_config

Append the following to the file:

AllowTCPForwarding no
ChrootDirectory /path/to/catalogue
ForceCommand internal-sftp

Match User user1
  ChrootDirectory /path/to/catalogue/user1

Match User user2
  ChrootDirectory /path/to/catalogue/user2
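
One gotcha worth spelling out: sshd refuses to chroot into a directory unless it (and every path component above it) is owned by root and not writable by group or others. A sketch of the expected layout, using /tmp/sftp-demo as a stand-in for the real chroot path:

```shell
# Stand-in for the ChrootDirectory paths above (illustrative only)
base=/tmp/sftp-demo
mkdir -p "$base/user1/upload" "$base/user2/upload"
# chroot dirs must be root-owned and not group/world-writable:
chmod 755 "$base" "$base/user1" "$base/user2"
# on a real system you would also run (as root):
#   chown root:root "$base" "$base"/user1 "$base"/user2
#   chown user1:user1 "$base/user1/upload"   # users write here instead
stat -c '%a' "$base/user1"   # prints: 755
```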

Also, you have to change the port, because we will run the SFTP server separately from the regular SSH service. So edit the following line:

Port 2222

Create a systemd service in /etc/systemd/system/sshvirtual.service:

[Unit]
Description=OpenBSD Secure Shell server for SFTP virtual users
After=network.target auditd.service
ConditionPathExists=!/etc/ssh/sshd_not_to_be_run  

[Service]
ExecStartPre=/usr/sbin/sshd -f /etc/ssh/sshd_vhost_config -t
ExecStart=/usr/sbin/sshd -D -f /etc/ssh/sshd_vhost_config
ExecReload=/usr/sbin/sshd -f /etc/ssh/sshd_vhost_config -t
ExecReload=/bin/kill -HUP $MAINPID
KillMode=process
Restart=on-failure 

[Install]
WantedBy=multi-user.target
Alias=sftp-sshd.service

You probably also want it to start when the system boots:

systemctl daemon-reload; systemctl enable sshvirtual; systemctl start sshvirtual

And now, you are able to connect to the SSH server on port 2222.

And the possible future it has

The TL;DR of this post is:
  • Linux.Pizza will not actively deploy new services
  • Linux.Pizza is going to discontinue some services within 12 months of this post
  • Linux.Pizza will focus on Mastodon, mirroring distros, and DNS.

You might wonder why – if so, please continue reading.

The short version is: I have realized that I am no longer able to deliver quality services. This is due to lack of time, lack of funding, and increased stress at my main job.

And the longer version: One year ago, I was “forced” to change jobs in order to make things work with my family – the kids started school and my wife returned to her studies. I couldn't work 40 minutes from home anymore and needed something closer to home.

So I switched, even if I hated the fact that I had to.

Anyway, the new job is great! As the only Systems Administrator I am responsible for everything IT, and I have a lot of freedom when it comes to the software stack the company will use. I recently deployed Nextcloud and Matrix, which has been great!

Family takes more time

My kids are getting bigger, and I have decided to spend more time with them instead of in front of the PC. I have realized that I don't want to miss the time that I have with my family, so I have to prioritize while I can.

Work takes a lot of time as well

My new role and the new job that I got have brought a lot of “unwanted” responsibilities – I tend to take things way too personally when it comes to IT at work. If something goes wrong, I blame myself a lot. And that needs to stop as well.

Linux.Pizza is not going to disappear

While Linux.Pizza is downscaling, it will not disappear! The social aspect of Mastodon has been very good and important, for me at least – I see it as a “premium social network”. It costs some money every month, but I think it is worth it, since I have gotten to know many good people from different cultures, geographic locations, religions and political backgrounds – and that has been very refreshing!

mirror.linux.pizza is also going to stay – it is an official mirror for many distros, and shutting it down would be very irresponsible.

FreeDNS is also going to stay active.

So in short – Linux.Pizza will offer some services, but only those that I want to run, and I will no longer wake up in the middle of the night to fix broken services as I have done the past years.

“If it ain't fun – don't do it!” – Someone on Mastodon

I hope that you understand, and if you are in need of other services similar to those Linux.Pizza has offered – please check out The Librehosters Network.

The first time I heard about the PineBook Pro was in the spring of 2019, when Pine64 posted their May update, which contained information about the PineBook Pro.

I have been able to try out the original PineBook, since one of my previous colleagues got one. She claimed it was a good buy and that she liked the machine. Well, considering that it only cost $99 – I think there is no real reason to think otherwise!

However, fast forward to March 2020. My own ThinkPad Helix broke down on me and I was suddenly without a laptop. That meant that I could no longer travel while I was “on call” at work, because I could no longer remote in to work when I needed to. I also had no place to store my stupid collection of webm's. And I was not willing to spend too much on a machine, so I had two options:
  • Get a used Librebooted ThinkPad
  • Get the PineBook Pro

The choice finally fell on the PineBook Pro, because I have started to get an urge to use non-x86 machines as my daily drivers, such as the PineBook Pro and the Blackbird POWER9 desktop from Raptor Computing. I've always had a weakness for stuff that is not used by too many people, like a specific car model in a specific color (like my old 2010 Mazda 3 in “Celestial Blue”) or plain Motorola phones (not at all popular in Sweden). That was the reason I started with Linux back in 2001: Windows was everywhere and I wanted to be different – lol.

I placed the order for the PineBook Pro on the 3rd of April 2020, together with some other essential stuff:
  • The PineBook Pro itself
  • USB-to-barrel connector for charging
  • PCI-E to M.2 adapter
  • USB-UART (serial) adapter

I did forget the USB eMMC reader, but that is something I could get a hold of via a local shop.

Finally, on June the 1st, I got the notification that the order had been shipped from Hong Kong. Pine64 has been very clear that there will be delays due to the current pandemic, and that is understandable.

Delivery

I got the order delivered to my work on June the 4th, since I spend my days there and not at home. Here is what the package looked like: package

(yes yes, that's my lunch)

Unboxing and first impression

I waited to open the package until I came home, since I wanted to show you what the packaging looks like and what you, as a possible future Pine64 customer, can expect – with some good music that has a high chance of making you feel nostalgic. Note: The embedded video is broken in some browsers, feel free to check out the video here.

The PineBook Pro looks slick, feels sturdy, and does not flex as much as you would expect from a $200 laptop. The rest of Day 1 was spent trying to like Manjaro as a system. Manjaro works very well on the PineBook Pro – it is snappy and looks great. If you are buying the PineBook just as a “browser + ssh” machine (as someone on the fediverse called it), I would recommend sticking with the Manjaro that is delivered with the PineBook Pro.

Day 2, Bye Manjaro – Hi Debian

I am not a fan of Manjaro, and trust me – I have really tried to like it! My personal feeling is that Manjaro is messy, but that is probably because I do not like Arch Linux at all. Anyway, I was thinking about switching over to Debian instead, since I am more used to it and the image has come a long way since the first version. I flashed a MicroSD card with this Debian image, booted it, and downloaded this script that installs Debian to the eMMC for you. The installation took 15-20 minutes for me, since I am blessed with a fast and stable internet connection. I did have trouble getting into the Desktop Environment on Bullseye (Testing), so I installed Debian Buster instead and that seems to have solved it. And since I want to use it as a daily driver, a stable system is not wrong :)

Day 3, why the (“%¤ does it take a day to charge the Pinebook?

One thing that has started to bother me is that the battery takes several hours to fully charge from zero. I have given that a lot of thought, and I think the reason is that I have become used to the fast charging that exists in most modern smartphones. The VERY BIG PLUS is that you can charge it several ways. You can use the official ROCKPRO PSU (the one that is stuck in my outlet), you can use a USB-to-barrel adapter, and you can also charge it over USB-C. The latter is a HUGE advantage and one of the biggest “cool factors” of the PineBook: it means that I can charge it on the go – with an ordinary powerbank, in my car, or at someone else's house even if I forgot my own PSU.

What do I like/dislike?

The keyboard

After a few days of typing on the machine, I have come to like the keyboard. It does not feel bad at all. Since it is an ISO keyboard with a physical UK layout, I can use it with a Swedish layout in Debian. Luckily, I am very used to typing, so I do not notice that the physical layout is different, since I don't look at the keyboard when I type. Writing this blog post feels great too!

Headphone jack?

I started to watch a movie on the PineBook with headphones plugged into the headphone jack, and suddenly all my kids came up to me and wondered what I was watching. I took my headphones off and realized that the sound was playing on the speakers and in my headphones at the same time. I realize that this is probably something that Debian Buster has issues with. I connected my Bluetooth headset instead and could watch in peace.

Charging takes many hours

I wrote about this earlier, but it is worth mentioning here too. Charging the PineBook Pro does take a very long time. I have tested the charger that arrives with the machine and other supposedly “stronger” USB chargers as well. I think the reason is that I have gotten used to fast-charging my phone, and the ability to wait is something we have lost over the last few years. Anyway, the battery lasts 7-9 hours of normal use on Debian Buster with maximum screen brightness and “tilda” running in fullscreen with tmux and a couple of ssh sessions – perfectly fine! Remember that you can charge it practically anywhere with almost any USB charger, whether it is wall-plugged, solar-driven or a powerbank. That makes this machine very portable and flexible. Perfect for the trip!

Closing words

I can compress my experience to this sentence: The more I use the PineBook Pro – The more I realize that THIS is the laptop I always wanted!

Wow, those are pretty big words! I will try to explain why. First of all, the PineBook Pro is the result of the hard work of the team over at Pine64. The machine has been made “as a community service” to provide a cheap, hackable and fun laptop to hackers, advanced users and pioneers on the AARCH64 platform. I really get the feeling that there is no greed for revenue, unlike at other companies – and that is worth supporting!

The machine is not made with planned obsolescence – the scary and sad trend going on at tech companies nowadays. You can buy every single part of this machine from the Pine64 shop, so you can repair it if you need to.

The community is great! I have been hanging out in the PineBook Pro chat on Matrix, and the folks over there are very helpful and excited about the product that Pine64 has released.

Lastly, I think most PineBook Pro users will love to use the Manjaro ARM that is shipped with the machine by default. Manjaro has done a great job of increasing the performance and stability of the builds, and it does not seem to stop! I will cover more aspects of the PineBook Pro in the future, like multimedia performance (such as video playback), simple gaming, USB-C docking capabilities and installation of an M.2 drive.

TLSA records – more commonly known as DANE (DNS-based Authentication of Named Entities) – are used to “bind” TLS certificates to a server. DANE is mostly used on email servers to secure communication between different servers. The reason DANE exists is to provide an additional layer of security and trust between server and client.

In this guide, I will walk you through the following steps:

  • How to check if an SMTP server uses DANE
  • How to configure Postfix to use DANE verification for outgoing email
  • Generating TLSA records
  • DANE + Let's Encrypt – a workaround

Currently, DANE is not widely deployed by big organisations and companies worldwide. Instead, smaller companies, organisations and individuals with more flexibility in their IT infrastructure have been able to contribute to SMTP security. The only really big company that has announced plans for a DANE implementation is Microsoft – they announced their plans in April 2020 and hope to finish the implementation in 2021. DANE also requires the domain to be DNSSEC-signed for it to work; there are some mail servers that can do DANE verification without DNSSEC (like Postfix), but I am not going to cover that today.

But you are not here to wait for that to happen! Let's get started!

Does this server have a TLSA-record deployed?

First, does your email server have a TLSA record deployed already? We can test it the simple way, with tools that already exist online, like this one from sys4 and the one from Shumon Huque.

Or, if you are like me – we will do the checks from the terminal with the tools our system provides.

Enter “dig”, a command found in the package “dnsutils” on Debian-based systems (“bind-utils” on RHEL-based ones). So let's check the TLSA record of the mail server of linux.pizza:

dig _25._tcp.kebab.linux.pizza TLSA +short

This gives us the following answer:

3 1 1 2B4685AC11110AC51D117607C0E58D98AF3FD9A417EF3B5B61210578 67D92111

So, what we just did here was check the host _25._tcp.kebab.linux.pizza for a TLSA record. The first part – _25. – represents the port. The second part – _tcp. – represents the protocol. The third part – kebab.linux.pizza – is the actual hostname of the server.
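
To see where the digest in the record comes from: a record starting with “3 1 1” means DANE-EE (3), selector “public key” (1) and matching type SHA-256 (1), i.e. the SHA-256 of the certificate's SubjectPublicKeyInfo. Here is a sketch against a throwaway self-signed certificate; for a live mail server you would instead fetch the certificate with `openssl s_client -starttls smtp -connect kebab.linux.pizza:25`:

```shell
# Make a throwaway key + cert purely for illustration
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=tlsa-demo" \
  -keyout /tmp/tlsa-demo.key -out /tmp/tlsa-demo.crt -days 1 2>/dev/null
# Hash the public key (selector 1) with SHA-256 (matching type 1)
openssl x509 -in /tmp/tlsa-demo.crt -noout -pubkey \
  | openssl pkey -pubin -outform DER \
  | openssl dgst -sha256 -r \
  | cut -d' ' -f1
# the 64 hex characters printed are the last field of a "3 1 1" TLSA record
```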

Deploy DANE-verification in postfix

This is probably the easiest step of them all: add this to your main.cf file:

smtp_tls_security_level = dane
smtp_dns_support_level = dnssec

Now, Postfix validates DANE for outgoing SMTP connections – nice!

Let's Encrypt + DANE

Since the hash in the TLSA record is derived from the key pair (a “3 1 1” record hashes the certificate's public key), it does not play well with Let's Encrypt out of the box: Certbot (the most used tool to deploy Let's Encrypt) generates a new private key every time a certificate is requested, and a new key means a new hash.

We will generate the certificate using Certbot, feel free to use whatever client you'd like. Just keep in mind that you have to reuse the same .csr.

We will have to do the issuing and renewal via HTTP/HTTPS, so I assume that you have a web server installed on the machine. Create this config file and place it in a good location (like /usr/share/etc/leconfig/mx.your.hostname):

domains = mx.your.hostname
webroot-path = /path/to/webserver/root
 
rsa-key-size = 4096
email = info@your.hostname
text = True
authenticator = webroot
renew-by-default = true
agree-tos = true

Now, issue your initial certificate:

certbot -c /usr/share/etc/leconfig/mx.your.hostname certonly

Once the certificate has been issued, you can find it in /etc/letsencrypt/live/mx.your.hostname. For sanity's sake, we will copy the entire folder to another location:

mkdir -p /usr/local/etc/letsencrypt/live/
cp -r /etc/letsencrypt/live/mx.your.hostname /usr/local/etc/letsencrypt/live/

Let's also copy the .csr file (most important!). Assuming that this is the first certificate issued, take the one starting with 0000; otherwise you can match the csr timestamp with the certificate you just generated:

cp /etc/letsencrypt/csr/0000_csr-certbot.pem /usr/local/etc/letsencrypt/live/mx.your.hostname/mx.your.hostname.csr

And let's modify the configuration file we created before accordingly, in order to tell Certbot where the .csr file is and where to place the certificate:

domains = mx.your.hostname
webroot-path = /path/to/webserver/root

csr = /usr/local/etc/letsencrypt/live/mx.your.hostname/mx.your.hostname.csr
cert-path = /etc/letsencrypt/live/mx.your.hostname/cert.pem
fullchain-path = /etc/letsencrypt/live/mx.your.hostname/fullchain.pem
chain-path = /etc/letsencrypt/live/mx.your.hostname/chain.pem

rsa-key-size = 4096
email = info@your.hostname
text = True
authenticator = webroot
renew-by-default = true
agree-tos = true

You can try reissuing the certificate with:

certbot -c /usr/share/etc/leconfig/mx.your.hostname certonly

Awesome! Your certificate should have been renewed with the same .csr file and private key. Now we can proceed to configure Postfix to use the certificate, private key and intermediate certificate. Look for the following lines:

smtpd_tls_key_file
smtpd_tls_cert_file
smtpd_tls_CAfile

And we will add the paths to the key and certificate chain:

smtpd_tls_key_file = /usr/local/etc/letsencrypt/live/mx.your.hostname/privkey.pem
smtpd_tls_cert_file = /etc/letsencrypt/live/mx.your.hostname/cert.pem
smtpd_tls_CAfile = /etc/letsencrypt/live/mx.your.hostname/chain.pem

Restart postfix, and you are ready for the next step!

Generate your own TLSA-record

We will use the “hash-slinger” package – it is very simple! Just issue the following on any computer that has an HTTPS connection to your mail server:

tlsa --create mx.your.hostname

You will get something like this:

_443._tcp.mx.your.hostname. IN TLSA 3 0 1 54f3fd877632a41c15b0ff4e50e254ed8d1873486236dc6cd5e9c1c1993d1e4e

Perfect, you now have the record that you should deploy at your DNS provider, with a slight modification:

_25._tcp.mx.your.hostname. IN TLSA 3 0 1 54f3fd877632a41c15b0ff4e50e254ed8d1873486236dc6cd5e9c1c1993d1e4e

Notice how we changed the first part – the port. After you have published your record, wait a little while and check whether it is valid with this tool.

Thank you for making Email awesome again!

End

I hope that you found this little guide helpful! Let me know what you think, hook me up on Mastodon on @selea@social.linux.pizza

With great power comes great responsibility – so let's abuse the power we have as sysadmins in the companies we work for!

I do assume that your colleagues have a sense of humor!

Randomly let the computer talk

This script makes the computer say “good morning” through the speakers, at a random interval between 20 and 360 minutes.

Set sapi = CreateObject("sapi.spvoice")
Randomize
message = "good morning"
max = 360
min = 20
' initial random delay of 20-360 minutes
skew = Int((max - min + 1) * Rnd + min)
WScript.Sleep skew * 60000
Do
    sapi.Speak message
    ' wait another random 20-360 minutes before the next greeting
    skew = Int((max - min + 1) * Rnd + min)
    WScript.Sleep skew * 60000
Loop

Eject the CD-ROM drive every 3 seconds

Set oWMP = CreateObject("WMPlayer.OCX.7")
Set ArrCDROM = oWMP.cdromCollection
' toggle the first CD-ROM tray forever
While True
    WScript.Sleep 3000
    ArrCDROM.Item(0).Eject
    WScript.Sleep 3000
    ArrCDROM.Item(0).Eject
Wend

This script changes the playing song on Spotify:

Set WshShell = WScript.CreateObject("WScript.Shell")
' spotify or user_id_number if playlist is private. ID can be found by using Spotify's Share button
WshShell.Run "spotify:user:<spotify/user_id_number>:playlist:<playlist_code>", 3, false
WScript.Sleep 20000
' Change active window
WshShell.AppActivate "Spotify"
' Start playing selected queue
WshShell.SendKeys " "
' Focus?
WshShell.SendKeys "{ENTER}"
WScript.Sleep 100
' Skip to the next track
WshShell.SendKeys "^{RIGHT}"
WScript.Quit 0

Not really the typical “Linux sysadmin things” this time, but if you work as a sysadmin at a company, these can be gold! If you have any other ones, let me know and I will add them here and credit you :)

And stickers

I am not a person who is good at wrapping messages up in nice words and persuasive language, especially when it comes to money (I would never be a good salesperson). But we want to be transparent with our stuff and what we do, so that requires us to write about it too. I hate begging for $, and would much rather finance everything from my own pocket, but that is not really possible and would also not be that good for our users.

Running Linux.Pizza is not free – in fact it costs several hundred dollars every year (close to $1000/year), not including the time we spend on maintenance and support. (A breakdown of the cost is further down.)

Also, Linux.Pizza would really love to add new services and functionality to its portfolio, but due to limited economic resources we are not really able to make that a priority – with your help, we can change that!

LinuxPizza received donations worth $200 during 2019, which covered the cost of operations for 2.5 months – we are extremely thankful for the generosity of our users!

So today, we are launching a small campaign to encourage everyone who has the means and wants to donate to do just that. And as a small thank you, we will ship you a couple of stickers that you can stick anywhere you want! For example: your laptop, your car (or your parents' car) or your sticker wall (everyone has one, right?).

So in order to keep it realistic, everyone who donates at least $10 is eligible for a small “sticker pack” as shown in the pictures. Just let us know whom we should ship the stickers to after the donation is made! If you have the means and the will – head over to the Liberapay page or PayPal page. If you are a Brave user, you can always send a tip :) Send us an email when you have made a donation, with your email and address.

100% of the donations will go back to the Linux.Pizza project and nothing else.

Linux.Pizza Stickers! NOTE: You will receive a couple of stickers from each pile social.linux.pizza stickers!

Breakdown of the cost per year

Domains:
  1. linux.pizza – $36
  2. pixelfed.se – $12
Infrastructure:
  1. DNS (this includes the FreeDNS environment) – $144
  2. Pixelfed instance – $60
  3. Mastodon instance – $240
  4. Temporary email service – $110
  5. Power consumption and Internet connection – $150
  6. Mirror for various distros and software – $300
    • Most of this cost is sponsored by operationtulip
  7. CDN for the Mastodon instance – $20

Some of the stuff that we would like to get started with:

  1. Nextcloud
    • This would require us to get more storage, like hard drives or SSDs.
  2. Email service
    • Technically, we could rent a cheap VPS at some provider and get started. But that would certainly be hard, due to the fact that GAFAM marks mail as spam unless it comes from a clean network. And cleaner networks/ISPs tend to cost more.
  3. Peertube?
    • We have gotten the question a couple of times, but we are unsure how it would fit into Linux.Pizza's services. It would require more storage anyway.

I have a custom application that my wife wrote for one of her personal projects. It turns out that the application crashes after 50-70 hours of uptime, and neither of us has the time or the knowledge yet to debug it.

And that application is not that important either, it is just a website that displays various articles and pictures.

So, to sweep the problem under the rug, I simply configured the system to restart the application every 4 hours.

First, I create a service named “custom-application-restart”:

vi /etc/systemd/system/custom-application-restart.service
[Unit]
Description=restart custom application

[Service]
Type=oneshot
ExecStart=/bin/systemctl restart custom-application

Next, we have to add a timer unit. Note that the name of the timer unit must exactly match the name of the restart service, except that we swap “service” for “timer”:

vi /etc/systemd/system/custom-application-restart.timer
[Timer]
OnActiveSec=4h
OnUnitActiveSec=4h

[Install]
WantedBy=timers.target
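
As an aside: OnUnitActiveSec gives you “4 hours after the last activation”. If you would rather restart at fixed clock times, the systemd calendar syntax can express that instead (a sketch, not from the original setup):

```ini
[Timer]
# fire at 00:00, 04:00, 08:00, ... every day
OnCalendar=*-*-* 0/4:00:00

[Install]
WantedBy=timers.target
```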

Now, you should do the following:

systemctl daemon-reload
systemctl enable custom-application-restart.timer
systemctl start custom-application-restart.timer

Now, you should see your newly added timer-service in this list:

systemctl list-timers --all
NEXT                         LEFT       LAST                         PASSED       UNIT                              ACTIVATES
Mon 2020-02-17 11:01:42 UTC  50min left Mon 2020-02-17 07:01:42 UTC  3h 9min ago  custom-application-restart.timer  custom-application-restart.service

Galera is part of MariaDB and enables active/active/active replication of databases between servers. While it doesn't necessarily provide any performance gains, it enables HA for the databases.

This guide assumes that you run Debian 10, which comes with MariaDB 10.3

Install MariaDB 10.3

    apt-get update
    apt-get install mariadb-server galera

Configuration

It is always STRONGLY recommended to run an odd number of nodes, and at least three nodes. This is to avoid split-brain and a lot of headache and frustration in the future. Please, just set up three nodes and don't bother with a 2-node cluster.

Sure, the more servers you add, the slower the writes will be. So it is recommended to go with at least 3 nodes.

Galera configuration

In order to create our Galera cluster, we have to create the file /etc/mysql/conf.d/galera.cnf and add the following content. Just be sure to edit it to fit your needs (wsrep_node_address must be each node's own IP):

    [mysqld]
    binlog_format=ROW
    default-storage-engine=innodb
    innodb_autoinc_lock_mode=2
    innodb_doublewrite=1
    query_cache_size=0
    query_cache_type=0
    bind-address=0.0.0.0
    wsrep_on=ON
    wsrep_provider=/usr/lib/galera/libgalera_smm.so
    wsrep_cluster_name="galera1"
    wsrep_cluster_address=gcomm://192.168.2.11,192.168.2.12,192.168.2.13
    wsrep_sst_method=rsync
    wsrep_node_address=192.168.2.11

You might want to edit the “listen” address for the MariaDB installation, it is usually found in /etc/mysql/mariadb.cnf.

Configure the other servers accordingly. To bootstrap the cluster, run galera_new_cluster on the first node, then execute systemctl restart mariadb on the remaining nodes.

Now, you can try to create a database on one node:

    create database testdb;

And you should be able to see it from the other nodes:

    show databases;
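
Assuming replication is healthy, every node should also agree on the member count; from any node's MySQL prompt:

```sql
-- On a healthy three-node cluster this reports wsrep_cluster_size = 3
SHOW STATUS LIKE 'wsrep_cluster_size';
```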