Image Optimization for PageSpeed Insights

On Debian/Ubuntu systems install required packages:

apt install -y advancecomp optipng pngcrush jpegoptim

Create a bash script at

nano /usr/local/bin/optimizeImages

and paste:

#!/bin/bash

# Lossless PNG passes: recompress (optipng), extra deflate pass (advpng),
# strip ancillary chunks (pngcrush); then strip JPEG metadata (jpegoptim).
find . -type f -iname "*.png" -exec optipng -nb -nc {} \;
find . -type f -iname "*.png" -exec advpng -z4 {} \;
find . -type f -iname "*.png" -exec pngcrush -rem gAMA -rem alla -rem cHRM -rem iCCP -rem sRGB -rem time -ow {} \;
find . -type f \( -iname "*.jpg" -o -iname "*.jpeg" \) -exec jpegoptim -f --strip-all {} \;


Save the file (in nano: Ctrl+O to write, Ctrl+X to exit). Then make the script executable:

chmod a+x /usr/local/bin/optimizeImages

The running time depends on the folder size. Go to your image folder, for example:

cd /var/www/myproject/web/images/

and run

optimizeImages

This can take a long time, so be patient. That's all!
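For large folders, the one-process-per-file -exec calls above are slow. A parallel sketch using GNU xargs (-P runs jobs in parallel, -r skips the command when no files match); the pngcrush -ow pass is left out on purpose, and the same tools as above are assumed to be installed:

```shell
# PNG: parallel optipng pass, one job per CPU thread
find . -type f -iname "*.png" -print0 | xargs -0 -r -n1 -P"$(nproc)" optipng -nb -nc
# JPEG: parallel jpegoptim pass
find . -type f \( -iname "*.jpg" -o -iname "*.jpeg" \) -print0 | xargs -0 -r -n1 -P"$(nproc)" jpegoptim -f --strip-all
```

The -print0 / -0 pair keeps file names with spaces safe.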


CHIA BC

I was fascinated; the idea sounds good at first: if you have free hard-disk space available, you could make money with it.

I made a couple of so-called «plots», six in total. No problem, there were still over 4 TB free on my NAS. And wow, only a short three-year waiting period.

With a single plot, the estimated time to win was 11 years, and a week later already 13 years. The week before that, it had still been 7 years with one plot:

In the time it took to create my plots, from the first to the sixth, the network grew from 1.8 EiB to 3.3 EiB! Meanwhile there is enough market capitalization, and Chia went onto the coin exchanges.

But then I took a look at the stats of my new NVMe. And now I'm a little scared:

root@heiteira /home/breit # smartctl -a /dev/nvme0

smartctl 7.0 2019-05-21 r4917 [x86_64-linux-5.3.18-57-default] (SUSE RPM)
Copyright (C) 2002-18, Bruce Allen, Christian Franke, www.smartmontools.org
=== START OF INFORMATION SECTION ===
Model Number:                       Samsung SSD 980 PRO 1TB
Serial Number:                      S5GXNF0R343228M
Firmware Version:                   2B2QGXA7
PCI Vendor/Subsystem ID:            0x144d
IEEE OUI Identifier:                0x002538
Total NVM Capacity:                 1.000.204.886.016 [1,00 TB]
Unallocated NVM Capacity:           0
Controller ID:                      6
Number of Namespaces:               1
Namespace 1 Size/Capacity:          1.000.204.886.016 [1,00 TB]
Namespace 1 Utilization:            677.679.161.344 [677 GB]
Namespace 1 Formatted LBA Size:     512
Namespace 1 IEEE EUI-64:            002538 b311b16dc1
Local Time is:                      Mon May 10 21:13:29 2021 CEST
Firmware Updates (0x16):            3 Slots, no Reset required
Optional Admin Commands (0x0017):   Security Format Frmw_DL Self_Test
Optional NVM Commands (0x0057):     Comp Wr_Unc DS_Mngmt Sav/Sel_Feat Timestmp
Maximum Data Transfer Size:         128 Pages
Warning  Comp. Temp. Threshold:     82 Celsius
Critical Comp. Temp. Threshold:     85 Celsius
Supported Power States
St Op     Max   Active     Idle   RL RT WL WT  Ent_Lat  Ex_Lat
 0 +     8.49W       -        -    0  0  0  0        0       0
 1 +     4.48W       -        -    1  1  1  1        0     200
 2 +     3.18W       -        -    2  2  2  2        0    1000
 3 -   0.0400W       -        -    3  3  3  3     2000    1200
 4 -   0.0050W       -        -    4  4  4  4      500    9500
Supported LBA Sizes (NSID 0x1)
Id Fmt  Data  Metadt  Rel_Perf
 0 +     512       0         0
=== START OF SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
SMART/Health Information (NVMe Log 0x02)
Critical Warning:                   0x00
Temperature:                        34 Celsius
Available Spare:                    100%
Available Spare Threshold:          10%
Percentage Used:                    0%
Data Units Read:                    32.742.182 [16,7 TB]
Data Units Written:                 29.863.229 [15,2 TB]
Host Read Commands:                 113.055.126
Host Write Commands:                88.847.725
Controller Busy Time:               590
Power Cycles:                       9
Power On Hours:                     238
Unsafe Shutdowns:                   2
Media and Data Integrity Errors:    0
Error Information Log Entries:      0
Warning  Comp. Temperature Time:    0
Critical Comp. Temperature Time:    0
Temperature Sensor 1:               34 Celsius
Temperature Sensor 2:               44 Celsius
Error Information (NVMe Log 0x01, max 64 entries)
No Errors Logged

Data Units Read: 32.742.182 [16,7 TB]
Data Units Written: 29.863.229 [15,2 TB]

For just 0.6 TB of plots on my NAS. No NVMe will survive this for long!

You should think twice about whether it is worth it; after all, how much space can you really give up permanently?

How many hard drives does 5.1 EiB correspond to worldwide (as of May 15, 2021)? A quick recalculation:

1 EiB ~= 1152921 TB and 504 GB

Now multiply the result by five. And are they all connected to this blockchain at this moment? Are they all online just for that?
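The conversion is easy to check with shell arithmetic (1 EiB = 2^60 bytes, 1 TB = 10^12 bytes):

```shell
# 1 EiB in whole decimal terabytes:
echo $(( (1 << 60) / 10**12 ))      # 1152921
# and 5 EiB:
echo $(( 5 * (1 << 60) / 10**12 ))  # 5764607
```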

A new hype and a big black hole of pointless resource waste. Chia is not «greener» than Bitcoin. It is not as if ordinary people were sacrificing their spare disk space to generate money: with their few TB they have no realistic chance of being selected (the Chia selection process). Instead it will play out so that there are further hardware and chip shortages, because greedy companies are already stocking up on hardware to mine this new cryptocurrency. And just as with graphics cards, the prices for NVMe and SSD drives will soon explode.


easy XMRIG on Debian Buster

To get started easily, the following lines may help:

cd /opt
apt update
apt install -y git cmake build-essential libhwloc-dev libuv1-dev libssl-dev
git clone https://github.com/xmrig/xmrig.git
mkdir xmrig/build
cd xmrig/build
cmake ..
make -j"$(nproc)"   # one job per CPU thread
ln -s /opt/xmrig/build/xmrig /usr/local/bin/

Now we can just run xmrig. As an example:

xmrig \
-a cryptonight \
-o stratum+tcp://randomxmonero.eu.nicehash.com:3380 \
-u <YourHolyWalletHash> \
-p <YourHolyPassword> \
--donate-level=5 \
--threads=4
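To keep the miner running after logout, a systemd unit is one option. A minimal sketch (the unit name and service options are assumptions, not from the original post); save it as /etc/systemd/system/xmrig.service:

```ini
[Unit]
Description=XMRig miner
Wants=network-online.target
After=network-online.target

[Service]
# Same parameters as the example invocation above
ExecStart=/usr/local/bin/xmrig -o stratum+tcp://randomxmonero.eu.nicehash.com:3380 -u <YourHolyWalletHash> -p <YourHolyPassword> --donate-level=5 --threads=4
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Then enable and start it with: systemctl enable --now xmrig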


Playtime with scrapy

Example spider file – breit_work.py:

# -*- coding: utf-8 -*-
import scrapy


class BreitWorkSpider(scrapy.Spider):
    name = 'breit.work'
    allowed_domains = ['breit.work']
    start_urls = ['https://breit.work/']

    def parse(self, response):
        for title in response.css('h3'):
            yield {'title 1': title.css('a ::text').get()}

        for title in response.css('h2'):
            yield {'title 2': title.css('a ::text').get()}

        for next_page in response.css('a.next-posts-link'):
            yield response.follow(next_page, self.parse)

Run it with:

scrapy runspider breit_work.py

Example output:

2020-10-13 11:05:26 [scrapy.utils.log] INFO: Scrapy 1.8.0 started (bot: scrapybot)
2020-10-13 11:05:26 [scrapy.utils.log] INFO: Versions: lxml 4.5.2.0, libxml2 2.9.10, cssselect 1.1.0, parsel 1.6.0, w3lib 1.22.0, Twisted 20.3.0, Python 2.7.16 (default, Oct 10 2019, 22:02:15) - [GCC 8.3.0], pyOpenSSL 19.1.0 (OpenSSL 1.1.1d  10 Sep 2019), cryptography 2.6.1, Platform Linux-4.19.0-10-amd64-x86_64-with-debian-10.5
2020-10-13 11:05:26 [scrapy.crawler] INFO: Overridden settings: {'SPIDER_LOADER_WARN_ONLY': True}
2020-10-13 11:05:26 [scrapy.extensions.telnet] INFO: Telnet Password: ~not4YouNoob*
2020-10-13 11:05:26 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.memusage.MemoryUsage',
 'scrapy.extensions.logstats.LogStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.corestats.CoreStats']
2020-10-13 11:05:26 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2020-10-13 11:05:26 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2020-10-13 11:05:26 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2020-10-13 11:05:26 [scrapy.core.engine] INFO: Spider opened
2020-10-13 11:05:26 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2020-10-13 11:05:26 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2020-10-13 11:05:26 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (301) to <GET https://www.breit.work/> from <GET https://breit.work/>
2020-10-13 11:05:26 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.breit.work/> (referer: None)
2020-10-13 11:05:27 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.breit.work/>
{'title 1': u'Impress'}
2020-10-13 11:05:27 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.breit.work/>
{'title 2': u'Boston Dynamics spielt mit dem Krieg'}
2020-10-13 11:05:27 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.breit.work/>
{'title 2': u'It\u2019s done \u2026 Bye bye Britain !'}
2020-10-13 11:05:27 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.breit.work/>
{'title 2': u'Play with redis-server'}
2020-10-13 11:05:27 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.breit.work/>
{'title 2': u'dd with a nice progress indicator'}
2020-10-13 11:05:27 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.breit.work/>
{'title 2': u'XEN 4.11 on Debian 10'}
2020-10-13 11:05:27 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.breit.work/>
{'title 2': u'Redis 6.0 is out'}
2020-10-13 11:05:27 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.breit.work/>
{'title 2': u'Setup Pageflow Storytelling with NGINX & Lets Encrypt'}
2020-10-13 11:05:27 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.breit.work/>
{'title 2': u'MySQL 8 \u2013 create Database with own user'}
2020-10-13 11:05:27 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.breit.work/>
{'title 2': u'Schade REAL, das war es leider'}
2020-10-13 11:05:27 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.breit.work/>
{'title 2': u't\xe4gliche Mysql-Backups'}
2020-10-13 11:05:27 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.breit.work/>
{'title 2': u'Setup Redis-Server 6.0.0 Beta'}
2020-10-13 11:05:27 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.breit.work/>
{'title 2': u'Lustiges beim Server Einrichten'}
2020-10-13 11:05:27 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.breit.work/>
{'title 2': None}
2020-10-13 11:05:27 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.breit.work/>
{'title 2': None}
2020-10-13 11:05:27 [scrapy.core.engine] INFO: Closing spider (finished)
2020-10-13 11:05:27 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 422,
 'downloader/request_count': 2,
 'downloader/request_method_count/GET': 2,
 'downloader/response_bytes': 26080,
 'downloader/response_count': 2,
 'downloader/response_status_count/200': 1,
 'downloader/response_status_count/301': 1,
 'elapsed_time_seconds': 0.361133,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2020, 10, 13, 11, 5, 27, 61229),
 'item_scraped_count': 15,
 'log_count/DEBUG': 17,
 'log_count/INFO': 10,
 'memusage/max': 51576832,
 'memusage/startup': 51576832,
 'response_received_count': 1,
 'scheduler/dequeued': 2,
 'scheduler/dequeued/memory': 2,
 'scheduler/enqueued': 2,
 'scheduler/enqueued/memory': 2,
 'start_time': datetime.datetime(2020, 10, 13, 11, 5, 26, 700096)}
2020-10-13 11:05:27 [scrapy.core.engine] INFO: Spider closed (finished)


dd with a nice progress indicator

Example for block devices:


You may need to install the following additional packages: sudo, pv

DISKSIZE=$(sudo blockdev --getsize64 /dev/vol0/target-disk) && sudo dd bs=4096 if=/dev/vol0/target-disk | pv -s "$DISKSIZE" | gzip -9 > target-disk.img.gz
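Restoring the image later is the reverse pipeline (a sketch; adjust the device path to your setup):

```shell
# Decompress the image and write it back to the block device, with progress:
gunzip -c target-disk.img.gz | pv | sudo dd of=/dev/vol0/target-disk bs=4096
```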


XEN 4.11 on Debian 10

Extended Setup with additional bpo linux-image

Setup for Hetzner dedicated Servers

To set up the XEN hypervisor on a freshly installed Hetzner server, carry out the following steps. First, edit /etc/apt/sources.list and add:

# buster backports
deb http://http.debian.net/debian buster-backports main

then:

apt update  
apt -y dist-upgrade
apt -y install htop mc rsync screen nload net-tools locate aptitude linux-headers-5.4.0-0.bpo.2-amd64 linux-support-5.4.0-0.bpo.2 linux-image-5.4.0-0.bpo.2-amd64 bash-completion xenstore-utils xen-utils-common xen-utils-4.11 xen-tools xen-hypervisor-4.11-amd64 libxenstore3.0 grub-xen-bin

adjust bootloader:

mv /etc/grub.d/10_linux /etc/grub.d/20_linux && mv /etc/grub.d/20_linux_xen /etc/grub.d/10_linux_xen

Edit the file /etc/default/grub and replace the lines:

GRUB_CMDLINE_LINUX_DEFAULT="nomodeset consoleblank=0"  
GRUB_CMDLINE_LINUX=""

with:

GRUB_CMDLINE_LINUX_DEFAULT="console=tty0 acpi=ht"  
GRUB_CMDLINE_XEN_DEFAULT="dom0_mem=512M dom0_max_vcpus=1"  
GRUB_CMDLINE_LINUX=""

This gives Dom0 one vCPU and 512 MB RAM, which is sufficient for normal operation.
Finally, edit the file /etc/default/grub.d/xen.cfg and change the line:

# XEN_OVERRIDE_GRUB_DEFAULT=0

to

XEN_OVERRIDE_GRUB_DEFAULT=1

Now update the bootloader with the new configuration:

update-grub2

Set up a network bridge

An example standard configuration file from Hetzner can look like this:

### Hetzner Online GmbH installimage  

source /etc/network/interfaces.d/*  

auto lo  
iface lo inet loopback  
iface lo inet6 loopback  

auto enp34s0
iface enp34s0 inet static  
address 135.181.x.x  
netmask 255.255.255.192  
gateway 135.181.x.x

# route 135.181.3.0/26 via 135.181.3.1  
up route add -net 135.181.3.0 netmask 255.255.255.192 gw 135.181.3.1 dev enp34s0  

iface enp34s0 inet6 static  
address 2a01:4f9:4b:4fe6::2  
netmask 64

and should be changed as in the following example:

### Hetzner Online GmbH installimage  

source /etc/network/interfaces.d/*  

auto lo  
iface lo inet loopback  
iface lo inet6 loopback  

allow-hotplug enp34s0  
iface enp34s0 inet manual  
iface enp34s0 inet6 manual  

auto br0  
iface br0 inet static  
address 135.181.3.25  
netmask 255.255.255.192  
gateway 135.181.3.1  
bridge_ports enp34s0  
bridge_stp off  

# route 135.181.3.0/26 via 135.181.3.1  
# up route add -net 135.181.3.0 netmask 255.255.255.192 gw 135.181.3.1 dev enp34s0  

iface br0 inet6 static  
address 2a01:4f9:4b:4fe6::2  
netmask 64

That's all. Now reboot the server:

reboot && exit
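After the reboot, xl info should list the running hypervisor, and a first DomU can be defined. A minimal PV guest sketch (guest name, volume path and config location are hypothetical; the bridge br0 comes from the network setup above), e.g. /etc/xen/vm1.cfg:

```
# Minimal PV guest booting via PvGrub2 (shipped by the grub-xen-bin package)
name   = "vm1"
kernel = "/usr/lib/grub-xen/grub-x86_64-xen.bin"
memory = 2048
vcpus  = 2
disk   = ['phy:/dev/vol0/vm1-disk,xvda,w']
vif    = ['bridge=br0']
```

Start it with: xl create /etc/xen/vm1.cfg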


Setup Pageflow Storytelling with NGINX & Lets Encrypt

It was not exactly easy, but not impossible either. Since the available documentation is not the best and partly incomplete, here is my own guide for a self-hosted Pageflow server:


# SETUP PAGEFLOW - Debian 10
#
# To set up Pageflow on a freshly installed Debian 10, perform the following steps:
#
apt update
apt install sudo lsb-release bash-completion htop net-tools aptitude mc rsync screen nload locate wget git build-essential ruby2.5-dev ruby-rails rails redis-server imagemagick imagemagick-common imagemagick-doc libsqlite3-dev libodb-sqlite-dev zlib1g-dev zlibc libghc-bzlib-dev libcrypto++-dev libssl-dev nginx -y
cd /tmp
wget https://dev.mysql.com/get/mysql-apt-config_0.8.13-1_all.deb
dpkg -i mysql-apt-config_0.8.13-1_all.deb
apt update
apt install mysql-server default-libmysqlclient-dev default-libmysqld-dev -y
git clone https://github.com/codevise/pageflow.git
mkdir /opt/pageflow
rsync -av pageflow/ /opt/pageflow/
cd /opt/pageflow/

mysql -p

create database ownpageflow;
create user 'ownpageflow'@'localhost' IDENTIFIED BY 'My$secRetPassw0rd';
grant all on *.* TO 'ownpageflow'@'localhost';
flush privileges;
quit;

sudo gem install mysql2 -v '0.5.3'
sudo gem install rake
sudo gem install rainbow -v '2.2.2'
sudo gem install listen
sudo bundle install

# Prepare the Rails start/stop helpers

echo "cd /opt/pageflow/ownpageflow/ && foreman start" > .foreman.start
echo "cat /opt/pageflow/ownpageflow/tmp/pids/server.pid | xargs kill -9  > /dev/null && rm -rf /opt/pageflow/ownpageflow/tmp/pids/server.pid  > /dev/null" > .foreman.stop
chmod a+x .foreman*

echo '#!/bin/bash

screen -d -m -S pageflow bash /opt/pageflow/ownpageflow/.foreman.start &> /dev/null' > /usr/local/bin/pageflow.start

echo '#!/bin/bash

/opt/pageflow/ownpageflow/.foreman.stop' > /usr/local/bin/pageflow.stop

chmod a+x /usr/local/bin/pageflow*

echo '[Unit]
Description=Pageflow application server
After=network.target

[Service]
Type=forking
ExecStart=/usr/local/bin/pageflow.start
ExecStop=/bin/kill -s TERM $MAINPID
PIDFile=/opt/pageflow/ownpageflow/tmp/pids/server.pid
TimeoutStopSec=0
Restart=always
User=root
Group=root

UMask=007
PrivateTmp=yes
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target
Alias=pageflow.service' > /lib/systemd/system/pageflow.service

systemctl enable pageflow.service

systemctl start pageflow.service

# Create the NGINX configuration

echo 'server {
  listen 80 deferred;
  server_name myhost.de;
  root /opt/pageflow/ownpageflow;

  location / {
    try_files $uri/index.html $uri.html $uri @app;
  }

  location @app {
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header Host $http_host;
    proxy_redirect off;
    proxy_pass http://localhost:5000;
  }
}' > /etc/nginx/sites-available/myhost.cfg

ln -s /etc/nginx/sites-available/myhost.cfg /etc/nginx/sites-enabled

service nginx restart

# Setup LETS ENCRYPT

cd /usr/local/bin
wget https://dl.eff.org/certbot-auto
chmod a+x certbot-auto
./certbot-auto --install-only

# Now run certbot again and follow the prompts to verify myhost.de with Let's Encrypt, then restart the web server

certbot-auto

service nginx restart

# Pageflow should now be reachable in the browser under the configured domain myhost.de


##############################################
# CUSTOMIZATION: AWS / S3 / Amazon Services
##############################################
#
# To use AWS / S3 / Amazon services, the file
#
# ./config/initializers/pageflow.rb
#
# has to be adjusted.
#
# Afterwards the Pageflow service should be restarted.
#
# If everything is set up correctly, you can now manage media via Pageflow.



MySQL 8 – create Database with own user

The syntax for creating a MySQL user has changed, so here is a short note on it:

mysql -p
create database mydatabase;
create user 'myuser'@'localhost' IDENTIFIED BY 'My$secRetPassw0rd';
grant all on mydatabase.* TO 'myuser'@'localhost';
flush privileges;
exit
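To verify the new account afterwards, the grants can be inspected from the mysql client (this check is an addition, not part of the original note):

```sql
-- Show the privileges granted to the new account:
SHOW GRANTS FOR 'myuser'@'localhost';
```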


Daily MySQL backups

If the MySQL root user has a password set, you should first create a password file for MySQL.

Create the file

/root/.my.cnf

Content:

[mysqldump]
user=root
password=YourPassW0rd
[mysql]
user=root
password=YourPassW0rd

Set permissions:

chmod 0600 /root/.my.cnf

The MySQL backup script is created at

/usr/local/bin/mysqlbackup

with the following content:

#!/bin/bash

USER="root"
BDIR="/home/backup/mysql"
DATE=$(date +%Y-%m-%d)

# Create the backup directory if missing
if [ ! -d "${BDIR}" ]
then
        mkdir -p "${BDIR}"
fi

# Dump every database into its own dated, compressed file
for i in $(/usr/bin/mysql -u${USER} -e 'show databases' | grep -vw "Database")
do
        if [ ! -d "${BDIR}/${i}" ]
        then
                mkdir "${BDIR}/${i}"
        fi
        /usr/bin/mysqldump -u${USER} -c -Q --default-character-set=latin1 ${i} > "${BDIR}/${i}/${i}-${DATE}.sql"
        /bin/gzip -9 "${BDIR}/${i}/${i}-${DATE}.sql"
done

# Remove backups older than three days
/usr/bin/find "${BDIR}" -mtime +3 -type f -exec rm {} \; 2> /dev/null

Make the script executable:

chmod a+x /usr/local/bin/mysqlbackup

You should now be able to run the mysqlbackup command system-wide. To run it daily, a symlink into /etc/cron.daily is enough:

ln -s /usr/local/bin/mysqlbackup /etc/cron.daily/
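Restoring one of these dumps is the reverse direction (the database and file names here are examples following the naming scheme above):

```shell
# Decompress the dump and pipe it into the target database:
gunzip -c /home/backup/mysql/mydb/mydb-2021-05-10.sql.gz | mysql mydb
```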


Setup Redis-Server 6.0.0 Beta

Here is an example setup of the upcoming Redis server, for testing. This example works on currently up-to-date Debian and Ubuntu installations.

If a Redis server is already present, it should be removed first:

apt remove redis-server redis-tools
dpkg --purge redis-server redis-tools

If not yet present, install the build tools and binutils:

apt update && apt install build-essential autoconf automake libtool flex bison debhelper binutils -y

Clone a fresh copy of the project from git:

cd /usr/local/src/
git clone https://github.com/antirez/redis.git

Compile and install the new Redis server:

cd redis/
make -j4
make install

Copy the configuration and init script:

mkdir -p /etc/redis
cp redis.conf /etc/redis/
cp utils/redis_init_script /etc/init.d/redis-server
chmod a+x /etc/init.d/redis-server
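Note that the stock utils/redis_init_script looks for its config under /etc/redis/<port>.conf (port 6379 by default), so either rename the copied file or adjust the CONF variable in the init script:

```shell
# Match the init script's default CONF path for port 6379:
mv /etc/redis/redis.conf /etc/redis/6379.conf
```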

Ready-made example configurations (can be adopted directly):

Set the required symlinks, if redis-server had previously been installed via apt:

ln -s /usr/local/bin/redis-server /usr/bin/
ln -s /usr/local/bin/redis-cli /usr/bin/
ln -s /usr/local/bin/redis-benchmark /usr/bin/
ln -s /usr/local/bin/redis-check-aof /usr/bin/
ln -s /usr/local/bin/redis-check-rdb /usr/bin/
ln -s /usr/local/bin/redis-sentinel /usr/bin/

Finally, the Redis server can be started:

/etc/init.d/redis-server start

The output on a successful start could look like this:

/etc/init.d/redis-server start
Starting Redis server...
31976:C 10 Feb 2020 23:47:23.183 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
31976:C 10 Feb 2020 23:47:23.183 # Redis version=999.999.999, bits=64, commit=256ec6c5, modified=0, pid=31976, just started
31976:C 10 Feb 2020 23:47:23.183 # Configuration loaded

NOTE: Only start and stop currently work in this init script.

A Redis benchmark can look like this example:

redis-benchmark -t set,get --threads 1 -s /tmp/redis.sock -q
SET: 173611.12 requests per second
GET: 176366.86 requests per second
