Backup all MySQL databases in separate files

mysqldump is a nice tool, but it will only let you back up a single database into one file, or multiple databases into one single file. Furthermore, it will not compress the output unless you pipe it through e.g. gzip. Of course there are many scripts and complete software packages out there, but people like me prefer a simpler solution which you can easily modify and embed wherever it is needed. For those who are looking for a more complex script than mine, have a look at AutoMySQLBackup (http://sourceforge.net/projects/automysqlbackup/).
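
For example, piping the dump of a single database straight into gzip looks like this (the user, password and database name are placeholders you would replace with your own):

```shell
# Dump one database and compress the output on the fly
mysqldump --user=root --password=secret --databases mydb | gzip > mydb.sql.gz
```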

My script below will back up all databases on a given server, split them into separate files and compress those files one by one. In addition, the script creates a separate, date-labelled backup folder each time it is executed.

Create a new file in your user space:

touch backupMySQLDBsinSingleFiles.sh

Make the file executable:

chmod u+x backupMySQLDBsinSingleFiles.sh

Open the file:

nano backupMySQLDBsinSingleFiles.sh

Copy in the content below and change the username, password and output dir!

#!/bin/bash
################################################
#
# Backup all MySQL databases in separate files and compress those.
# Furthermore the script will create a folder with the current time stamp
# @author: Per Lasse Baasch (http://skycube.net)
# @Version: 2014-06-13
# NOTE: MySQL and gzip installed on the system
# and you will need write permissions in the directory where you executing this script
#
################################################
# MySQL User
USER='root'
# MySQL Password
PASSWORD='password'
# Backup Directory - NO TRAILING SLASH!
OUTPUT="."

TIMESTAMP=`date +%Y%m%d_%H%M%S`;
mkdir "$OUTPUT/$TIMESTAMP";
echo "Starting MySQL Backup";
echo `date`;
databases=`mysql --user=$USER --password=$PASSWORD -e "SHOW DATABASES;" | tr -d "| " | grep -v Database`
for db in $databases; do
    if [[ "$db" != "information_schema" ]] && [[ "$db" != "performance_schema" ]] && [[ "$db" != _* ]] ; then
        echo "Dumping database: $db"
        mysqldump --force --opt --user="$USER" --password="$PASSWORD" --databases "$db" > "$OUTPUT/$TIMESTAMP/dbbackup-$TIMESTAMP-$db.sql"
        gzip "$OUTPUT/$TIMESTAMP/dbbackup-$TIMESTAMP-$db.sql"
    fi
done
echo "Finished MySQL Backup";
echo `date`;
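
To restore one of the compressed dumps later, you can stream it back into the mysql client; the file name below is just an example of what the script produces:

```shell
# Decompress the dump and feed it straight back into MySQL
gunzip < dbbackup-20140613_120000-mydb.sql.gz | mysql --user=root --password=password
```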

If you have any suggestions or comments, please let me know :)

Find NTP Time Servers in your network

Finding NTP servers should not be too hard, but as a network admin or server admin you should know your neighbourhood. Unfortunately, some packages (Debian/Ubuntu) pull in an NTP server even when you don't need one, and Ubuntu Server even ships with it! This is not a problem as long as you know about it and your network does not let them talk to the public (hopefully your Cisco or local firewall, e.g. iptables, blocks it).
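
If you are unsure whether your firewall blocks it, a minimal iptables sketch for dropping incoming NTP traffic could look like this (interfaces and any internal source ranges you want to allow will differ per network, so adjust before use):

```shell
# Drop incoming NTP (UDP port 123) requests from anywhere
iptables -A INPUT -p udp --dport 123 -j DROP
```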

These days people/hackers/crackers get more and more creative and even use Linux NTP servers for DDoS attacks. It can therefore happen that you receive a message like this from your provider:

A public NTP server on your network, running on IP address xxx.xxx.xxx.xxx, participated in a very large-scale attack against a customer of ours today, generating UDP responses to spoofed “monlist” requests that claimed to be from the attack target…

So how do you find NTP servers in your network? I couldn't figure out how to get something like a report out of nmap or similar tools, and I couldn't find a ready-made script by someone else either. So here is my solution:
If you have any recommendations, please comment :)

Create a new file in your user space:

touch ntp-check.sh

Make the file executable:

chmod u+x ntp-check.sh

Open the file:

nano ntp-check.sh

Copy the content below, change the IP range and save it!

#!/bin/bash
################################################
#
# Simple Script to check for ntp servers in a network
# @author: Per Lasse Baasch (http://skycube.net)
# @Version: 2014-03-10
# NOTE: you need ntpdate installed (should be present)
# you will need write permissions in the directory where you executing this script
#
################################################
# CLASS C NETWORK TO SCAN
# Syntax 'xxx.xxx.xxx' NO TRAILING DOT
BASEIP='192.168.0';
 
/bin/rm -f ntpfound.log;
/bin/touch ntpfound.log;
 
for (( c=1; c<=254; c++ ))
do
   echo "Checking $BASEIP.$c";
   /usr/sbin/ntpdate -q $BASEIP.$c 2>&1 | grep 'adjust time server' >> ntpfound.log;
done
 
# Display results
cat ntpfound.log;
 
### Possible output which indicates that an NTP server may be present
#10 Mar 13:16:01 ntpdate[25552]: adjust time server 192.168.0.23 offset 0.013292 sec
#10 Mar 13:16:09 ntpdate[25555]: adjust time server 192.168.0.66 offset 0.013306 sec
#10 Mar 13:16:39 ntpdate[30586]: adjust time server 192.168.0.102 offset -0.037400 sec
exit;

Execute the script:

./ntp-check.sh
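
Once the script reports a host, you can check whether it actually answers the abused "monlist" query with ntpdc (if installed); the IP below is an example taken from the sample output above. No response, or a refusal, is what you want to see:

```shell
# Ask a suspected NTP server for its monlist (the query abused in DDoS attacks)
ntpdc -n -c monlist 192.168.0.23
```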

Synchronize and copy bind9 zone files between servers

Running name servers is sometimes a fun job, and so is keeping them in sync. A usual and simple setup for smaller hosting providers is two separate servers (preferably Linux) which run a bind9 name server service. As good name server admins know, synchronization can be done automatically via a Master and Slave setup.

The solution below discusses a simple Master/Master setup where both servers hold the same files, using nothing more than rsync over SSH and a cron job.

Our servers:

  • ns1.skycube.net (Master)
  • ns2.skycube.net (Master 2)

What we need is a little script, which for testing purposes I saved as
/root/bind9sync.sh
on the first server, ns1.skycube.net.

NOTE: You have to setup private/public key authentication first!
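
Setting that up usually boils down to generating a key pair on ns1 and copying the public key over to ns2. The hostnames below match the example servers above, and an empty passphrase is assumed so the cron job can run unattended:

```shell
# On ns1: generate an RSA key pair without a passphrase (for unattended use)
ssh-keygen -t rsa -b 4096 -N "" -f ~/.ssh/id_rsa
# Copy the public key to ns2 so the sync can log in without a password
ssh-copy-id root@ns2.skycube.net
```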

#!/bin/bash
################################################
#
# script to sync bind9 configs
# @author: Per Lasse Baasch
# @Version: 2013-10-29
# NOTE: PRIVATE KEY AUTHENTICATION IS REQUIRED
# FOR AUTOMATIC SSH
#
################################################
# Binary paths
RSYNCBIN=/usr/bin/rsync
SSHBIN=/usr/bin/ssh
LOGFILE=/var/log/bind9sync.log
 
#### config
# YES trailing slash!
LOCAL_PATH=/etc/bind/
# NO trailing slash!
REMOTE_HOST='ns2.skycube.net'
REMOTE_PATH='/etc/bind'
REMOTE_BIND9RELOADCMD='/etc/init.d/bind9 reload'
 
#### DO THE SYNC
# rsync -avz --delete /etc/bind/ -e ssh $REMOTE_HOST:/etc/bind
result=$($RSYNCBIN -aiz --delete $LOCAL_PATH -e $SSHBIN $REMOTE_HOST:$REMOTE_PATH);
count=${#result};
 
### If something has been transferred, reload bind on the remote host
if [ $count -gt 5 ]
then
  ### RELOAD BIND
  date >> $LOGFILE;
  echo $result >> $LOGFILE;
  echo "TRY To RELOAD Bind on $REMOTE_HOST" >> $LOGFILE;
  $SSHBIN $REMOTE_HOST exec "$REMOTE_BIND9RELOADCMD" >> $LOGFILE;
  echo "-----" >> $LOGFILE;
fi

To run all of the above every 5 minutes, edit your crontab via

crontab -e

and paste the following at the bottom (assuming you saved the file as /root/bind9sync.sh):

# Sync NS every 5 min
*/5 * * * * /root/bind9sync.sh > /dev/null 2>&1

Rename files via shell script

Due to a misconfiguration, I had for a long time the problem of hundreds of
files with useless names. Tonight I had the time to solve this:

Preamble:

-rw------- 1 apache root 273K 24. Jul 15:37 UPLOAD_PATH1
-rw-r--r-- 1 apache root 275K 24. Jul 15:37 UPLOAD_PATH10
-rw-r--r-- 1 apache root 220K 19. Okt 15:14 UPLOAD_PATH11
-rw-r--r-- 1 apache root 3,1M 19. Okt 15:17 UPLOAD_PATH12
-rw-r--r-- 1 apache root 241K 19. Okt 16:13 UPLOAD_PATH13
-rw------- 1 apache root  60K 24. Jul 15:37 UPLOAD_PATH2
-rw------- 1 apache root 105K 24. Jul 15:37 UPLOAD_PATH3
-rw------- 1 apache root  60K 24. Jul 15:37 UPLOAD_PATH4
-rw------- 1 apache root  60K 24. Jul 15:37 UPLOAD_PATH5
-rw------- 1 apache root  60K 24. Jul 15:37 UPLOAD_PATH6
-rw------- 1 apache root  60K 24. Jul 15:37 UPLOAD_PATH7
-rw------- 1 apache root  60K 24. Jul 15:37 UPLOAD_PATH8
-rw-r--r-- 1 apache root 273K 24. Jul 15:37 UPLOAD_PATH9

Renaming them is pretty simple with a bash script, but the notation/syntax
is just a little bit tricky.
While searching with Google, I found several scripts but no simple solution.
My solution simply executes an ls command, pushes the output into an array, and then builds a new name using sed.
After these steps, the new name is ready to use with a simple mv command.

Solution via bash:

#!/bin/bash
list=(`ls`)
for filename in "${list[@]}"
do
  myvar=$(echo "$filename" | sed 's/UPLOAD_PATH//g')
  #echo 'mv '$filename' '$myvar
  [ "$filename" != "$myvar" ] && mv "$filename" "$myvar"
done
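
As a side note, the same rename works without sed by using bash parameter expansion, which saves spawning an extra process per file; UPLOAD_PATH is the same prefix as in the example above:

```shell
#!/bin/bash
# Strip the UPLOAD_PATH prefix with ${var#prefix} instead of sed
for filename in UPLOAD_PATH*; do
  [ -e "$filename" ] || continue  # nothing matched the glob
  mv "$filename" "${filename#UPLOAD_PATH}"
done
```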