How to remove iRedMail

Here is an example of removing iRedMail.
On a test server, I removed iRedMail 0.9.7 installed on Ubuntu Server 16.04 using the uninstall script – clear_iredmail.

Let’s go to the tools directory of the iRedMail installer and save the script there; in my case it is:

cd /root/iRedMail-0.9.7/tools/
wget https://ixnfo.com/wp-content/uploads/2017/08/clear_iredmail.zip
unzip clear_iredmail.zip

Let’s make it executable:

chmod +x clear_iredmail.sh

And run:

bash clear_iredmail.sh

The script will remove MySQL, SSL, Amavisd, ClamAV, SpamAssassin, Dovecot, Postfix, iRedAPD, users, etc., so be careful if there is anything else on the server besides iRedMail.
In the script code, you can see the step-by-step process of removing iRedMail.
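
For reference, here is a minimal sketch of the kind of steps such a removal involves on Ubuntu 16.04 (the package, service, and directory names below are typical iRedMail components, not taken from the script itself, so check them against your installation):

# stop the mail-related services (adjust to what is actually installed)
service postfix stop
service dovecot stop
service amavis stop
# purge the corresponding packages
apt-get purge postfix dovecot-core dovecot-imapd dovecot-pop3d amavisd-new clamav-daemon spamassassin
# remove leftover data directories
rm -rf /var/vmail /etc/postfix /etc/dovecot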

How to fix the problem with mdadm disks

I received three email messages from one of the servers at Hetzner with information about the RAID arrays md0, md1, and md2:

DegradedArray event on /dev/md/0:example.com
This is an automatically generated mail message from mdadm
running on example.com
A DegradedArray event had been detected on md device /dev/md/0.
Faithfully yours, etc.
P.S. The /proc/mdstat file currently contains the following:
Personalities : [raid6] [raid5] [raid4] [raid1]
md2 : active raid6 sdb3[1] sdd3[3]
      208218112 blocks super 1.0 level 6, 512k chunk, algorithm 2 [4/2] [_U_U]
md1 : active raid1 sdb2[1] sdd2[3]
      524224 blocks super 1.0 [4/2] [_U_U]
md0 : active raid1 sdb1[1] sdd1[3]
      12582784 blocks super 1.0 [4/2] [_U_U]
unused devices: <none>

I looked at the information about RAID and disks:

cat /proc/mdstat
cat /proc/partitions
mdadm --detail /dev/md0
mdadm --detail /dev/md1
mdadm --detail /dev/md2
fdisk -l | grep '/dev/sd'
fdisk -l | less

I was going to open a ticket with technical support and plan the replacement of the failed SSD disks.
I saved the SMART information about the failed disks to files, which also contain their serial numbers:

smartctl -x /dev/sda > sda.log
smartctl -x /dev/sdc > sdc.log
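
If you only need the serial numbers for the support ticket, they can also be extracted directly from the SMART identity output, for example:

smartctl -i /dev/sda | grep -i 'serial number'
smartctl -i /dev/sdc | grep -i 'serial number'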

Remove the failed disks from the RAID arrays if you can:

mdadm /dev/md0 -r /dev/sda1
mdadm /dev/md1 -r /dev/sda2
mdadm /dev/md2 -r /dev/sda3

mdadm /dev/md0 -r /dev/sdc1
mdadm /dev/md1 -r /dev/sdc2
mdadm /dev/md2 -r /dev/sdc3

If any partition of a disk is still shown as working but the disk needs to be removed, first mark that partition as failed and then remove it. For example, if /dev/sda1 and /dev/sda2 have dropped out but /dev/sda3 is still working:

mdadm /dev/md0 -f /dev/sda3
mdadm /dev/md0 -r /dev/sda3

In my case, after looking at the information about the dropped disks, I found that they were intact and working, in even better condition than the active ones.

I looked at the disk partitions (p prints the partition table, q quits without saving):

fdisk /dev/sda
p
q
fdisk /dev/sdc
p
q

They were partitioned the same way as before:

Disk /dev/sda: 120.0 GB, 120034123776 bytes
255 heads, 63 sectors/track, 14593 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00015e3f
Device Boot      Start         End      Blocks   Id  System
/dev/sda1            1        1567    12582912+  fd  Linux raid autodetect
/dev/sda2         1567        1633      524288+  fd  Linux raid autodetect
/dev/sda3         1633       14594   104109528+  fd  Linux raid autodetect

Therefore, I returned these disks to the arrays, waiting for each one to finish synchronizing:

mdadm /dev/md0 -a /dev/sda1
mdadm /dev/md1 -a /dev/sda2
mdadm /dev/md2 -a /dev/sda3

mdadm /dev/md0 -a /dev/sdc1
mdadm /dev/md1 -a /dev/sdc2
mdadm /dev/md2 -a /dev/sdc3
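
To monitor the rebuild progress while the arrays resynchronize, you can, for example, run:

watch cat /proc/mdstat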

At the end, cat /proc/mdstat showed the arrays with [UUUU].

If the disks are replaced with new ones, they need to be partitioned in the same way as the installed ones.
An example of partitioning /dev/sdb identically to /dev/sda with MBR:

sfdisk -d /dev/sda | sfdisk --force /dev/sdb

An example of partitioning /dev/sdb identically to /dev/sda with GPT and assigning random UUIDs to the new disk:

sgdisk -R /dev/sdb /dev/sda
sgdisk -G /dev/sdb
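
To check the result, you can print the new partition table, for example:

sgdisk -p /dev/sdb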

You also need to install the bootloader on the newly installed disk:

grub-install --version
grub-install /dev/sdb
update-grub

Or via the GRUB shell (hd0 is /dev/sda, hd0,1 is /dev/sda2):

cat /boot/grub/device.map
grub
device (hd0) /dev/sda
root (hd0,1)
setup (hd0)
quit

If GRUB is being installed from a rescue system, you need to look at the partition list and mount the root partition; for example, if RAID is not used:

ls /dev/[hsv]d[a-z]*[0-9]*
mount /dev/sda3 /mnt

If you are using software RAID:

ls /dev/md*
mount /dev/md2 /mnt

Or LVM:

ls /dev/mapper/*
mount /dev/mapper/vg0-root /mnt

And execute chroot:

chroot-prepare /mnt
chroot /mnt
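
Note that chroot-prepare is a helper from the Hetzner rescue system; if it is not available, you can bind-mount the pseudo-filesystems manually before chrooting, roughly like this:

mount --bind /dev /mnt/dev
mount --bind /proc /mnt/proc
mount --bind /sys /mnt/sys
chroot /mnt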

After mounting, you can restore GRUB as I wrote above.

See also my other articles:
How I made a request to Hetzner to replace a disk in a RAID
The solution to the error “md: kicking non-fresh sda1 from array”
The solution to the warning “mismatch_cnt is not 0 on /dev/md*”
mdadm – utility for managing software RAID arrays
Description of RAID types
Diagnosing HDDs using smartmontools
Recovering GRUB in Linux

Mount NTFS partitions on Linux

After connecting the disk to the server, let’s see a list of all the disks and find the name of the desired one:

sudo fdisk -l

Here is an example of mounting NTFS partitions of a disk in Ubuntu. Since my disk was split into two partitions (drives C and D), they appeared in the system as /dev/sdb1 and /dev/sdb2, and I mounted both to newly created directories:

sudo mkdir /newhdd1
sudo mount -t ntfs /dev/sdb1 /newhdd1
sudo mkdir /newhdd2
sudo mount -t ntfs /dev/sdb2 /newhdd2

Since this disk had previously been used in a Windows system, I got a mount error:

The disk contains an unclean file system (0, 0).
Metadata kept in Windows cache, refused to mount.
Failed to mount '/dev/sdb1': The operation is not allowed
The NTFS partition is in an unsafe state. Please resume and shutdown
Windows fully (no hibernation or fast restarting), or mount the volume
read-only with the 'ro' mount option.

In this case, you can mount the partition in read-only mode:

sudo mount -t ntfs -o ro /dev/sdb1 /newhdd1

Or fix the partitions with the ntfsfix command:

sudo ntfsfix /dev/sdb1
sudo ntfsfix /dev/sdb2

And after that, mount with full access:

sudo mount -t ntfs /dev/sdb1 /newhdd1
sudo mount -t ntfs /dev/sdb2 /newhdd2
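
To mount the partitions automatically at boot, you could also add entries like these to /etc/fstab (the mount points are the directories created above; this assumes the ntfs-3g driver is used):

/dev/sdb1 /newhdd1 ntfs-3g defaults 0 0
/dev/sdb2 /newhdd2 ntfs-3g defaults 0 0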

You can unmount it like this:

sudo umount /newhdd1
sudo umount /newhdd2

See also:
Managing disk partitions in Ubuntu using fdisk

Adding a disk to LVM

Suppose we have already configured LVM, for example, as I described in this article – Setting up and using LVM

Switch to the root user:

sudo -i

If the drives are not hot-swappable, turn off the server, connect the new disk, turn the server back on, and find the name of the new disk (in my case it is /dev/sdd):

fdisk -l

Let’s see the existing groups and how much space is left:

vgdisplay

Let’s see a list of physical volumes:

pvdisplay

Let’s partition the new disk (n – new partition, p – primary, 1 – partition number, Enter twice to accept the default first and last sectors, t – change the partition type, 8e – Linux LVM, w – write the changes):

fdisk /dev/sdd
n
p
1
Enter
Enter
t
8e
w
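
As a non-interactive alternative (assuming the parted package is installed), the same partition can be created like this:

parted -s /dev/sdd mklabel msdos
parted -s /dev/sdd mkpart primary 0% 100%
parted -s /dev/sdd set 1 lvm on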

Now create a physical volume:

pvcreate /dev/sdd1

Let’s see a list of logical volumes:

lvdisplay

Extend the volume group by adding the new partition (where ixnfo is the volume group name):

vgextend ixnfo /dev/sdd1

See the list of physical volumes as follows:

pvscan

Let’s look at the path of the logical volume (in my case /dev/ixnfo/temp) and extend it onto the new partition:

lvextend /dev/ixnfo/temp /dev/sdd1

Let’s see the size of the mounted logical volume:

df -h

The filesystem size has not changed yet; fix it with the command:

resize2fs /dev/ixnfo/temp
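
As an alternative, recent versions of LVM can extend the logical volume and resize the filesystem in one step using the -r (--resizefs) option:

lvextend -r /dev/ixnfo/temp /dev/sdd1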

Done.

Setting up and using LVM

LVM (Logical Volume Manager) allows you to combine multiple disks and disk partitions into one logical volume and then divide it up again as you like.

PV (Physical Volume) — a partition or a whole disk
VG (Volume Group) — a single pool of storage assembled from physical volumes
LV (Logical Volume) — a logical partition carved out of a volume group

Switch to the root user:

sudo -i

Install LVM if it is not already installed (Ubuntu/Debian):

apt-get install lvm2

Let’s look at the information about the disks:

fdisk -l

On the test machine I have /dev/sda with the system and an unpartitioned /dev/sdb.

Let’s create a physical volume from the whole of /dev/sdb without partitioning it:

pvcreate /dev/sdb

To view the list of physical volumes, use the command:

pvdisplay

Create a volume group named ixnfo:

vgcreate ixnfo /dev/sdb

If necessary, delete as follows:

vgremove ixnfo

Example of viewing existing groups and how much space is left:

vgdisplay

For the test, create a logical volume "temp" of 100 megabytes:

lvcreate -L100 -n temp ixnfo
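
If you instead want the volume to take up all the remaining free space in the group, you can size it in extents, for example:

lvcreate -l 100%FREE -n temp ixnfo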

To view the list of logical volumes, use the command:

lvdisplay

Let’s format it:

mkfs.ext4 -L temp /dev/ixnfo/temp

Create a directory and mount the created volume:

mkdir /mnt/temp
mount /dev/ixnfo/temp /mnt/temp

You can unmount it like this:

umount /mnt/temp/
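
To mount the volume automatically at boot, you could add a line like this to /etc/fstab (paths taken from this example):

/dev/ixnfo/temp /mnt/temp ext4 defaults 0 2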

See also:
Adding a disk to LVM
Managing disk partitions in Ubuntu using fdisk

Troubleshooting /usr/sbin/ejabberdctl: line 428: 14615 Segmentation fault

Once, after installing ejabberd on Ubuntu Server 16.04, I noticed that when adding a user as root with the command:

ejabberdctl register USER localhost PASSWORD

the following error appeared:

/usr/sbin/ejabberdctl: line 428: 14615 Segmentation fault $EXEC_CMD "$CMD"

The log file /var/log/syslog reported:

Sep 11 11:17:00 mail kernel: [4647543.535271] audit: type=1400 audit(1505117820.598:43): apparmor="DENIED" operation="file_mmap" profile="/usr/sbin/ejabberdctl//su" name="/bin/su" pid=14439 comm="su" requested_mask="m" denied_mask="m" fsuid=0 ouid=0

To fix the error, I opened the AppArmor profile:

nano /etc/apparmor.d/usr.sbin.ejabberdctl

Found the line:

/bin/su                                 r,

And changed it by adding m:

/bin/su                                 rm,

Restarted AppArmor:

sudo service apparmor restart
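
Alternatively, instead of restarting the whole service, you can reload just this profile (apparmor_parser is part of the standard AppArmor tools):

sudo apparmor_parser -r /etc/apparmor.d/usr.sbin.ejabberdctl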

Done, the error no longer appears.

Configuring the ircd-hybrid

Suppose we have installed ircd-hybrid as I described in this article – Installing the IRC server – ircd-hybrid.
Now let’s proceed to the setup.

Let’s edit the text of the welcome message:

sudo nano /etc/ircd-hybrid/ircd.motd

Make a copy of the configuration file just in case:

sudo cp /etc/ircd-hybrid/ircd.conf /etc/ircd-hybrid/ircd_original.conf

Open the main configuration file in the text editor, configure the parameters and comment out the unnecessary ones:

sudo nano /etc/ircd-hybrid/ircd.conf

The configuration file begins with the standard serverinfo parameters; change them if desired:

serverinfo {
        name = "hybrid8.debian.local";
        description = "test";
        network_name = "debian";
        network_desc = "This is My Network";
        hub = no;
        default_max_clients = 512;
        max_nick_length = 15;
        max_topic_length = 300;
};

Next come the server administrator’s contact details; change them if desired:

admin {
        name = "SYSADMIN";
        description = "Main Server Administrator";
        email = "<admin@example.com>";
};

Network parameters (the ports that ircd-hybrid listens on; you can change them, for example, to just 6667):

listen {
        port = 6665 .. 6669;
};

The first auth block allows any connection from the local address 127.0.0.1:

auth {
        user = "*@127.0.0.1";
        spoof = "i.love.debian.org";
        class = "opers";
        flags = need_password, spoof_notice, exceed_limit, kline_exempt,
                xline_exempt, resv_exempt, no_tilde, can_flood;
};

Another auth block allows anyone to connect (comment it out or change it to suit your needs):

auth {
        user = "*@*";
        class = "users";
        flags = need_ident;
};

For example, create a password for a user and copy the encrypted result of the command:

mkpasswd PASSWORD

Let’s add an auth block that allows users to connect only with a password and only from the specified network:

auth {
        user = "*@192.168.3.0/24";
        class = "users";
        flags = need_password;
        encrypted = yes;
        password = "PASSWORD_FROM_mkpasswd";
};

In the auth block the password is stored in encrypted form; in the IRC client it is entered as is.
To store the password in the auth block unencrypted, remove the encrypted line.

In the general block, disable ident lookups:

general {
...
disable_auth = yes;
...
};

Restart ircd-hybrid to apply the changes (either command works):

sudo /etc/init.d/ircd-hybrid restart
sudo service ircd-hybrid restart
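
To check that the server is running and listening on the configured ports (6665–6669 in this example), you can, for example, run:

sudo ss -nltp | grep 666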

We can also add an operator:

auth {
        name = "admin";
        user = "admin@192.168.3.254/32";
        class = "opers";
        flags = need_password, spoof_notice, exceed_limit, kline_exempt;
        encrypted = yes;
        password = "PASSWORD_FROM_mkpasswd";
};

You can block IP addresses as needed like this:

deny {
       ip = "192.168.4.4/32";
       reason = "Spam";
};

After changing the configuration file, you need to restart ircd-hybrid.
As a client you can use, for example, the free AdiIRC.