Managing Asterisk modules

Let’s connect to the Asterisk console:

sudo asterisk -rvv

Let’s see what modules are already in use:

module show
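
You can also filter the output by module name, for example:

module show like sip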

Module files with the .so extension are located in the /usr/lib/asterisk/modules/ directory.

To load or unload a module, use the following commands (the module name is specified without the file extension, for example chan_sip rather than chan_sip.so):

module load NAME
module unload NAME

For the necessary modules to be loaded automatically when Asterisk starts, they must be specified in the /etc/asterisk/modules.conf file. Open it, for example, in the nano text editor:

sudo nano /etc/asterisk/modules.conf

You can enable the autoloading of all existing modules in the folder /usr/lib/asterisk/modules/:

[modules]
autoload=yes

And then exclude unnecessary modules with noload directives:

noload => module.so
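
For example, to keep autoloading enabled but skip a couple of channel drivers you do not use (the module names here are just illustrative, any installed module can be listed):

[modules]
autoload=yes
noload => chan_skinny.so
noload => chan_mgcp.so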

Or disable autoloading entirely (autoload=no) and list only the modules that are needed, for example:

;SIP VoIP driver
load => chan_sip.so
load => res_rtp_asterisk.so
load => app_dial.so
load => bridge_simple.so
load => res_features.so
load => res_musiconhold.so
load => res_adsi.so
load => pbx_config.so
; List of required codecs
load => codec_a_mu.so
load => codec_adpcm.so
load => codec_alaw.so
load => codec_ulaw.so
load => codec_gsm.so
load => codec_ilbc.so
load => codec_lpc10.so
; If you use Dahdi cards for analog lines
load => chan_dahdi.so
; Call parking
load => res_parking.so 
; Below are the modules I needed when setting up call recording
; Required if res_monitor.so is used
load => func_periodic_hook.so
; Required if res_monitor.so is used, for the STRFTIME function
load => func_strings.so
; Required if res_monitor.so is used, for the CALLERID function (determining the caller's number)
load => func_callerid.so
; Required if res_monitor.so is used, for the MixMonitor application
load => app_mixmonitor.so
; For recording calls
load => res_monitor.so
; To support WAV format
load => format_wav.so
; For MP3 format support
load => format_mp3.so
; To record statistics of calls to MySQL database
load => cdr_mysql.so
; To enable SNMP functionality, for example, to collect statistics by various monitoring systems
load => res_snmp.so
; To originate calls from call files placed in the directory /var/spool/asterisk/outgoing/
load => pbx_spool.so

To apply the changes in the /etc/asterisk/modules.conf file, execute the command from the Asterisk console:

module reload

If necessary, you can reboot Asterisk as follows:

sudo service asterisk restart
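
Or, from the Asterisk console itself, you can restart gracefully (waiting for active calls to finish) or immediately:

core restart gracefully
core restart now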

How to remove iRedMail

Here is an example of removing iRedMail.
For this test, I removed iRedMail 0.9.7 installed on Ubuntu Server 16.04 using the clear_iredmail uninstall script.

Let’s go to the tools directory of the iRedMail installer and save the script there; in my case it is:

cd /root/iRedMail-0.9.7/tools/
wget https://ixnfo.com/wp-content/uploads/2017/08/clear_iredmail.zip
unzip clear_iredmail.zip

Let’s make it executable:

chmod +x clear_iredmail.sh

And run:

bash clear_iredmail.sh

The script removes mysql, ssl, amavisd, clamav, spamassassin, dovecot, postfix, iredapd, users, etc., so be careful if there is anything else on the server besides iRedMail.
In the script code, you can see the step-by-step process of removing iRedMail.

How to fix the problem with mdadm disks

I received three email messages from one of the servers at Hetzner with information about the RAID arrays md0, md1 and md2:

DegradedArray event on /dev/md/0:example.com
This is an automatically generated mail message from mdadm
running on example.com
A DegradedArray event had been detected on md device /dev/md/0.
Faithfully yours, etc.
P.S. The /proc/mdstat file currently contains the following:
Personalities : [raid6] [raid5] [raid4] [raid1]
md2 : active raid6 sdb3[1] sdd3[3]
208218112 blocks super 1.0 level 6, 512k chunk, algorithm 2 [4/2] [_U_U]
md1 : active raid1 sdb2[1] sdd2[3]
524224 blocks super 1.0 [4/2] [_U_U]
md0 : active raid1 sdb1[1] sdd1[3]
12582784 blocks super 1.0 [4/2] [_U_U]
unused devices:

I looked at the information about RAID and disks:

cat /proc/mdstat
cat /proc/partitions
mdadm --detail /dev/md0
mdadm --detail /dev/md1
mdadm --detail /dev/md2
fdisk -l | grep '/dev/sd'
fdisk -l | less

I was going to open a ticket with technical support and planned to replace the failed SSD disks.
I saved the SMART information about the failed disks (which also contains their serial numbers) to files:

smartctl -x /dev/sda > sda.log
smartctl -x /dev/sdc > sdc.log
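
The serial number alone can also be taken from the SMART information, for example:

smartctl -i /dev/sda | grep -i serial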

Remove the disks from the RAID arrays if possible:

mdadm /dev/md0 -r /dev/sda1
mdadm /dev/md1 -r /dev/sda2
mdadm /dev/md2 -r /dev/sda3

mdadm /dev/md0 -r /dev/sdc1
mdadm /dev/md1 -r /dev/sdc2
mdadm /dev/md2 -r /dev/sdc3

If any partition of a disk is still shown as active but the disk needs to be removed, first mark that partition as failed and then remove it. For example, if /dev/sda1 and /dev/sda2 have failed but /dev/sda3 is still working:

mdadm /dev/md0 -f /dev/sda3
mdadm /dev/md0 -r /dev/sda3

In my case, after examining the information about the dropped disks, I found that they were intact and healthy, even in better condition than the active ones.

I looked at the disk partitions:

fdisk /dev/sda
p
q
fdisk /dev/sdc
p
q

They were partitioned the same way as before:

Disk /dev/sda: 120.0 GB, 120034123776 bytes
255 heads, 63 sectors/track, 14593 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00015e3f
   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1        1567    12582912+  fd  Linux raid autodetect
/dev/sda2            1567        1633      524288+  fd  Linux raid autodetect
/dev/sda3            1633       14594   104109528+  fd  Linux raid autodetect

Therefore, I returned these disks to the RAID arrays, waiting for each one to finish synchronizing before adding the next:

mdadm /dev/md0 -a /dev/sda1
mdadm /dev/md1 -a /dev/sda2
mdadm /dev/md2 -a /dev/sda3

mdadm /dev/md0 -a /dev/sdc1
mdadm /dev/md1 -a /dev/sdc2
mdadm /dev/md2 -a /dev/sdc3
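
The rebuild progress can be monitored, for example, like this:

watch cat /proc/mdstat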

In the end, cat /proc/mdstat showed [UUUU] for all arrays.

If the disks are replaced with new ones, they must be partitioned in the same way as the existing ones.
An example of copying the MBR partition table from /dev/sda to /dev/sdb:

sfdisk -d /dev/sda | sfdisk --force /dev/sdb

An example of replicating the GPT partition table from /dev/sda to /dev/sdb and assigning the new disk random GUIDs:

sgdisk -R /dev/sdb /dev/sda
sgdisk -G /dev/sdb
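
After partitioning, add the new disk's partitions back to the arrays in the same way as above, for example for /dev/sdb:

mdadm /dev/md0 -a /dev/sdb1
mdadm /dev/md1 -a /dev/sdb2
mdadm /dev/md2 -a /dev/sdb3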

You also need to install the bootloader on the newly installed disk:

grub-install --version
grub-install /dev/sdb
update-grub

Or via the GRUB shell (hd0 is /dev/sda, hd0,1 is /dev/sda2):

cat /boot/grub/device.map
grub
device (hd0) /dev/sda
root (hd0,1)
setup (hd0)
quit

If GRUB is being installed from a rescue system, you first need to find the root partition and mount it. For example, if RAID is not used:

ls /dev/[hsv]d[a-z]*[0-9]*
mount /dev/sda3 /mnt

If you are using software RAID:

ls /dev/md*
mount /dev/md2 /mnt

Or LVM:

ls /dev/mapper/*
mount /dev/mapper/vg0-root /mnt

And execute chroot:

chroot-prepare /mnt
chroot /mnt

After mounting, you can restore GRUB as I wrote above.

See also my other articles:
How I made a request to Hetzner to replace a disk in a RAID array
The solution to the error “md: kicking non-fresh sda1 from array”
The solution to the warning “mismatch_cnt is not 0 on /dev/md*”
mdadm – a utility for managing software RAID arrays
Description of RAID types
HDD diagnostics using smartmontools
Recovering GRUB in Linux

How to add a Windows user from the command line

One day I needed to add a user to Windows 10 from the command line, because nothing happened when I clicked the add button in the Control Panel.

The first step is to run the command prompt as administrator: in the Start menu, type “cmd” or find the “Command Prompt” shortcut, right-click it and select “Run as administrator”.

At the command prompt, execute the add user command (where NAME is the user name):

net user NAME /add
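
If needed, you can set a password at the same time and add the user to the local Administrators group (NAME and PASSWORD are placeholders; the group name may differ on localized Windows versions):

net user NAME PASSWORD /add
net localgroup Administrators NAME /add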

Done. The new user can now be seen in “Control Panel\User Accounts\User Accounts\Account Management”.

Mount NTFS partitions on Linux

After connecting the disk to the server, let’s see a list of all the disks and find the name of the desired one:

sudo fdisk -l

I’ll give an example of mounting the NTFS partitions of a disk in Ubuntu (my disk was split into two partitions, drives C and D, which appeared in the system as /dev/sdb1 and /dev/sdb2; both were mounted to newly created directories):

sudo mkdir /newhdd1
sudo mount -t ntfs /dev/sdb1 /newhdd1
sudo mkdir /newhdd2
sudo mount -t ntfs /dev/sdb2 /newhdd2

Since this disk had previously been used in Windows, I got a mount error:

The disk contains an unclean file system (0, 0).
Metadata kept in Windows cache, refused to mount.
Failed to mount ‘/dev/sdb1’: Operation not permitted
The NTFS partition is in an unsafe state. Please resume and shutdown
Windows fully (no hibernation or fast restarting), or mount the volume
read-only with the ‘ro’ mount option.

In this case, you can mount the partition in read-only mode:

sudo mount -t ntfs -o ro /dev/sdb1 /newhdd1

Or fix the partitions with the ntfsfix command:

sudo ntfsfix /dev/sdb1
sudo ntfsfix /dev/sdb2

And after that, mount with full access:

sudo mount -t ntfs /dev/sdb1 /newhdd1
sudo mount -t ntfs /dev/sdb2 /newhdd2
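
To mount the partitions automatically at boot, lines like the following can be added to /etc/fstab (a sketch, assuming the ntfs-3g driver is used and the /newhdd1 and /newhdd2 directories created above):

/dev/sdb1 /newhdd1 ntfs-3g defaults 0 0
/dev/sdb2 /newhdd2 ntfs-3g defaults 0 0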

You can unmount them like this:

sudo umount /newhdd1
sudo umount /newhdd2

See also:
Managing disk partitions in Ubuntu using fdisk

Adding a disk to LVM

Suppose we have already configured LVM, for example, as I described in this article – Setting up and using LVM

Switch to the root user:

sudo -i

If the server doesn't support hot-swapping drives, power it off, connect the new disk, power it back on and find the name of the new disk (in my case it's /dev/sdd):

fdisk -l

Let’s see the existing groups and how much space is left:

vgdisplay

Let’s see a list of physical volumes:

pvdisplay

Let’s partition the new disk:

fdisk /dev/sdd
n      (new partition)
p      (primary)
1      (partition number)
Enter  (default first sector)
Enter  (default last sector)
t      (change the partition type)
8e     (Linux LVM)
w      (write the changes)
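
Check the result:

fdisk -l /dev/sdd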

Now create a physical volume:

pvcreate /dev/sdd1

Let’s see a list of logical volumes:

lvdisplay

Extend the volume group by adding the new partition (where ixnfo is the volume group name):

vgextend ixnfo /dev/sdd1

See the list of physical volumes as follows:

pvscan

Let’s find the path of the logical volume (in my case /dev/ixnfo/temp) and extend it onto the new partition:

lvextend /dev/ixnfo/temp /dev/sdd1

Let’s see the size of the mounted logical volume:

df -h

The filesystem size has not changed yet; fix that by resizing it (resize2fs works for ext2/3/4 filesystems):

resize2fs /dev/ixnfo/temp
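
Alternatively, lvextend can grow the filesystem in the same step with the -r (--resizefs) option, for example:

lvextend -r /dev/ixnfo/temp /dev/sdd1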

Done.

Setting up and using LVM

LVM (Logical Volume Management) allows you to combine multiple disks and disk partitions into one logical volume and then split it up again as needed.

PV (Physical Volume) — a partition or a whole disk
VG (Volume Group) — a pool of storage assembled from physical volumes
LV (Logical Volume) — a volume allocated from a volume group, on which a filesystem is created

Switch to the root user:

sudo -i

Install LVM if it is not already installed (Ubuntu/Debian):

apt-get install lvm2

Let’s look at the information about the disks:

fdisk -l

On my test machine, /dev/sda holds the system and /dev/sdb is unpartitioned.

Let’s turn the whole of /dev/sdb into a physical volume, without partitioning it:

pvcreate /dev/sdb

To view the list of physical volumes, use the command:

pvdisplay

Create a volume group named ixnfo:

vgcreate ixnfo /dev/sdb

If necessary, the group can be removed as follows:

vgremove ixnfo

Example of viewing existing groups and how much space is left:

vgdisplay

For the test, create a logical volume “temp” of 100 megabytes:

lvcreate -L100 -n temp ixnfo

To view the list of logical volumes, use the command:

lvdisplay

Let’s format it:

mkfs.ext4 -L temp /dev/ixnfo/temp

Create a directory and mount the created volume:

mkdir /mnt/temp
mount /dev/ixnfo/temp /mnt/temp
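
To mount the volume automatically at boot, a line like this can be added to /etc/fstab (a sketch for this test volume):

/dev/ixnfo/temp /mnt/temp ext4 defaults 0 2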

You can unmount it like this:

umount /mnt/temp/

See also:
Adding a disk to LVM
Managing disk partitions in Ubuntu using fdisk