Create 1N Vertica 9.2 cluster in AWS EC2 RHEL 8 (Part 1)

In this post I’ll describe the steps to create a single-node Vertica (version 9.2) cluster on AWS, using an EC2 t2.large instance running RHEL 8. I have already launched the EC2 instance with an additional 500 GB EBS HDD volume.

Also, I have set up the required access key on my local machine, so that I can SSH into the EC2 instance as ec2-user.

sudha@DESKTOP-61047H4 MINGW64 ~
$ ssh -i Downloads/tspawskeyohio.pem  ec2-user@ec2-18-219-218-102.us-east-2.compute.amazonaws.com
Last login: Wed Aug 14 21:48:44 2019 from 73.187.197.50
[ec2-user@ip-172-31-1-75 ~]$ exit;
logout
Connection to ec2-18-219-218-102.us-east-2.compute.amazonaws.com closed.
sudha@DESKTOP-61047H4 MINGW64 ~

Please note: several pre-install steps, as well as the Vertica installation itself, require root access to the OS.

Pre-Install Checklist

The Vertica installation script checks several hardware, software, and OS configuration settings; if a check fails it prints errors or warnings and exits without installing. In this section I’ll go over the checklist and configure each item per the Vertica documentation.

By default the installation script creates the dbadmin user and the verticadba group in the Linux OS, on all the hosts.

The Vertica installation script also:

  • Configures and authorizes dbadmin for passwordless SSH between all cluster nodes. SSH must be installed and configured to allow passwordless logins. See Enable Secure Shell (SSH) Logins.
  • Sets the dbadmin user’s shell to /bin/bash, which is required to run scripts such as install_vertica and the Administration Tools.
  • Provides read-write-execute permissions on the following directories:
    • /opt/vertica/
    • /home/dbadmin, the default directory for database data and catalog files (configurable through the install script)

Confirm Perl 5 is installed

$ ssh -i Downloads/tspawskeyohio.pem  ec2-user@ec2-18-219-218-102.us-east-2.compute.amazonaws.com
Last login: Thu Aug 15 15:04:53 2019 from 73.187.197.50
[ec2-user@ip-172-31-1-75 ~]$
[ec2-user@ip-172-31-1-75 ~]$
[ec2-user@ip-172-31-1-75 ~]$
[ec2-user@ip-172-31-1-75 ~]$ perl --version
-bash: perl: command not found
[ec2-user@ip-172-31-1-75 ~]$

Since Perl 5 is not installed, let us install it and verify.

[root@ip-172-31-1-75 ~]# yum install perl
Last metadata expiration check: 0:11:08 ago on Thu 15 Aug 2019 07:10:53 PM UTC.
Dependencies resolved.
. . . SNIP SNIP . . . 
Complete!
[root@ip-172-31-1-75 ~]# perl --version

This is perl 5, version 26, subversion 3 (v5.26.3) built for x86_64-linux-thread-multi
(with 51 registered patches, see perl -V for more detail)

Copyright 1987-2018, Larry Wall

Perl may be copied only under the terms of either the Artistic License or the
GNU General Public License, which may be found in the Perl 5 source kit.

Complete documentation for Perl, including FAQ lists, should be found on
this system using "man perl" or "perldoc perl".  If you have access to the
Internet, point your browser at http://www.perl.org/, the Perl Home Page.

[root@ip-172-31-1-75 ~]# perl -V:version
version='5.26.3';
[root@ip-172-31-1-75 ~]#

Verify sudo works and that root can run any command anywhere

[root@ip-172-31-1-75 ~]# which sudo
/usr/bin/sudo
[root@ip-172-31-1-75 ~]# cat /etc/sudoers | grep root
## the root user, without needing the root password.
## Allow root to run any commands anywhere
root    ALL=(ALL)       ALL
## cdrom as root
[root@ip-172-31-1-75 ~]#

Ensure bash is the login shell for root and for dbadmin (which does not exist yet). You can use chsh to set root’s shell, and you can also point the /bin/sh symbolic link at /bin/bash.
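A quick, read-only way to confirm the current shells (a minimal sketch; the dbadmin entry will only exist after the Vertica installer creates the user):

```shell
# Show root's login shell from the passwd database (expect /bin/bash)
getent passwd root | cut -d: -f7

# Show what /bin/sh points at (on RHEL it is a symlink to bash)
ls -l /bin/sh
```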

Disk storage preparation

I created an EBS-backed instance, and the additional EBS volume needs to be formatted and mounted inside the EC2 instance. Note the EBS volume availability and the EC2 instance storage shown on the AWS console.

Note that I have one 500 GB disk attached to the instance, while the other two volumes are unattached and show as available.

Now, I’ll format and mount the attached volume in the EC2 OS. First, let us find out which disks are present and mounted on the host. The instructions for these steps are in the AWS documentation.

[root@ip-172-31-1-75 ~]# lsblk
NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
xvda    202:0    0   10G  0 disk
├─xvda1 202:1    0    1M  0 part
└─xvda2 202:2    0   10G  0 part /
xvdb    202:16   0  500G  0 disk
[root@ip-172-31-1-75 ~]#

Note that the 10 GB disk is mounted at the root (/), whereas the 500 GB disk is present but not mounted.

Let us check if there is a file-system on the disk.

[root@ip-172-31-1-75 ~]# file -s /dev/xvdb
/dev/xvdb: data
[root@ip-172-31-1-75 ~]# file -s /dev/xvda
/dev/xvda: DOS/MBR boot sector
[root@ip-172-31-1-75 ~]#

/dev/xvdb does not have any file system.

I’ll use the mkfs (make filesystem) command to create an XFS file system on it.

[root@ip-172-31-1-75 ~]# mkfs -t xfs /dev/xvdb
meta-data=/dev/xvdb              isize=512    agcount=4, agsize=32768000 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1
data     =                       bsize=4096   blocks=131072000, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=64000, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@ip-172-31-1-75 ~]#

Now I’ll create /data directory and mount the new disk on the directory.

[root@ip-172-31-1-75 ~]# mkdir /data
mkdir: cannot create directory ‘/data’: File exists
[root@ip-172-31-1-75 ~]# ll /
total 16
lrwxrwxrwx.   1 root root    7 Aug 12  2018 bin -> usr/bin
dr-xr-xr-x.   6 root root 4096 Aug 14 15:43 boot
drwxr-xr-x.   2 root root    6 Jun 18 17:10 data
drwxr-xr-x.  18 root root 2740 Aug 15 20:36 dev
drwxr-xr-x.  80 root root 8192 Aug 15 19:22 etc
drwxr-xr-x.   3 root root   22 Aug 14 15:43 home
lrwxrwxrwx.   1 root root    7 Aug 12  2018 lib -> usr/lib
lrwxrwxrwx.   1 root root    9 Aug 12  2018 lib64 -> usr/lib64
drwxr-xr-x.   2 root root    6 Aug 12  2018 media
drwxr-xr-x.   2 root root    6 Aug 12  2018 mnt
drwxr-xr-x.   2 root root    6 Aug 12  2018 opt
dr-xr-xr-x. 104 root root    0 Aug 14 15:41 proc
dr-xr-x---.   3 root root  149 Aug 14 15:43 root
drwxr-xr-x.  23 root root  660 Aug 14 15:43 run
lrwxrwxrwx.   1 root root    8 Aug 12  2018 sbin -> usr/sbin
drwxr-xr-x.   2 root root    6 Aug 12  2018 srv
dr-xr-xr-x.  13 root root    0 Aug 14 15:41 sys
drwxrwxrwt.   8 root root  172 Aug 15 19:22 tmp
drwxr-xr-x.  12 root root  144 Jun 18 17:04 usr
drwxr-xr-x.  20 root root  278 Aug 14 15:42 var
[root@ip-172-31-1-75 ~]# mkdir /vdata
[root@ip-172-31-1-75 ~]# mount /dev/xvdb /vdata
[root@ip-172-31-1-75 ~]#

The /data directory already exists, so I created a /vdata directory instead and mounted the disk there. We can verify the mount.

[root@ip-172-31-1-75 ~]# lsblk
NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
xvda    202:0    0   10G  0 disk
├─xvda1 202:1    0    1M  0 part
└─xvda2 202:2    0   10G  0 part /
xvdb    202:16   0  500G  0 disk /vdata
[root@ip-172-31-1-75 ~]#

Note /vdata with 500 GB available. However, this mount will not survive a reboot; to make it persistent, we’ll modify /etc/fstab.

[root@ip-172-31-1-75 ~]# cp /etc/fstab /etc/fstab.orig
[root@ip-172-31-1-75 ~]# blkid
/dev/xvda2: UUID="a727b695-0c21-404a-b42b-3075c8deb6ab" TYPE="xfs" PARTUUID="e6e324f2-02"
/dev/xvda1: PARTUUID="e6e324f2-01"
/dev/xvdb: UUID="f9fe8fde-ba3f-492c-944c-8d6c8e44ace2" TYPE="xfs"
[root@ip-172-31-1-75 ~]# vim /etc/fstab
-bash: vim: command not found
[root@ip-172-31-1-75 ~]# sudovi /etc/fstab
-bash: sudovi: command not found
[root@ip-172-31-1-75 ~]# vi /etc/fstab
[root@ip-172-31-1-75 ~]# cat /etc/fstab

#
# /etc/fstab
# Created by anaconda on Tue Jun 18 17:03:37 2019
#
# Accessible filesystems, by reference, are maintained under '/dev/disk/'.
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info.
#
# After editing this file, run 'systemctl daemon-reload' to update systemd
# units generated from this file.
#
UUID=a727b695-0c21-404a-b42b-3075c8deb6ab /                       xfs     defaults        0 0
[root@ip-172-31-1-75 ~]#
[root@ip-172-31-1-75 ~]#
[root@ip-172-31-1-75 ~]# vi /etc/fstab
[root@ip-172-31-1-75 ~]# cat /etc/fstab

#
# /etc/fstab
# Created by anaconda on Tue Jun 18 17:03:37 2019
#
# Accessible filesystems, by reference, are maintained under '/dev/disk/'.
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info.
#
# After editing this file, run 'systemctl daemon-reload' to update systemd
# units generated from this file.
#
UUID=a727b695-0c21-404a-b42b-3075c8deb6ab /                       xfs     defaults        0 0
UUID=f9fe8fde-ba3f-492c-944c-8d6c8e44ace2 /vdata                  xfs     defaults,nofail 0 2
[root@ip-172-31-1-75 ~]#

As shown above, I make a backup of the current fstab, find the UUID of the new disk using blkid, and add the corresponding entry to fstab.

Let us verify that this disk will be mounted without problem on next reboot.

[root@ip-172-31-1-75 ~]# lsblk
NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
xvda    202:0    0   10G  0 disk
├─xvda1 202:1    0    1M  0 part
└─xvda2 202:2    0   10G  0 part /
xvdb    202:16   0  500G  0 disk /vdata
[root@ip-172-31-1-75 ~]# umount /vdata
[root@ip-172-31-1-75 ~]# mount -a
[root@ip-172-31-1-75 ~]# lsblk
NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
xvda    202:0    0   10G  0 disk
├─xvda1 202:1    0    1M  0 part
└─xvda2 202:2    0   10G  0 part /
xvdb    202:16   0  500G  0 disk /vdata
[root@ip-172-31-1-75 ~]#

If the previous commands produce any errors, restore the original fstab and investigate before rebooting.

Network Configuration Check

Vertica requires several ports to be available as documented here.

OS Configuration

Vertica requires a swap file of at least 2 GB.

[root@ip-172-31-1-75 ~]# free
              total        used        free      shared  buff/cache   available
Mem:        7999040      183100     7358408       16756      457532     7554228
Swap:             0           0           0
[root@ip-172-31-1-75 ~]# cat /proc/swaps
Filename                                Type            Size    Used    Priority
[root@ip-172-31-1-75 ~]#
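Before creating anything, the 2 GB minimum can be checked programmatically from /proc/meminfo (a minimal sketch; the values there are reported in kB):

```shell
# SwapTotal in /proc/meminfo is in kB; Vertica's documented minimum is 2 GB.
swap_kb=$(awk '/^SwapTotal:/ {print $2}' /proc/meminfo)
min_kb=$((2 * 1024 * 1024))
if [ "$swap_kb" -ge "$min_kb" ]; then
    echo "swap OK: ${swap_kb} kB"
else
    echo "swap too small: ${swap_kb} kB (need at least ${min_kb} kB)"
fi
```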

So let us create a swap file and make it permanent.

[root@ip-172-31-1-75 ~]# dd if=/dev/zero of=/swapfile bs=1G count=4
4+0 records in
4+0 records out
4294967296 bytes (4.3 GB, 4.0 GiB) copied, 20.9007 s, 205 MB/s
[root@ip-172-31-1-75 ~]# chmod 600 /swapfile
[root@ip-172-31-1-75 ~]# mkswap /swapfile
Setting up swapspace version 1, size = 4 GiB (4294963200 bytes)
no label, UUID=7b08ef72-423b-4e60-bffd-d83fcf993418
[root@ip-172-31-1-75 ~]# swapon /swapfile
[root@ip-172-31-1-75 ~]# swapon -s
Filename                                Type            Size    Used    Priority
/swapfile                               file            4194300 0       -2
[root@ip-172-31-1-75 ~]#

Now that a 4 GB swap file is defined, let us make it persist across reboots by adding it to fstab.

[root@ip-172-31-1-75 ~]# swapon -s
Filename                                Type            Size    Used    Priority
/swapfile                               file            4194300 0       -2
[root@ip-172-31-1-75 ~]# blkid
/dev/xvda2: UUID="a727b695-0c21-404a-b42b-3075c8deb6ab" TYPE="xfs" PARTUUID="e6e324f2-02"
/dev/xvdb: UUID="f9fe8fde-ba3f-492c-944c-8d6c8e44ace2" TYPE="xfs"
/dev/xvda1: PARTUUID="e6e324f2-01"
[root@ip-172-31-1-75 ~]# vi /etc/fstab
[root@ip-172-31-1-75 ~]# cat /proc/swaps
Filename                                Type            Size    Used    Priority
/swapfile                               file            4194300 0       -2
[root@ip-172-31-1-75 ~]#
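The fstab entry I added in vi (not visible in the transcript above) follows the standard swap-file form; a sketch of the line:

```
/swapfile    none    swap    defaults    0 0
```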

Automatic OS configuration

There are several OS configuration changes the Vertica installation script makes automatically during installation. They are documented here.

Manual OS configuration

There are several OS configuration changes that must be made manually. They are documented here. In this section I’ll go over each one.

Disk Readahead

Vertica requires that Disk Readahead be set to at least 2048. The installer reports this issue with the identifier: S0020.

For each drive in the Vertica system, Vertica recommends that you set the readahead value to at least 2048 for most deployments. The blockdev commands below immediately change the readahead value for the specified disks; the echo lines append those commands to /etc/rc.local so the setting is reapplied each time the system boots.

[root@ip-172-31-1-75 ~]# lsblk
NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
xvda    202:0    0   10G  0 disk
├─xvda1 202:1    0    1M  0 part
└─xvda2 202:2    0   10G  0 part /
xvdb    202:16   0  500G  0 disk /vdata
[root@ip-172-31-1-75 ~]# /sbin/blockdev --setra 2048 /dev/xvda
[root@ip-172-31-1-75 ~]# /sbin/blockdev --setra 2048 /dev/xvdb
[root@ip-172-31-1-75 ~]# echo '/sbin/blockdev --setra 2048 /dev/xvda' >> /etc/rc.local
[root@ip-172-31-1-75 ~]# echo '/sbin/blockdev --setra 2048 /dev/xvdb' >> /etc/rc.local
[root@ip-172-31-1-75 ~]#
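To confirm the change took effect, each device’s readahead can be read back from sysfs (a minimal sketch; note that blockdev works in 512-byte sectors, so 2048 sectors corresponds to the 1024 kB that sysfs reports):

```shell
# Print each block device's current readahead. read_ahead_kb is in kB;
# 2048 sectors * 512 bytes = 1024 kB. 'blockdev --getra /dev/xvdb' shows sectors.
for dev in /sys/block/*; do
    [ -r "$dev/queue/read_ahead_kb" ] || continue
    echo "$(basename "$dev"): $(cat "$dev/queue/read_ahead_kb") kB"
done
```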

Per the Vertica documentation, if you are using Red Hat 7.0 or CentOS 7.0 or higher, run the following command as root or via sudo to make rc.local executable:

[root@ip-172-31-1-75 ~]# ll /etc/rc.d/rc.local
-rw-r--r--. 1 root root 550 Aug 15 23:33 /etc/rc.d/rc.local
[root@ip-172-31-1-75 ~]# chmod +x /etc/rc.d/rc.local
[root@ip-172-31-1-75 ~]# ll /etc/rc.d/rc.local
-rwxr-xr-x. 1 root root 550 Aug 15 23:33 /etc/rc.d/rc.local
[root@ip-172-31-1-75 ~]#

I/O Scheduling

Vertica requires that I/O Scheduling be set to deadline or noop. The installer checks what scheduler the system is using, reporting an unsupported scheduler issue with identifier: S0150. If the installer cannot detect the type of scheduler in use (typically if your system is using a RAID array), it reports that issue with identifier: S0151.

If your system is not using a RAID array, then complete the following steps to change your system to a supported I/O Scheduler. If you are using a RAID array, then consult your RAID vendor documentation for the best performing scheduler for your hardware. More info here.

However, in RHEL 8 the available I/O scheduler algorithms have changed. More info here.

I am going to leave the I/O scheduler at mq-deadline.

[root@ip-172-31-1-75 ~]# cat /sys/block/xvdb/queue/scheduler
[mq-deadline] kyber bfq none
[root@ip-172-31-1-75 ~]#
[root@ip-172-31-1-75 ~]# cat /sys/block/*/queue/scheduler
[mq-deadline] kyber bfq none
[mq-deadline] kyber bfq none
[root@ip-172-31-1-75 ~]#

This MAY cause an error when installing Vertica on RHEL 8.

If the I/O scheduler needs to be changed, we can use the following commands (note that on RHEL 8 the valid value is mq-deadline, and the device name should match your system, e.g. xvdb rather than sda).

echo deadline > /sys/block/sda/queue/scheduler
echo 'echo deadline > /sys/block/sda/queue/scheduler' >> /etc/rc.local

Enabling or Disabling Transparent Hugepages

Per the Vertica documentation, Transparent Hugepages must be enabled on RHEL 7.x and later; on earlier RHEL versions it must be disabled.

[root@ip-172-31-1-75 ~]#  cat /sys/kernel/mm/transparent_hugepage/enabled
[always] madvise never
[root@ip-172-31-1-75 ~]#

Check for Swappiness

The swappiness kernel parameter defines how aggressively, and how often, the kernel copies RAM contents to swap space. Vertica recommends a value of 1. We can check and set swappiness in either of the following ways.

[root@ip-172-31-1-75 ~]# cat /sys/fs/cgroup/memory/memory.swappiness
30
[root@ip-172-31-1-75 ~]# cat /proc/sys/vm/swappiness
30
[root@ip-172-31-1-75 ~]#

Per archlinux.org documentation, swappiness can be set using sysctl command or by modifying /etc/sysctl.d/99-swappiness.conf.
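Following that drop-in convention, the persistent setting can live in its own file (the 99-swappiness.conf filename is just a convention, not a requirement); a sketch of the file:

```
# /etc/sysctl.d/99-swappiness.conf
vm.swappiness = 1
```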

[root@ip-172-31-1-75 ~]# cat /sys/fs/cgroup/memory/memory.swappiness
30
[root@ip-172-31-1-75 ~]# cat /proc/sys/vm/swappiness
30
[root@ip-172-31-1-75 ~]# sysctl -w vm.swappiness=10
vm.swappiness = 10
[root@ip-172-31-1-75 ~]# cat /sys/fs/cgroup/memory/memory.swappiness
10
[root@ip-172-31-1-75 ~]# cat /proc/sys/vm/swappiness
10
[root@ip-172-31-1-75 ~]# sysctl -w vm.swappiness=1
vm.swappiness = 1
[root@ip-172-31-1-75 ~]# cat /proc/sys/vm/swappiness
1
[root@ip-172-31-1-75 ~]# cat /sys/fs/cgroup/memory/memory.swappiness
1
[root@ip-172-31-1-75 ~]#

Also, we’ll update /etc/sysctl.conf with the swappiness value so that it persists across reboots.

[root@ip-172-31-1-75 ~]# cat /etc/sysctl.conf
# sysctl settings are defined through files in
# /usr/lib/sysctl.d/, /run/sysctl.d/, and /etc/sysctl.d/.
#
# Vendors settings live in /usr/lib/sysctl.d/.
# To override a whole file, create a new file with the same in
# /etc/sysctl.d/ and put new settings there. To override
# only specific settings, add a file with a lexically later
# name in /etc/sysctl.d/ and put new settings there.
#
# For more information, see sysctl.conf(5) and sysctl.d(5).
[root@ip-172-31-1-75 ~]# vi /etc/sysctl.conf
[root@ip-172-31-1-75 ~]# cat /etc/sysctl.conf
# sysctl settings are defined through files in
# /usr/lib/sysctl.d/, /run/sysctl.d/, and /etc/sysctl.d/.
#
# Vendors settings live in /usr/lib/sysctl.d/.
# To override a whole file, create a new file with the same in
# /etc/sysctl.d/ and put new settings there. To override
# only specific settings, add a file with a lexically later
# name in /etc/sysctl.d/ and put new settings there.
#
# For more information, see sysctl.conf(5) and sysctl.d(5).
vm.swappiness = 1
[root@ip-172-31-1-75 ~]#

Enabling Network Time Protocol (NTP)

The Network Time Protocol daemon (ntpd) or chronyd must be running on all of the hosts in the cluster so that their clocks stay synchronized. The spread daemon relies on all of the nodes having synchronized clocks for timing purposes. If your nodes are not running NTP/chrony, the installation can fail with a spread configuration error or other errors.

[root@ip-172-31-1-75 ~]# chkconfig --list ntpd

Note: This output shows SysV services only and does not include native
      systemd services. SysV configuration data might be overridden by native
      systemd configuration.

      If you want to list systemd services use 'systemctl list-unit-files'.
      To see services enabled on particular target use
      'systemctl list-dependencies [target]'.

error reading information on service ntpd: No such file or directory
[root@ip-172-31-1-75 ~]#
[root@ip-172-31-1-75 ~]# systemctl list-unit-files ntpd

0 unit files listed.
[root@ip-172-31-1-75 ~]#

NTPD is NOT running. So let us check for chronyd.

[root@ip-172-31-1-75 ~]# systemctl status chronyd
● chronyd.service - NTP client/server
   Loaded: loaded (/usr/lib/systemd/system/chronyd.service; enabled; vendor preset: enabled)
   Active: active (running) since Wed 2019-08-14 15:42:18 UTC; 2 days ago
     Docs: man:chronyd(8)
           man:chrony.conf(5)
 Main PID: 654 (chronyd)
    Tasks: 1 (limit: 26213)
   Memory: 2.0M
   CGroup: /system.slice/chronyd.service
           └─654 /usr/sbin/chronyd

Aug 14 15:43:31 ip-172-31-1-75.us-east-2.compute.internal chronyd[654]: System clock TAI offset set to 37 seconds
Aug 14 15:47:51 ip-172-31-1-75.us-east-2.compute.internal chronyd[654]: Selected source 208.75.88.4
Aug 14 19:18:04 ip-172-31-1-75.us-east-2.compute.internal chronyd[654]: Selected source 199.233.236.226
Aug 15 02:45:26 ip-172-31-1-75.us-east-2.compute.internal chronyd[654]: Selected source 72.87.88.202
Aug 15 12:19:14 ip-172-31-1-75.us-east-2.compute.internal chronyd[654]: Selected source 199.233.236.226
Aug 16 01:57:21 ip-172-31-1-75.us-east-2.compute.internal chronyd[654]: Selected source 72.87.88.202
Aug 16 03:14:50 ip-172-31-1-75.us-east-2.compute.internal chronyd[654]: Selected source 208.75.88.4
Aug 16 05:11:09 ip-172-31-1-75.us-east-2.compute.internal chronyd[654]: Selected source 72.87.88.202
Aug 16 07:29:16 ip-172-31-1-75.us-east-2.compute.internal chronyd[654]: Selected source 208.75.88.4
Aug 16 11:08:50 ip-172-31-1-75.us-east-2.compute.internal chronyd[654]: Selected source 199.233.236.226
[root@ip-172-31-1-75 ~]#

We are all set: chronyd is enabled by default in RHEL 8.

SELinux Configuration

Vertica does not support SELinux except when SELinux is running in permissive mode. If the installer detects that SELinux is installed but cannot determine the mode, it reports issue S0080; if the mode can be determined and is not permissive, it reports issue S0081. Let’s verify, and change the SELinux mode if required.

[root@ip-172-31-1-75 ~]# cat /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=enforcing
# SELINUXTYPE= can take one of these three values:
#     targeted - Targeted processes are protected,
#     minimum - Modification of targeted policy. Only selected processes are protected.
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted
[root@ip-172-31-1-75 ~]#

SELinux is set to enforcing, so we’ll change it to permissive, which is supported by Vertica.

[root@ip-172-31-1-75 ~]# setenforce Permissive
[root@ip-172-31-1-75 ~]# vi /etc/selinux/config
[root@ip-172-31-1-75 ~]# cat /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=Permissive
# SELINUXTYPE= can take one of these three values:
#     targeted - Targeted processes are protected,
#     minimum - Modification of targeted policy. Only selected processes are protected.
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted
[root@ip-172-31-1-75 ~]#
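To double-check the runtime mode after the change, getenforce should now report Permissive (a minimal sketch that degrades gracefully on hosts without the SELinux userspace tools):

```shell
# Report the current SELinux mode; fall back when the tooling is absent.
if command -v getenforce >/dev/null 2>&1; then
    mode=$(getenforce)
else
    mode="unavailable (SELinux tools not installed)"
fi
echo "SELinux mode: $mode"
```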

CPU Frequency Scaling

The installer allows CPU frequency scaling to be enabled when the cpufreq scaling governor is set to performance. If the governor is set to ondemand and ignore_nice_load is 1 (true), the installer fails with error S0140; if the governor is ondemand and ignore_nice_load is 0 (false), the installer warns with identifier S0141. In general, if you do not require CPU frequency scaling, disable it so it does not impact system performance.

For this install I’ll leave the CPU frequency governors at their defaults.
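For the record, the current governor can be inspected as follows when the platform exposes the cpufreq interface at all (Xen-based t2 instances typically do not, which is why the installer check passes untouched):

```shell
# List each CPU's scaling governor if the cpufreq interface exists.
found=0
for g in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
    [ -r "$g" ] || continue
    found=1
    echo "$g: $(cat "$g")"
done
[ "$found" -eq 1 ] || echo "no cpufreq interface exposed on this host"
```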

Disabling Defrag 

On all Red Hat and CentOS systems, you must disable the defrag utility to meet Vertica configuration requirements. Below, I check the current defrag setting, append the disabling snippet to rc.local (compare the two cat outputs) so it persists across reboots, and then apply it immediately with echo.

[root@ip-172-31-1-75 ~]# cat /sys/kernel/mm/transparent_hugepage/defrag
always defer defer+madvise [madvise] never
[root@ip-172-31-1-75 ~]# cat /etc/rc.local
#!/bin/bash
# THIS FILE IS ADDED FOR COMPATIBILITY PURPOSES
#
# It is highly advisable to create own systemd services or udev rules
# to run scripts during boot instead of using this file.
#
# In contrast to previous versions due to parallel execution during boot
# this script will NOT be run after all other services.
#
# Please note that you must run 'chmod +x /etc/rc.d/rc.local' to ensure
# that this script will be executed during boot.

touch /var/lock/subsys/local

/sbin/blockdev --setra 2048 /dev/xvda
/sbin/blockdev --setra 2048 /dev/xvdb
[root@ip-172-31-1-75 ~]# ll /etc/rc.local
lrwxrwxrwx. 1 root root 13 May  2 11:17 /etc/rc.local -> rc.d/rc.local
[root@ip-172-31-1-75 ~]# cat /etc/rc.local
#!/bin/bash
# THIS FILE IS ADDED FOR COMPATIBILITY PURPOSES
#
# It is highly advisable to create own systemd services or udev rules
# to run scripts during boot instead of using this file.
#
# In contrast to previous versions due to parallel execution during boot
# this script will NOT be run after all other services.
#
# Please note that you must run 'chmod +x /etc/rc.d/rc.local' to ensure
# that this script will be executed during boot.

touch /var/lock/subsys/local

/sbin/blockdev --setra 2048 /dev/xvda
/sbin/blockdev --setra 2048 /dev/xvdb
if test -f /sys/kernel/mm/transparent_hugepage/enabled; then
    echo never > /sys/kernel/mm/transparent_hugepage/defrag
fi
[root@ip-172-31-1-75 ~]# echo never > /sys/kernel/mm/transparent_hugepage/defrag
[root@ip-172-31-1-75 ~]#

Install Support Tools

Vertica suggests that the following tools are installed so support can assist in troubleshooting your system if any issues arise:

  • pstack (or gstack) package. Identified by issue S0040 when not installed.
    • On Red Hat 7 and CentOS 7 systems, the pstack package is installed as part of the gdb package.
  • mcelog package. Identified by issue S0041 when not installed.
  • sysstat package. Identified by issue S0045 when not installed.

[root@ip-172-31-1-75 ~]# yum install gdb
Last metadata expiration check: 1:58:12 ago on Fri 16 Aug 2019 04:25:53 PM UTC.
Dependencies resolved.
. . . SNIP SNIP . . .
Complete!
[root@ip-172-31-1-75 ~]#
[root@ip-172-31-1-75 ~]# yum install mcelog
Last metadata expiration check: 2:00:01 ago on Fri 16 Aug 2019 04:25:53 PM UTC.
Dependencies resolved.
. . . SNIP SNIP . . .
Installed:
  mcelog-3:159-1.el8.x86_64

Complete!
[root@ip-172-31-1-75 ~]#
[root@ip-172-31-1-75 ~]# yum install sysstat
Last metadata expiration check: 2:01:04 ago on Fri 16 Aug 2019 04:25:53 PM UTC.
Dependencies resolved.
. . . SNIP SNIP . . .
Complete!
[root@ip-172-31-1-75 ~]#

Now we have completed all the required configuration for the Vertica installation. Next, we’ll upload via SFTP the Vertica Community Edition installation package, which I downloaded from the Vertica portal (login required).

sudha@DESKTOP-61047H4 MINGW64 ~
$ sftp  -i Downloads/tspawskeyohio.pem  ec2-user@ec2-18-219-218-102.us-east-2.compute.amazonaws.com
Connected to ec2-user@ec2-18-219-218-102.us-east-2.compute.amazonaws.com.
sftp> ls
sftp> put Downloads/vertica-9.2.1-0.x86_64.RHEL6.rpm /tmp/.
Uploading Downloads/vertica-9.2.1-0.x86_64.RHEL6.rpm to /tmp/./vertica-9.2.1-0.x86_64.RHEL6.rpm
Downloads/vertica-9.2.1-0.x86_64.RHEL6.rpm                                                                                                                  100%  244MB   1.3MB/s   03:12
sftp> put Downloads/vertica-console-9.2.1-0.x86_64.RHEL6.rpm /tmp/.
Uploading Downloads/vertica-console-9.2.1-0.x86_64.RHEL6.rpm to /tmp/./vertica-console-9.2.1-0.x86_64.RHEL6.rpm
Downloads/vertica-console-9.2.1-0.x86_64.RHEL6.rpm                                                                                                          100%  239MB   1.4MB/s   02:55
sftp> quit;
Invalid command.
sftp> quit

sudha@DESKTOP-61047H4 MINGW64 ~

Installing Vertica

Now we’ll install the Vertica server on the host.

[root@ip-172-31-1-75 ~]# ll /tmp
total 494744
drwx------. 3 root     root            17 Aug 14 15:42 systemd-private-60c405a5de6b4babbc838020964caebb-chronyd.service-nEfIPg
-rw-r--r--. 1 ec2-user ec2-user 255623796 Aug 16 18:58 vertica-9.2.1-0.x86_64.RHEL6.rpm
-rw-r--r--. 1 ec2-user ec2-user 250990248 Aug 16 19:12 vertica-console-9.2.1-0.x86_64.RHEL6.rpm
[root@ip-172-31-1-75 ~]# rpm -Uvh /tmp/vertica-9.2.1-0.x86_64.RHEL6.rpm
error: Failed dependencies:
        dialog is needed by vertica-9.2.1-0.x86_64
[root@ip-172-31-1-75 ~]#

One of the dependency packages, dialog, is missing. We’ll install it now.

[root@ip-172-31-1-75 ~]# yum install dialog
Last metadata expiration check: 2:55:08 ago on Fri 16 Aug 2019 04:25:53 PM UTC.
Dependencies resolved.
. . . SNIP SNIP . . .
Installed:
  dialog-1.3-13.20171209.el8.x86_64

Complete!
[root@ip-172-31-1-75 ~]#

Now we’ll retry installing Vertica Server…

[root@ip-172-31-1-75 ~]# rpm -Uvh /tmp/vertica-9.2.1-0.x86_64.RHEL6.rpm
Verifying...                          ################################# [100%]
Preparing...                          ################################# [100%]
Updating / installing...
   1:vertica-9.2.1-0                  ################################# [100%]

Vertica Analytic Database V9.2.1-0 successfully installed on host ip-172-31-1-75.us-east-2.compute.internal

To complete your NEW installation and configure the cluster, run:
 /opt/vertica/sbin/install_vertica

To complete your Vertica UPGRADE, run:
 /opt/vertica/sbin/update_vertica

----------------------------------------------------------------------------------
Important
----------------------------------------------------------------------------------
Before upgrading Vertica, you must backup your database.  After you restart your
database after upgrading, you cannot revert to a previous Vertica software version.
----------------------------------------------------------------------------------

View the latest Vertica documentation at https://www.vertica.com/documentation/vertica/

[root@ip-172-31-1-75 ~]#

With RHEL 8, all initial verification succeeded!

In my next post, I’ll continue with running the Vertica install script on the host, which will build the cluster.
