Create a 1-node Vertica 9.2 cluster in AWS EC2 on RHEL 7.5 (Part 1)

In my previous post I described attempting to install Vertica 9.2 on RHEL 8.0. It failed: RHEL 8.0 is NOT supported yet.

In this post I’ll describe the steps to create a single-node Vertica 9.2 cluster on AWS using EC2 (t2.large instance type) with RHEL 7.5. I have already instantiated an EC2 box with an additional 500 GB EBS HDD volume.

sudha@DESKTOP-61047H4 MINGW64 ~
$ ssh -i Downloads/tspawskeyohio.pem  ec2-user@ec2-18-191-186-157.us-east-2.compute.amazonaws.com
 Last login: Fri Aug 16 23:21:10 2019 from c-73-187-197-50.hsd1.pa.comcast.net
 [ec2-user@ip-172-31-10-2 ~]$ uname
 Linux
 [ec2-user@ip-172-31-10-2 ~]$ uname -a
 Linux ip-172-31-10-2.us-east-2.compute.internal 3.10.0-862.el7.x86_64 #1 SMP Wed Mar 21 18:14:51 EDT 2018 x86_64 x86_64 x86_64 GNU/Linux
 [ec2-user@ip-172-31-10-2 ~]$ uname --kernel-name --kernel-release --machine
 Linux 3.10.0-862.el7.x86_64 x86_64
 [ec2-user@ip-172-31-10-2 ~]$ cat /etc/os-release
 NAME="Red Hat Enterprise Linux Server"
 VERSION="7.5 (Maipo)"
 ID="rhel"
 ID_LIKE="fedora"
 VARIANT="Server"
 VARIANT_ID="server"
 VERSION_ID="7.5"
 PRETTY_NAME="Red Hat Enterprise Linux Server 7.5 (Maipo)"
 ANSI_COLOR="0;31"
 CPE_NAME="cpe:/o:redhat:enterprise_linux:7.5:GA:server"
 HOME_URL="https://www.redhat.com/"
 BUG_REPORT_URL="https://bugzilla.redhat.com/"
 REDHAT_BUGZILLA_PRODUCT="Red Hat Enterprise Linux 7"
 REDHAT_BUGZILLA_PRODUCT_VERSION=7.5
 REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux"
 REDHAT_SUPPORT_PRODUCT_VERSION="7.5"
 [ec2-user@ip-172-31-10-2 ~]$

Confirm Perl 5 is installed

[ec2-user@ip-172-31-10-2 ~]$ perl --version
-bash: perl: command not found
[ec2-user@ip-172-31-10-2 ~]$

So Perl is NOT installed. Let’s install it and verify.

[root@ip-172-31-10-2 ~]# yum install perl
 Loaded plugins: amazon-id, rhui-lb, search-disabled-repos
. . . SNIP SNIP . . . 
Complete!
[root@ip-172-31-10-2 ~]#
[root@ip-172-31-10-2 ~]# perl --version
 This is perl 5, version 16, subversion 3 (v5.16.3) built for x86_64-linux-thread-multi
 (with 39 registered patches, see perl -V for more detail)
 Copyright 1987-2012, Larry Wall
 Perl may be copied only under the terms of either the Artistic License or the
 GNU General Public License, which may be found in the Perl 5 source kit.
 Complete documentation for Perl, including FAQ lists, should be found on
 this system using "man perl" or "perldoc perl".  If you have access to the
 Internet, point your browser at http://www.perl.org/, the Perl Home Page.
 [root@ip-172-31-10-2 ~]#

Disk storage preparation

I created an EBS-backed instance; the additional EBS volume must be formatted and mounted on the EC2 instance. (The EBS volume and EC2 instance storage are also visible on the AWS console.)

[root@ip-172-31-10-2 ~]# lsblk
 NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
 xvda    202:0    0   10G  0 disk
 ├─xvda1 202:1    0    1M  0 part
 └─xvda2 202:2    0   10G  0 part /
 xvdb    202:16   0  500G  0 disk
 [root@ip-172-31-10-2 ~]# file -s /dev/xvdb
 /dev/xvdb: data
 [root@ip-172-31-10-2 ~]# mkfs -t xfs /dev/xvdb
 meta-data=/dev/xvdb              isize=512    agcount=4, agsize=32768000 blks
          =                       sectsz=512   attr=2, projid32bit=1
          =                       crc=1        finobt=0, sparse=0
 data     =                       bsize=4096   blocks=131072000, imaxpct=25
          =                       sunit=0      swidth=0 blks
 naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
 log      =internal log           bsize=4096   blocks=64000, version=2
          =                       sectsz=512   sunit=0 blks, lazy-count=1
 realtime =none                   extsz=4096   blocks=0, rtextents=0
 [root@ip-172-31-10-2 ~]# mkdir /data
 mkdir: cannot create directory ‘/data’: File exists
 [root@ip-172-31-10-2 ~]# mkdir /vdata
 [root@ip-172-31-10-2 ~]# mount /dev/xvdb /vdata
 [root@ip-172-31-10-2 ~]# lsblk
 NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
 xvda    202:0    0   10G  0 disk
 ├─xvda1 202:1    0    1M  0 part
 └─xvda2 202:2    0   10G  0 part /
 xvdb    202:16   0  500G  0 disk /vdata
 [root@ip-172-31-10-2 ~]# cp /etc/fstab /etc/fstab.orig
 [root@ip-172-31-10-2 ~]# blkid
 /dev/xvda2: UUID="50a9826b-3a50-44d0-ad12-28f2056e9927" TYPE="xfs" PARTUUID="cc8f8c5a-3a04-4a6a-aa62-ed173ee9aede"
 /dev/xvda1: PARTUUID="c907a1e1-acff-46f8-a441-e67e98945e91"
 /dev/xvdb: UUID="fb21b2b5-009c-4aa8-8650-8cc49ec0d951" TYPE="xfs"
 [root@ip-172-31-10-2 ~]# cat /etc/fstab
 #
 # /etc/fstab
 # Created by anaconda on Fri Mar 23 17:41:14 2018
 #
 # Accessible filesystems, by reference, are maintained under '/dev/disk'
 # See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
 #
 UUID=50a9826b-3a50-44d0-ad12-28f2056e9927 /                       xfs     defaults        0 0
 [root@ip-172-31-10-2 ~]# vi /etc/fstab
 [root@ip-172-31-10-2 ~]# cat /etc/fstab
 #
 # /etc/fstab
 # Created by anaconda on Fri Mar 23 17:41:14 2018
 #
 # Accessible filesystems, by reference, are maintained under '/dev/disk'
 # See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
 #
 UUID=50a9826b-3a50-44d0-ad12-28f2056e9927 /                       xfs     defaults        0 0
 UUID=fb21b2b5-009c-4aa8-8650-8cc49ec0d951 /vdata                  xfs     defaults,nofail 0 2
 [root@ip-172-31-10-2 ~]#

Let’s verify that the fstab entry works, so the 500 GB EBS mount will survive a reboot, by unmounting it and re-running mount -a.

[root@ip-172-31-10-2 ~]# lsblk
 NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
 xvda    202:0    0   10G  0 disk
 ├─xvda1 202:1    0    1M  0 part
 └─xvda2 202:2    0   10G  0 part /
 xvdb    202:16   0  500G  0 disk /vdata
 [root@ip-172-31-10-2 ~]# umount /vdata
 [root@ip-172-31-10-2 ~]# lsblk
 NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
 xvda    202:0    0   10G  0 disk
 ├─xvda1 202:1    0    1M  0 part
 └─xvda2 202:2    0   10G  0 part /
 xvdb    202:16   0  500G  0 disk
 [root@ip-172-31-10-2 ~]# mount -a
 [root@ip-172-31-10-2 ~]# lsblk
 NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
 xvda    202:0    0   10G  0 disk
 ├─xvda1 202:1    0    1M  0 part
 └─xvda2 202:2    0   10G  0 part /
 xvdb    202:16   0  500G  0 disk /vdata
 [root@ip-172-31-10-2 ~]#

Network Configuration Check

Vertica requires several ports to be available on each host, as documented in the Vertica documentation.
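
The key ports (verify the full list against the Vertica documentation for your version) include 5433 (client), 5444 (agent), 5450 (MC), and 4803/4804 (spread). A quick sketch to confirm none of them is already in use on the host:

for p in 5433 5444 5450 4803 4804; do
    # ss lists listening TCP/UDP sockets; a hit means the port is taken
    ss -lntu | grep -q ":$p " && echo "port $p in use" || echo "port $p free"
done

On EC2, the security group must also allow these ports between cluster nodes.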

OS Configuration

Vertica requires a swap file/partition of at least 2 GB.

I’ll verify that no swap file/space is defined by default, and then define a 4 GB swap file.

$ ssh -i Downloads/tspawskeyohio.pem  ec2-user@ec2-18-191-186-157.us-east-2.compute.amazonaws.com
Last login: Sat Aug 17 00:48:52 2019 from c-73-187-197-50.hsd1.pa.comcast.net
[ec2-user@ip-172-31-10-2 ~]$ sudo su -
Last login: Sat Aug 17 00:56:38 UTC 2019 on pts/0
[root@ip-172-31-10-2 ~]# free
              total        used        free      shared  buff/cache   available
Mem:        8008344      121928     7071540       16848      814876     7596716
Swap:             0           0           0
[root@ip-172-31-10-2 ~]# cat /proc/swaps
Filename                                Type            Size    Used    Priority
[root@ip-172-31-10-2 ~]# dd if=/dev/zero of=/swapfile bs=1G count=4
4+0 records in
4+0 records out
4294967296 bytes (4.3 GB) copied, 21.1704 s, 203 MB/s
[root@ip-172-31-10-2 ~]# chmod 600 /swapfile
[root@ip-172-31-10-2 ~]# mkswap /swapfile
Setting up swapspace version 1, size = 4194300 KiB
no label, UUID=0a9579da-77de-439f-b0bf-bbbe13e1aef9
[root@ip-172-31-10-2 ~]# swapon /swapfile
[root@ip-172-31-10-2 ~]# swapon -s
Filename                                Type            Size    Used    Priority
/swapfile                               file    4194300 0       -1
[root@ip-172-31-10-2 ~]# blkid
/dev/xvda2: UUID="50a9826b-3a50-44d0-ad12-28f2056e9927" TYPE="xfs" PARTUUID="cc8f8c5a-3a04-4a6a-aa62-ed173ee9aede"
/dev/xvdb: UUID="fb21b2b5-009c-4aa8-8650-8cc49ec0d951" TYPE="xfs"
/dev/xvda1: PARTUUID="c907a1e1-acff-46f8-a441-e67e98945e91"
[root@ip-172-31-10-2 ~]# free
              total        used        free      shared  buff/cache   available
Mem:        8008344      125648     2755044       16848     5127652     7534200
Swap:       4194300           0     4194300
[root@ip-172-31-10-2 ~]# vi /etc/fstab
[root@ip-172-31-10-2 ~]# cat /etc/fstab

#
# /etc/fstab
# Created by anaconda on Fri Mar 23 17:41:14 2018
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=50a9826b-3a50-44d0-ad12-28f2056e9927 /                       xfs     defaults        0 0
UUID=fb21b2b5-009c-4aa8-8650-8cc49ec0d951 /vdata                  xfs     defaults,nofail 0 2
/swapfile          swap            swap    defaults        0 0
[root@ip-172-31-10-2 ~]#

Now the swap space definition will persist across reboots.
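
After a reboot, a quick sanity check (run as root) that the swap file came back automatically:

swapon -s    # should list /swapfile
free -h      # Swap total should show ~4.0G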

Automatic OS configuration

There are several OS configuration changes that the Vertica installation script makes automatically during installation. They are documented in the Vertica documentation.

Manual OS configuration

There are several OS configuration changes that must be made manually. They are documented in the Vertica documentation. In this section I’ll go over each one.

Disk Readahead

Vertica requires that Disk Readahead be set to at least 2048. The installer reports this issue with the identifier: S0020.

For each drive in the Vertica system, Vertica recommends setting the readahead value to at least 2048 for most deployments. The blockdev command changes the readahead value for the specified disk immediately; adding the same command to /etc/rc.local applies the setting at each boot.

I’ll set the readahead to 2048 (512-byte sectors, i.e. 1 MB) on both devices.

[root@ip-172-31-10-2 ~]# lsblk
 NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
 xvda    202:0    0   10G  0 disk
 ├─xvda1 202:1    0    1M  0 part
 └─xvda2 202:2    0   10G  0 part /
 xvdb    202:16   0  500G  0 disk /vdata
 [root@ip-172-31-10-2 ~]# /sbin/blockdev --setra 2048 /dev/xvda
 [root@ip-172-31-10-2 ~]# /sbin/blockdev --setra 2048 /dev/xvdb
 [root@ip-172-31-10-2 ~]#

To persist these values across reboots, I’ll append the commands to /etc/rc.local.

[root@ip-172-31-10-2 ~]# echo '/sbin/blockdev --setra 2048 /dev/xvda' >> /etc/rc.local
 [root@ip-172-31-10-2 ~]# echo '/sbin/blockdev --setra 2048 /dev/xvdb' >> /etc/rc.local
 [root@ip-172-31-10-2 ~]# cat /etc/rc.local
 #!/bin/bash
 # THIS FILE IS ADDED FOR COMPATIBILITY PURPOSES
 #
 # It is highly advisable to create own systemd services or udev rules
 # to run scripts during boot instead of using this file.
 #
 # In contrast to previous versions due to parallel execution during boot
 # this script will NOT be run after all other services.
 #
 # Please note that you must run 'chmod +x /etc/rc.d/rc.local' to ensure
 # that this script will be executed during boot.
 touch /var/lock/subsys/local
 /sbin/blockdev --setra 2048 /dev/xvda
 /sbin/blockdev --setra 2048 /dev/xvdb
 [root@ip-172-31-10-2 ~]#
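
Note the warning in the file itself: on RHEL 7, rc.local is not executable by default and will not run at boot unless you make it so. A quick sketch to cover that and verify the current values:

chmod +x /etc/rc.d/rc.local
# blockdev --getra reports the readahead in 512-byte sectors; expect 2048
/sbin/blockdev --getra /dev/xvda
/sbin/blockdev --getra /dev/xvdb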

I/O Scheduling

Vertica requires that I/O Scheduling be set to deadline or noop. The installer checks what scheduler the system is using, reporting an unsupported scheduler issue with identifier: S0150. If the installer cannot detect the type of scheduler in use (typically if your system is using a RAID array), it reports that issue with identifier: S0151.

If your system is not using a RAID array, change to a supported I/O scheduler as shown below. If you are using a RAID array, consult your RAID vendor’s documentation for the best-performing scheduler for your hardware. More information is in the Vertica documentation.

The default I/O scheduler in RHEL 7.5 is deadline, which Vertica accepts, so I’ll leave the value as is.

[root@ip-172-31-10-2 ~]# cat /sys/block/xvdb/queue/scheduler
 noop [deadline] cfq
 [root@ip-172-31-10-2 ~]#
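
Had the scheduler not been deadline or noop, a sketch of how to change it (the echo takes effect immediately but does not survive a reboot; persist it via /etc/rc.local as above, or with the elevator= kernel boot parameter in GRUB):

echo deadline > /sys/block/xvdb/queue/scheduler
cat /sys/block/xvdb/queue/scheduler   # deadline should now be bracketed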

Enabling or Disabling Transparent Hugepages

Per the Vertica documentation, Transparent Hugepages (THP) must be enabled on RHEL 7.x; on prior RHEL versions it must be disabled. It is already enabled here:

[root@ip-172-31-10-2 ~]# cat /sys/kernel/mm/transparent_hugepage/enabled
 [always] madvise never
 [root@ip-172-31-10-2 ~]#
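
The bracketed [always] shows THP is already enabled, so nothing to change. If it were not, a sketch to enable it now and persist it via /etc/rc.local (mirroring the defrag snippet later in this post):

echo always > /sys/kernel/mm/transparent_hugepage/enabled
echo 'echo always > /sys/kernel/mm/transparent_hugepage/enabled' >> /etc/rc.local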

Check for Swappiness

The swappiness kernel parameter controls how aggressively the kernel copies RAM contents to swap space. Vertica recommends a value of 1. I’ll set it persistently in /etc/sysctl.conf; applying it immediately is shown after the transcript.

[root@ip-172-31-10-2 ~]# cat /etc/sysctl.conf
 # sysctl settings are defined through files in
 # /usr/lib/sysctl.d/, /run/sysctl.d/, and /etc/sysctl.d/.
 #
 # Vendors settings live in /usr/lib/sysctl.d/.
 # To override a whole file, create a new file with the same in
 # /etc/sysctl.d/ and put new settings there. To override
 # only specific settings, add a file with a lexically later
 # name in /etc/sysctl.d/ and put new settings there.
 #
 # For more information, see sysctl.conf(5) and sysctl.d(5).
 [root@ip-172-31-10-2 ~]# vi /etc/sysctl.conf


 [root@ip-172-31-10-2 ~]# cat /etc/sysctl.conf
 # sysctl settings are defined through files in
 # /usr/lib/sysctl.d/, /run/sysctl.d/, and /etc/sysctl.d/.
 #
 # Vendors settings live in /usr/lib/sysctl.d/.
 # To override a whole file, create a new file with the same in
 # /etc/sysctl.d/ and put new settings there. To override
 # only specific settings, add a file with a lexically later
 # name in /etc/sysctl.d/ and put new settings there.
 #
 # For more information, see sysctl.conf(5) and sysctl.d(5).
 vm.swappiness = 1
 [root@ip-172-31-10-2 ~]#
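
Editing /etc/sysctl.conf alone only takes effect at the next boot; to apply and verify immediately:

sysctl -p                      # reload settings from /etc/sysctl.conf
cat /proc/sys/vm/swappiness    # expect 1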

Enabling Network Time Protocol (NTP)

The network time protocol daemon (NTP) or chrony must be running on all hosts in the cluster so that their clocks stay synchronized. The spread daemon relies on all nodes having synchronized clocks for timing purposes. If your nodes do not have NTP/chrony running, the installation can fail with a spread configuration error or other errors. I’ll check for chronyd, which is the default in RHEL 7.5.

[root@ip-172-31-10-2 ~]# systemctl status chronyd
 ● chronyd.service - NTP client/server
    Loaded: loaded (/usr/lib/systemd/system/chronyd.service; enabled; vendor preset: enabled)
    Active: active (running) since Fri 2019-08-16 22:55:06 UTC; 1 day 2h ago
      Docs: man:chronyd(8)
            man:chrony.conf(5)
  Main PID: 498 (chronyd)
    CGroup: /system.slice/chronyd.service
            └─498 /usr/sbin/chronyd
 Aug 16 22:55:06 localhost.localdomain systemd[1]: Starting NTP client/server…
 Aug 16 22:55:06 localhost.localdomain chronyd[498]: chronyd version 3.2 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SECHASH +SIGND +ASYNCDNS +IPV6 +DEBUG)
 Aug 16 22:55:06 localhost.localdomain systemd[1]: Started NTP client/server.
 Aug 16 22:55:14 ip-172-31-10-2.us-east-2.compute.internal chronyd[498]: Selected source 165.227.216.233
 [root@ip-172-31-10-2 ~]#
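
chronyd is already enabled and running here. If it were not, a sketch to enable it (assuming the chrony package is installed) and confirm synchronization:

systemctl enable chronyd
systemctl start chronyd
chronyc tracking    # reports the reference source and clock offset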

SELinux Configuration

Vertica does not support SELinux except when SELinux is running in permissive mode. If the installer detects that SELinux is installed but cannot determine the mode, it reports the issue with identifier S0080. If the mode can be determined and it is not permissive, the issue is reported with identifier S0081. Let’s check the current mode and set SELinux to permissive if required.

[root@ip-172-31-10-2 ~]# cat /etc/selinux/config
 # This file controls the state of SELinux on the system.
 # SELINUX= can take one of these three values:
 #     enforcing - SELinux security policy is enforced.
 #     permissive - SELinux prints warnings instead of enforcing.
 #     disabled - No SELinux policy is loaded.
 SELINUX=enforcing
 # SELINUXTYPE= can take one of three values:
 #     targeted - Targeted processes are protected,
 #     minimum - Modification of targeted policy. Only selected processes are protected.
 #     mls - Multi Level Security protection.
 SELINUXTYPE=targeted
 [root@ip-172-31-10-2 ~]#

In RHEL 7.5 the default SELinux mode is enforcing and the type is targeted. I edited /etc/selinux/config to set the mode to permissive:

[root@ip-172-31-10-2 ~]# cat /etc/selinux/config
 # This file controls the state of SELinux on the system.
 # SELINUX= can take one of these three values:
 #     enforcing - SELinux security policy is enforced.
 #     permissive - SELinux prints warnings instead of enforcing.
 #     disabled - No SELinux policy is loaded.
 SELINUX=Permissive
 # SELINUXTYPE= can take one of three values:
 #     targeted - Targeted processes are protected,
 #     minimum - Modification of targeted policy. Only selected processes are protected.
 #     mls - Multi Level Security protection.
 SELINUXTYPE=targeted
 [root@ip-172-31-10-2 ~]#
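
The config-file change only takes effect at the next boot; to switch the running system to permissive immediately as well:

setenforce 0
getenforce    # should print Permissive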

CPU Frequency Scaling

The installer allows CPU frequency scaling to be enabled when the cpufreq scaling governor is set to performance. If the governor is set to ondemand and ignore_nice_load is 1 (true), the installer fails with error S0140. If the governor is set to ondemand and ignore_nice_load is 0 (false), the installer warns with identifier S0141. In general, if you do not require CPU frequency scaling, disable it so it does not impact system performance.

For this install I’ll leave the CPU frequency governors at their defaults.
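
For reference, a sketch to check the current governor; on many EC2 instance types (including t2), cpufreq is not exposed to the guest and the path simply will not exist:

cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor 2>/dev/null \
    || echo "cpufreq scaling not exposed on this instance"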

Disabling Defrag 

On all Red Hat and CentOS systems, you must disable the defrag utility to meet Vertica configuration requirements.

[root@ip-172-31-10-2 ~]# cat /sys/kernel/mm/transparent_hugepage/defrag
 [always] madvise never
 [root@ip-172-31-10-2 ~]# echo never > /sys/kernel/mm/transparent_hugepage/defrag
 [root@ip-172-31-10-2 ~]# cat /sys/kernel/mm/transparent_hugepage/defrag
 always madvise [never]
 [root@ip-172-31-10-2 ~]#

Now I’ll add the following code to /etc/rc.local so the value is re-applied at boot time (note it tests for the defrag file before writing to it):

if test -f /sys/kernel/mm/transparent_hugepage/defrag; then
   echo never > /sys/kernel/mm/transparent_hugepage/defrag
fi

Install Support Tools

Vertica suggests that the following tools be installed so support can assist in troubleshooting your system if any issues arise:

  • pstack (or gstack) package. Identified by issue S0040 when not installed.
    • On Red Hat 7 and CentOS 7 systems, the pstack package is installed as part of the gdb package.
  • mcelog package. Identified by issue S0041 when not installed.
  • sysstat package. Identified by issue S0045 when not installed.

I’ll install four support packages: gdb, mcelog, sysstat, and dialog.

[root@ip-172-31-10-2 ~]# yum install gdb
 Loaded plugins: amazon-id, rhui-lb, search-disabled-repos
 rhui-REGION-client-config-server-7                                                                                                                                     | 2.9 kB  00:00:00
 rhui-REGION-rhel-server-releases                                                                                                                                       | 3.5 kB  00:00:00
 rhui-REGION-rhel-server-rh-common                                                                                                                                      | 3.8 kB  00:00:00
 Resolving Dependencies
 --> Running transaction check
 ---> Package gdb.x86_64 0:7.6.1-115.el7 will be installed
 --> Finished Dependency Resolution
 Dependencies Resolved
 ==============================================================================================================================================================================================
  Package                            Arch                                  Version                                       Repository                                                       Size
 Installing:
  gdb                                x86_64                                7.6.1-115.el7                                 rhui-REGION-rhel-server-releases                                2.4 M
 Transaction Summary
 Install  1 Package
 Total download size: 2.4 M
 Installed size: 7.0 M
 Is this ok [y/d/N]: y
 Downloading packages:
 gdb-7.6.1-115.el7.x86_64.rpm                                                                                                                                           | 2.4 MB  00:00:00
 Running transaction check
 Running transaction test
 Transaction test succeeded
 Running transaction
   Installing : gdb-7.6.1-115.el7.x86_64                                                                                                                                                   1/1
   Verifying  : gdb-7.6.1-115.el7.x86_64                                                                                                                                                   1/1
 Installed:
   gdb.x86_64 0:7.6.1-115.el7
 Complete!
 [root@ip-172-31-10-2 ~]# yum install mcelog
 Loaded plugins: amazon-id, rhui-lb, search-disabled-repos
 Resolving Dependencies
 --> Running transaction check
 ---> Package mcelog.x86_64 3:144-10.94d853b2ea81.el7 will be installed
 --> Finished Dependency Resolution
 Dependencies Resolved
 ==============================================================================================================================================================================================
  Package                           Arch                              Version                                                Repository                                                   Size
 Installing:
  mcelog                            x86_64                            3:144-10.94d853b2ea81.el7                              rhui-REGION-rhel-server-releases                             79 k
 Transaction Summary
 Install  1 Package
 Total download size: 79 k
 Installed size: 179 k
 Is this ok [y/d/N]: y
 Downloading packages:
 mcelog-144-10.94d853b2ea81.el7.x86_64.rpm                                                                                                                              |  79 kB  00:00:00
 Running transaction check
 Running transaction test
 Transaction test succeeded
 Running transaction
   Installing : 3:mcelog-144-10.94d853b2ea81.el7.x86_64                                                                                                                                    1/1
   Verifying  : 3:mcelog-144-10.94d853b2ea81.el7.x86_64                                                                                                                                    1/1
 Installed:
   mcelog.x86_64 3:144-10.94d853b2ea81.el7
 Complete!
 [root@ip-172-31-10-2 ~]# yum install sysstat
 Loaded plugins: amazon-id, rhui-lb, search-disabled-repos
 Resolving Dependencies
 --> Running transaction check
 ---> Package sysstat.x86_64 0:10.1.5-18.el7 will be installed
 --> Processing Dependency: libsensors.so.4()(64bit) for package: sysstat-10.1.5-18.el7.x86_64
 --> Running transaction check
 ---> Package lm_sensors-libs.x86_64 0:3.4.0-8.20160601gitf9185e5.el7 will be installed
 --> Finished Dependency Resolution
 Dependencies Resolved
 ==============================================================================================================================================================================================
  Package                                 Arch                           Version                                                Repository                                                Size
 Installing:
  sysstat                                 x86_64                         10.1.5-18.el7                                          rhui-REGION-rhel-server-releases                         316 k
 Installing for dependencies:
  lm_sensors-libs                         x86_64                         3.4.0-8.20160601gitf9185e5.el7                         rhui-REGION-rhel-server-releases                          42 k
 Transaction Summary
 Install  1 Package (+1 Dependent package)
 Total download size: 357 k
 Installed size: 1.2 M
 Is this ok [y/d/N]: y
 Downloading packages:
 (1/2): lm_sensors-libs-3.4.0-8.20160601gitf9185e5.el7.x86_64.rpm                                                                                                       |  42 kB  00:00:00
 (2/2): sysstat-10.1.5-18.el7.x86_64.rpm                                                                                                                                | 316 kB  00:00:00
 Total                                                                                                                                                         2.3 MB/s | 357 kB  00:00:00
 Running transaction check
 Running transaction test
 Transaction test succeeded
 Running transaction
   Installing : lm_sensors-libs-3.4.0-8.20160601gitf9185e5.el7.x86_64                                                                                                                      1/2
   Installing : sysstat-10.1.5-18.el7.x86_64                                                                                                                                               2/2
   Verifying  : lm_sensors-libs-3.4.0-8.20160601gitf9185e5.el7.x86_64                                                                                                                      1/2
   Verifying  : sysstat-10.1.5-18.el7.x86_64                                                                                                                                               2/2
 Installed:
   sysstat.x86_64 0:10.1.5-18.el7
 Dependency Installed:
   lm_sensors-libs.x86_64 0:3.4.0-8.20160601gitf9185e5.el7
 Complete!
 [root@ip-172-31-10-2 ~]# yum install dialog
 Loaded plugins: amazon-id, rhui-lb, search-disabled-repos
 Resolving Dependencies
 --> Running transaction check
 ---> Package dialog.x86_64 0:1.2-5.20130523.el7 will be installed
 --> Finished Dependency Resolution
 Dependencies Resolved
 ==============================================================================================================================================================================================
  Package                             Arch                                Version                                          Repository                                                     Size
 Installing:
  dialog                              x86_64                              1.2-5.20130523.el7                               rhui-REGION-rhel-server-releases                              209 k
 Transaction Summary
 Install  1 Package
 Total download size: 209 k
 Installed size: 505 k
 Is this ok [y/d/N]: y
 Downloading packages:
 dialog-1.2-5.20130523.el7.x86_64.rpm                                                                                                                                   | 209 kB  00:00:00
 Running transaction check
 Running transaction test
 Transaction test succeeded
 Running transaction
   Installing : dialog-1.2-5.20130523.el7.x86_64                                                                                                                                           1/1
   Verifying  : dialog-1.2-5.20130523.el7.x86_64                                                                                                                                           1/1
 Installed:
   dialog.x86_64 0:1.2-5.20130523.el7
 Complete!
 [root@ip-172-31-10-2 ~]#
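
For reference, the same four packages could have been installed in one non-interactive command:

yum install -y gdb mcelog sysstat dialog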

Now all required configuration changes for the Vertica installation are complete. Next, I’ll upload via SFTP the Vertica Community Edition (CE) installation packages that I downloaded from the Vertica Portal (login required).

sudha@DESKTOP-61047H4 MINGW64 ~
 $ sftp  -i Downloads/tspawskeyohio.pem  ec2-user@ec2-18-191-186-157.us-east-2.compute.amazonaws.com
 Connected to ec2-user@ec2-18-191-186-157.us-east-2.compute.amazonaws.com.
 sftp> put Downloads/vertica-9.2.1-0.x86_64.RHEL6.rpm /tmp/.
 Uploading Downloads/vertica-9.2.1-0.x86_64.RHEL6.rpm to /tmp/./vertica-9.2.1-0.x86_64.RHEL6.rpm
 Downloads/vertica-9.2.1-0.x86_64.RHEL6.rpm                                                                                                                  100%  244MB   1.3MB/s   03:09
 sftp> put Downloads/vertica-console-9.2.1-0.x86_64.RHEL6.rpm /tmp/.
 Uploading Downloads/vertica-console-9.2.1-0.x86_64.RHEL6.rpm to /tmp/./vertica-console-9.2.1-0.x86_64.RHEL6.rpm
 Downloads/vertica-console-9.2.1-0.x86_64.RHEL6.rpm                                                                                                          100%  239MB   1.2MB/s   03:28
 sftp> quit
 sudha@DESKTOP-61047H4 MINGW64 ~

Installing Vertica

[root@ip-172-31-10-2 ~]# ll /tmp
 total 0
 drwx------. 3 root root 17 Aug 16 22:55 systemd-private-cdb93938cf4e43ed83b2ddd0e55a11c7-chronyd.service-YEuGgr
 [root@ip-172-31-10-2 ~]# ll /tmp
 total 494744
 drwx------. 3 root     root            17 Aug 16 22:55 systemd-private-cdb93938cf4e43ed83b2ddd0e55a11c7-chronyd.service-YEuGgr
 -rw-r--r--. 1 ec2-user ec2-user 255623796 Aug 18 01:31 vertica-9.2.1-0.x86_64.RHEL6.rpm
 -rw-r--r--. 1 ec2-user ec2-user 250990248 Aug 18 01:36 vertica-console-9.2.1-0.x86_64.RHEL6.rpm
 [root@ip-172-31-10-2 ~]# rpm -Uvh /tmp/vertica-9.2.1-0.x86_64.RHEL6.rpm
 Preparing…                          ################################# [100%]
 Updating / installing…
    1:vertica-9.2.1-0                  ################################# [100%]
 Vertica Analytic Database V9.2.1-0 successfully installed on host ip-172-31-10-2.us-east-2.compute.internal
 To complete your NEW installation and configure the cluster, run:
  /opt/vertica/sbin/install_vertica
 To complete your Vertica UPGRADE, run:
  /opt/vertica/sbin/update_vertica
 
 Important
 Before upgrading Vertica, you must backup your database.  After you restart your
 database after upgrading, you cannot revert to a previous Vertica software version.
 View the latest Vertica documentation at https://www.vertica.com/documentation/vertica/
 [root@ip-172-31-10-2 ~]#

Initial verification and installation of the Vertica server package are now complete.

Now I’ll continue with building the Vertica cluster. I’ll use the host’s IP address instead of the loopback address; this will allow additional nodes to be added to the cluster later.

Install Run

[root@ip-172-31-10-2 ~]# /opt/vertica/sbin/install_vertica --hosts 172.31.10.2  --rpm /tmp/vertica-9.2.1-0.x86_64.RHEL6.rpm --data-dir /vdata  --dba-user dbadmin --dba-user-password KasaMusa --license CE --accept-eula
 Vertica Analytic Database 9.2.1-0 Installation Tool
 AWS Detected. Using AWS defaults.
     AWS Default: --dba-user-password-disabled was not specified,  disabling dba password by default while on AWS
     AWS Default: --point-to-point was not specified,  enabling point-to-point spread communication by default while on AWS
        Validating options…    
 Mapping hostnames in --hosts (-s) to addresses…
        Starting installation tasks.
     Getting system information for cluster (this may take a while)…    
 Default shell on nodes:
 172.31.10.2 /bin/bash
        Validating software versions (rpm or deb)…
     Beginning new cluster creation…    
 successfully backed up admintools.conf on 172.31.10.2
        Creating or validating DB Admin user/group…    
 Successful on hosts (1): 172.31.10.2
     Provided DB Admin account details: user = dbadmin, group = verticadba, home = /home/dbadmin
     Creating group… Adding group
     Validating group… Okay
     Creating user… Adding user, Setting credentials
     Validating user… Okay
        Validating node and cluster prerequisites…    
 Prerequisites not fully met during local (OS) configuration for
 verify-172.31.10.2.xml:
     HINT (S0305): https://www.vertica.com/docs/9.2.x/HTML/index.htm#cshid=S0305
         TZ is unset for dbadmin. Consider updating .profile or .bashrc
 System prerequisites passed.  Threshold = WARN
        Establishing DB Admin SSH connectivity…    
 Installing/Repairing SSH keys for dbadmin
        Setting up each node and modifying cluster…    
 Creating Vertica Data Directory…
 Updating agent…
 Creating node node0001 definition for host 172.31.10.2
 … Done
        Sending new cluster configuration to all nodes…    
 Starting agent…
        Completing installation…    
 Running upgrade logic
 No spread upgrade required: /opt/vertica/config/vspread.conf not found on any node
 Installation complete.
 Please evaluate your hardware using Vertica's validation tools:
     https://www.vertica.com/docs/9.2.x/HTML/index.htm#cshid=VALSCRIPT
 To create a database:
 Logout and login as dbadmin. (see note below)
 Run /opt/vertica/bin/adminTools as dbadmin
 Select Create Database from the Configuration Menu
 Note: Installation may have made configuration changes to dbadmin
 that do not take effect until the next session (logout and login). 
 To add or remove hosts, select Cluster Management from the Advanced Menu.
 [root@ip-172-31-10-2 ~]#

Now I’ll log out as root and log in as dbadmin to create a database using admintools.
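
I used the interactive Create Database menu; the transcript follows. For reference, a non-interactive sketch of the same step (flag names per admintools -t create_db; the password and paths here are illustrative):

/opt/vertica/bin/admintools -t create_db \
    -d tsptest \
    -s 172.31.10.2 \
    -c /vdata -D /vdata \
    -p 'your-db-password'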

*** Creating database: tsptest ***
         Creating database tsptest
         Starting bootstrap node v_tsptest_node0001 (172.31.10.2)
         Starting nodes:
                 v_tsptest_node0001 (172.31.10.2)
         Starting Vertica on all nodes. Please wait, databases with a large catalog may take a while to initialize.
         Node Status: v_tsptest_node0001: (DOWN)
         Node Status: v_tsptest_node0001: (DOWN)
         Node Status: v_tsptest_node0001: (DOWN)
         Node Status: v_tsptest_node0001: (DOWN)
         Node Status: v_tsptest_node0001: (UP)
 Automatically installing extension packages
 Package: AWS
         Success: package AWS successfully installed
 Package: MachineLearning
         Success: package MachineLearning successfully installed
 Package: ParquetExport
         Success: package ParquetExport successfully installed
 Package: VFunctions
         Success: package VFunctions successfully installed
 Package: approximate
         Success: package approximate successfully installed
 Package: flextable
         Success: package flextable successfully installed
 Package: kafka
         Success: package kafka successfully installed
 Package: logsearch
         Success: package logsearch successfully installed
 Package: place
         Success: package place successfully installed
 Package: txtindex
         Success: package txtindex successfully installed
 Package: voltagesecure
         Success: package voltagesecure successfully installed

Now the database tsptest is created! Let’s use vsql to connect to the DB.

[dbadmin@ip-172-31-10-2 ~]$ vsql
 Password:
 Welcome to vsql, the Vertica Analytic Database interactive terminal.
 Type:  \h or \? for help with vsql commands
        \g or terminate with semicolon to execute query
        \q to quit
 dbadmin=> select version();
                version
 ------------------------------------
  Vertica Analytic Database v9.2.1-0
 (1 row)
 dbadmin=> \q
 [dbadmin@ip-172-31-10-2 ~]$

All SUCCESS! Now I’ll install Management Console on the same node.

Install Management Console

[root@ip-172-31-10-2 ~]# rpm -Uvh /tmp/vertica-console-9.2.1-0.x86_64.RHEL6.rpm
 Preparing…                          ################################# [100%]
 [preinstall] Starting installation….
 Updating / installing…
    1:vertica-console-9.2.1-0          ################################# [100%]
 [postinstall] copy vertica-consoled
 [postinstall] configure the daemon service
 Cleaning up temp folder…
 Starting the vertica management console….
 Vertica Console: 2019-08-18 01:54:18.447:INFO:cv.Startup:Attempting to load properties from /opt/vconsole/config/console.properties
 2019-08-18 01:54:18.448:INFO:cv.Startup:Starting Server…
 2019-08-18 01:54:18.467:WARN:oejs.AbstractConnector:Acceptors should be <=2*availableProcessors: SslSelectChannelConnector@0.0.0.0:5450 STOPPED
 2019-08-18 01:54:18.503:INFO:cv.Startup:starting monitor thread
 2019-08-18 01:54:18.508:INFO:oejs.Server:jetty-7.x.y-SNAPSHOT
 2019-08-18 01:54:18.535:INFO:oejw.WebInfConfiguration:Extract jar:file:/opt/vconsole/lib/webui.war!/ to /opt/vconsole/temp/webapp
 2019-08-18 01:54:23.198:INFO:/webui:Set web app root system property: 'webapp.root' = [/opt/vconsole/temp/webapp]
 2019-08-18 01:54:23.216:INFO:/webui:Initializing log4j from [classpath:log4j.xml]
 2019-08-18 01:54:23.228:INFO:/webui:Initializing Spring root WebApplicationContext
 ---- Upgrading /opt/vconsole/config/console.properties ----
 
 Please open the Vertica Management Console at https://ip-172-31-10-2.us-east-2.compute.internal:5450/webui
 
 2019-08-18 01:54:43.845:INFO:oejsh.ContextHandler:started o.e.j.w.WebAppContext{/webui,file:/opt/vconsole/temp/webapp/},file:/opt/vconsole/lib/webui.war
 2019-08-18 01:54:43.889:INFO:/webui:Initializing Spring FrameworkServlet 'appServlet'
 2019-08-18 01:54:46.039:INFO:oejdp.ScanningAppProvider:Deployment monitor /opt/vconsole/webapps at interval 2
 2019-08-18 01:54:46.068:INFO:oejhs.SslContextFactory:Enabled Protocols [SSLv2Hello, TLSv1, TLSv1.1, TLSv1.2] of [SSLv2Hello, SSLv3, TLSv1, TLSv1.1, TLSv1.2]
 2019-08-18 01:54:46.091:INFO:oejs.AbstractConnector:Started SslSelectChannelConnector@0.0.0.0:5450 STARTING
 start OK
 [postinstall] Changing permissions of /opt/vconsole        [  OK  ]
 [root@ip-172-31-10-2 ~]#

The install succeeded, but the URL was not accessible. Time to investigate!

It turned out my AWS security group allowed only SSH, not HTTPS/HTTP traffic. Once I opened port 5450, the MC was accessible at the following URL using the instance’s public DNS name:

https://ec2-18-191-186-157.us-east-2.compute.amazonaws.com:5450/webui
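
For reference, a sketch of making the same security-group change with the AWS CLI (sg-xxxxxxxx and the CIDR are placeholders; restrict the source to your own IP rather than 0.0.0.0/0):

aws ec2 authorize-security-group-ingress \
    --group-id sg-xxxxxxxx \
    --protocol tcp --port 5450 \
    --cidr 203.0.113.25/32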

All good. Now I’ll configure the uidbadmin user and connect to MC.

This worked well. In my next post I’ll create an AMI and launch a cluster using the AMI.
