Installing StarlingX with a Centralized Node
Based on the official StarlingX R6.0 installation documentation.
Preparation
Create Bootable USB
- Insert the bootable USB into a bootable USB port on the host you are configuring as controller-0.
- Power on the host.
- Attach to a console, ensure the host boots from the USB, and wait for the StarlingX Installer Menus.
- Make the following menu selections in the installer:
- First menu: Select 'All-in-one Controller Configuration'
- Second menu: Select 'Graphical Console' or 'Textual Console' depending on your terminal access to the console port
- Wait for non-interactive install of software to complete and server to reboot. This can take 5-10 minutes, depending on the performance of the server.
Install Central Cloud
- After the install completes, log in with user/password sysadmin/sysadmin, then change the sysadmin password (e.g., Passw0rd123#).
- Set the OAM IP address by editing /etc/sysconfig/network-scripts/ifcfg-ens1:
BOOTPROTO=none
IPADDR=10.0.0.31
PREFIX=24
GATEWAY=10.0.0.1
DEFROUTE=yes
ONBOOT=yes
- Set the management IP address by editing the ifcfg file of the management interface (a different NIC than OAM, for example /etc/sysconfig/network-scripts/ifcfg-ens2):
BOOTPROTO=none
IPADDR=172.16.3.31
PREFIX=24
ONBOOT=yes
- Reboot the host: # reboot
- Test external connectivity: ping 8.8.8.8
- Configure DNS in /etc/resolv.conf:
nameserver 8.8.8.8
- Test DNS resolution: ping google.com
- Edit /etc/ansible/hosts:
all:
  hosts:
    localhost:
      ansible_connection: local
  vars:
    ansible_ssh_user: sysadmin
    ansible_ssh_pass: Passw0rd123#
    ansible_become_pass: Passw0rd123#
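- Optionally sanity-check the inventory before bootstrapping. A minimal check, assuming the ansible CLI bundled with StarlingX is on the PATH:
# Should respond with "pong" using the local connection defined above
ansible localhost -m ping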
- Set OS timezone
# timedatectl set-timezone Asia/Jakarta
- Edit file: /usr/share/ansible/stx-ansible/playbooks/host_vars/bootstrap/default.yml
distributed_cloud_role: systemcontroller
timezone: Asia/Jakarta
# At least one DNS server is required and maximum 3 servers are allowed
dns_servers:
  - 8.8.8.8
  - 8.8.4.4
external_oam_subnet: 10.0.0.0/24
external_oam_gateway_address: 10.0.0.1
external_oam_floating_address: 10.0.0.31
management_subnet: 172.16.3.0/24
management_start_address: 172.16.3.200
management_end_address: 172.16.3.220
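- Run the bootstrap playbook for the Central Cloud (this step is implied at this point; it is the same command used in the next section):
ansible-playbook /usr/share/ansible/stx-ansible/playbooks/bootstrap.yml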
Bootstrap system on controller-0
- After the install completes, log in with user/password sysadmin/sysadmin, then change the sysadmin password (e.g., Passw0rd123#).
- Set the OAM IP address by editing /etc/sysconfig/network-scripts/ifcfg-ens1:
BOOTPROTO=none
IPADDR=10.0.0.31
PREFIX=24
GATEWAY=10.0.0.1
DEFROUTE=yes
ONBOOT=yes
- Reboot the host: # reboot
- Test external connectivity: ping 8.8.8.8
- Configure DNS in /etc/resolv.conf:
nameserver 8.8.8.8
- Test DNS resolution: ping google.com
- Edit /etc/ansible/hosts:
all:
  hosts:
    localhost:
      ansible_connection: local
  vars:
    ansible_ssh_user: sysadmin
    ansible_ssh_pass: Passw0rd123#
    ansible_become_pass: Passw0rd123#
- Set OS timezone
# timedatectl set-timezone Asia/Jakarta
- Edit file: /usr/share/ansible/stx-ansible/playbooks/host_vars/bootstrap/default.yml
distributed_cloud_role: none
timezone: Asia/Jakarta
# At least one DNS server is required and maximum 3 servers are allowed
dns_servers:
  - 8.8.8.8
  - 8.8.4.4
external_oam_subnet: 10.0.0.0/24
external_oam_gateway_address: 10.0.0.1
external_oam_floating_address: 10.0.0.31
- Sync time:
sudo ntpdate 0.pool.ntp.org 1.pool.ntp.org
- Execute the bootstrap playbook:
ansible-playbook /usr/share/ansible/stx-ansible/playbooks/bootstrap.yml
Configure controller-0
- Acquire admin credentials:
source /etc/platform/openrc
- Configure the OAM interface of controller-0 and specify the attached network as "oam". The following example configures the OAM interface on a physical untagged ethernet port; use the OAM port name applicable to your deployment environment, for example eno1:
OAM_IF=eno1
system host-if-modify controller-0 $OAM_IF -c platform
system interface-network-assign controller-0 $OAM_IF oam
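# Optionally verify the interface class and its network assignment
# (interface-network-list is the list counterpart of the assign command above)
system host-if-list controller-0
system interface-network-list controller-0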
- Sync time:
system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org
OpenStack-specific host configuration
- For OpenStack only: Assign OpenStack host labels to controller-0 in support of installing the stx-openstack manifest and helm-charts later.
system host-label-assign controller-0 openstack-control-plane=enabled
system host-label-assign controller-0 openstack-compute-node=enabled
system host-label-assign controller-0 openvswitch=enabled
- For OpenStack only: Due to the additional OpenStack services running on the AIO controller platform cores, a minimum of 4 platform cores are required, 6 platform cores are recommended.
Increase the number of platform cores with the following commands:
# Assign 6 cores on processor/numa-node 0 on controller-0 to platform
system host-cpu-modify -f platform -p0 6 controller-0
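# Optionally verify the change; host-cpu-list shows each core's assigned function
system host-cpu-list controller-0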
- Due to the additional OpenStack services' containers running on the controller host, the size of the Docker filesystem needs to be increased from the default size of 30G (this guide grows it to 90G after the host is unlocked).
# check existing size of docker fs
system host-fs-list controller-0
# check available space (Avail Size (GiB)) in cgts-vg LVG where docker fs is located
system host-lvg-list controller-0
# If the existing docker fs size + cgts-vg available space is less than
# the target docker size plus 20G of headroom (110G for the 90G target
# used later in this guide), you will need to add a new disk partition
# to cgts-vg. There must be at least 20GB of available space after the
# docker filesystem is increased.
# Assuming you have unused space on the ROOT DISK, add a partition to it
# (if not, use another unused disk).
# Get device path of ROOT DISK
system host-show controller-0 | fgrep rootfs
# Get UUID of ROOT DISK by listing disks
system host-disk-list controller-0
# Create new PARTITION on ROOT DISK, and take note of the new partition's 'uuid' in the response
# Use a partition size large enough to grow the docker fs to the target size
# (a 60G partition allows growing docker fs from the default 30G to 90G)
PARTITION_SIZE=60
system host-disk-partition-add -t lvm_phys_vol controller-0 <root-disk-uuid> ${PARTITION_SIZE}
+-------------+-------------------------------------------------------+
| Property    | Value                                                 |
+-------------+-------------------------------------------------------+
| device_path | /dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:1:0-part1 |
| device_node | /dev/sdb1                                             |
| type_guid   | ba5eba11-0000-1111-2222-000000000001                  |
| type_name   | None                                                  |
| start_mib   | None                                                  |
| end_mib     | None                                                  |
| size_mib    | 30720                                                 |
| uuid        | 6c9404b6-b3a2-4525-b035-83f8b42aa7c6                  |
| ihost_uuid  | 7ed27a92-611d-4b98-b574-4f97e5955e5d                  |
| idisk_uuid  | 4b9f23c3-2b65-4877-ab72-b5291726d997                  |
| ipv_uuid    | None                                                  |
| status      | Creating                                              |
| created_at  | 2022-06-29T17:39:37.513535+00:00                      |
| updated_at  | None                                                  |
+-------------+-------------------------------------------------------+
# Add new partition to 'cgts-vg' local volume group
system host-pv-add controller-0 cgts-vg <NEW_PARTITION_UUID>
sleep 2 # wait for partition to be added
# Verify the status
system host-pv-list controller-0
- To deploy the default containerized OVS:
system modify --vswitch_type none
- For OpenStack only: Set up disk partition for nova-local volume group, which is needed for stx-openstack nova ephemeral disks.
export NODE=controller-0
# Create 'nova-local' local volume group
system host-lvg-add ${NODE} nova-local
# Get UUID of DISK to create PARTITION to be added to 'nova-local' local volume group
# CEPH OSD Disks can NOT be used
# For best performance, do NOT use system/root disk, use a separate physical disk.
# List host's disks and take note of UUID of disk to be used
system host-disk-list ${NODE}
# ( if using ROOT DISK, select disk with device_path of
#   'system host-show ${NODE} | fgrep rootfs' )
# Create new PARTITION on selected disk, and take note of new partition's 'uuid' in response
# The size of the PARTITION needs to be large enough to hold the aggregate size of
# all nova ephemeral disks of all VMs that you want to be able to host on this host,
# but is limited by the size and space available on the physical disk you chose above.
# The following example uses a small PARTITION size such that you can fit it on the
# root disk, if that is what you chose above.
# Additional PARTITION(s) from additional disks can be added later if required.
PARTITION_SIZE=30
system host-disk-partition-add -t lvm_phys_vol ${NODE} <disk-uuid> ${PARTITION_SIZE}
# Add new partition to 'nova-local' local volume group
system host-pv-add ${NODE} nova-local <NEW_PARTITION_UUID>
sleep 2
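# As a quick check, re-use the listing commands from the docker resize step to
# confirm 'nova-local' exists and the new partition is attached as a PV
system host-lvg-list ${NODE}
system host-pv-list ${NODE}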
- For OpenStack only: Configure data interfaces for controller-0. Data class interfaces are vSwitch interfaces used by vSwitch to provide VM virtio vNIC connectivity to OpenStack Neutron Tenant Networks on the underlying assigned Data Network.
Important!
A compute-labeled AIO-controller host MUST have at least one Data class interface. Configure the data interfaces for controller-0.
export NODE=controller-0
# List inventoried host's ports and identify ports to be used as 'data' interfaces,
# based on displayed linux port name, pci address and device type.
system host-port-list ${NODE}
# List host's auto-configured 'ethernet' interfaces,
# find the interfaces corresponding to the ports identified in previous step, and
# take note of their UUID
system host-if-list -a ${NODE}
# Modify configuration for these interfaces
# Configuring them as 'data' class interfaces, MTU of 1500 and named data#
system host-if-modify -m 1500 -n data0 -c data ${NODE} <data0-if-uuid>
system host-if-modify -m 1500 -n data1 -c data ${NODE} <data1-if-uuid>
# Create Data Networks that vswitch 'data' interfaces will be connected to
DATANET0='datanet0'
DATANET1='datanet1'
system datanetwork-add ${DATANET0} vlan
system datanetwork-add ${DATANET1} vlan
# Assign Data Networks to Data Interfaces
system interface-datanetwork-assign ${NODE} <data0-if-uuid> ${DATANET0}
system interface-datanetwork-assign ${NODE} <data1-if-uuid> ${DATANET1}
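# Optionally verify: list the data networks and their interface assignments
# (interface-datanetwork-list is the list counterpart of the assign command above)
system datanetwork-list
system interface-datanetwork-list ${NODE}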
- Add host-based Ceph backend:
system storage-backend-add ceph --confirmed
- Add an OSD on controller-0 for host-based Ceph:
# List host's disks and identify disks you want to use for CEPH OSDs, taking note of their UUID
# By default, /dev/sda is being used as system disk and can not be used for OSD.
system host-disk-list controller-0
# Add disk as an OSD storage
system host-stor-add controller-0 osd <disk-uuid>
# List OSD storage devices
system host-stor-list controller-0
- Unlock controller-0 to enable system services:
system host-unlock controller-0
Wait for the host to reboot; if it has not rebooted after 10 minutes, reboot it manually: # reboot
- Check the status:
system host-show controller-0
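# Alternatively, watch the overall host state until controller-0 reports
# unlocked/enabled/available
watch -n 5 system host-list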
- Apply the docker filesystem resize using the partition added earlier:
# Increase docker filesystem to 90G
system host-fs-modify controller-0 docker=90
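# Verify the new size with the same listing command used earlier;
# docker fs should now report 90G
system host-fs-list controller-0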
Prepare OpenStack Access
- Get the latest StarlingX OpenStack application (stx-openstack) manifest and helm charts.
wget http://mirror.starlingx.cengn.ca/mirror/starlingx/release/6.0.0/centos/flock/outputs/helm-charts/stx-openstack-1.0-140-centos-stable-versioned.tgz
- Upload the application helm charts:
system application-upload stx-openstack-1.0-140-centos-stable-versioned.tgz
Check the status:
system application-show stx-openstack
- After the upload completes, apply the stx-openstack application to bring StarlingX OpenStack into service. If your environment is preconfigured with a proxy server, make sure the HTTPS proxy is set before applying stx-openstack.
system application-apply stx-openstack
Check the status:
system application-show stx-openstack
(The apply step sometimes fails and must be retried.)
Or monitor it:
watch -n 5 system application-list
- Access the OpenStack CLI: set the CLI context and admin credentials by creating/editing the file /home/sysadmin/openrc.os:
unset OS_SERVICE_TOKEN
export OS_ENDPOINT_TYPE=internalURL
export CINDER_ENDPOINT_TYPE=internalURL
export OS_USERNAME=admin
export OS_PASSWORD=`TERM=linux /opt/platform/.keyring/21.12/.CREDENTIAL 2>/dev/null`
export OS_AUTH_TYPE=password
export OS_AUTH_URL=http://keystone.openstack.svc.cluster.local/v3
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_IDENTITY_API_VERSION=3
export OS_REGION_NAME=RegionOne
export OS_INTERFACE=internal
if [ ! -z "${OS_PASSWORD}" ]; then
export PS1='[\u@\h \W(keystone_$OS_USERNAME)]\$ '
else
echo 'Openstack Admin credentials can only be loaded from the active controller.'
export PS1='\h:\w\$ '
fi
- Source it:
source ./openrc.os
- Test the OpenStack CLI:
openstack endpoint list
openstack flavor list
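# A few more read-only checks (standard OpenStack CLI calls; the output depends
# on the deployed helm charts): compute services and network agents should be up
openstack compute service list
openstack network agent list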
- Access the OpenStack GUI.
Log in as admin with the admin password set in the configuration (see the credentials block below). For stx-openstack, the OpenStack Horizon GUI is typically exposed on the OAM floating IP at port 31000 (http://<oam-floating-ip>:31000).
# ADMIN CREDENTIALS
# =================
#
# WARNING: It is strongly recommended to store these settings in an Ansible vault
# file named "secrets.yml" under the override files directory. Configuration parameters
# stored in vault must start with the vault_ prefix (i.e. vault_admin_username,
# vault_admin_password).
#
admin_username: admin
admin_password: St8rlingX*