Upgrading CloudBolt

CloudBolt upgraders are distributed as .tgz archives. They can be downloaded either from CloudBolt’s UI (Admin > Version & Upgrade Info) or from the pertinent post in the support portal at https://support.cloudbolt.io/hc/en-us. The upgrader is designed to preserve your customizations and data, but be sure to review the release notes for any changes you may need to be aware of.

  1. Back up your CloudBolt instance by duplicating the VM or taking a snapshot.
  2. Navigate in the CloudBolt UI to Admin > Version & Upgrade Info.
  3. Decide which available release you would like to upgrade to and click the download link for that release.
  4. Read the release notes.
  5. SSH as root to the CloudBolt server and cd into the directory where the upgrader was downloaded (typically /var/tmp).
  6. Untar the file by running tar xzf name_of_cloudbolt_file.tgz (for example: tar xzf cloudbolt_upgrader_3.0-src-f893cd2.tgz).
  7. Go into the directory by running cd cloudbolt_upgrader*.
  8. If the database that this CloudBolt instance connects to is on a different server, edit the install_config file to specify the mysql_host.
  9. Run ./upgrade_cloudbolt.sh. CloudBolt will upgrade to the new version.
  10. To verify the upgrade, open the CloudBolt web interface, click the CloudBolt logo at the bottom of the page, and confirm that the version number shown in the dialog is the new version number.

Note

CloudBolt advises running yum update after an upgrade.

Upgrading to CloudBolt 9.X

9.X UPGRADE NOTES

  • Upgrading to 9.X requires installing a new CloudBolt instance.
  • We only support upgrading/migrating from CloudBolt 8.X to CloudBolt 9.X. If you wish to upgrade from a pre-CloudBolt 8.0 version to CloudBolt 9.X, then you must first upgrade to CloudBolt 8.X before following the steps to move to CloudBolt 9.X.

CloudBolt 8.8 and earlier run on CentOS 6.6, but CloudBolt 9.0 runs on CentOS 7.6. Unfortunately, it is not possible to upgrade from CentOS 6 to CentOS 7 in place. We have prepared command line tools to make migrating your CloudBolt instance from CentOS 6 to CentOS 7 as painless as possible. However, you should still read this documentation before attempting an export or import of your CloudBolt instance.

For Systems with Multiple Logical Volumes

  • If your CloudBolt 9.X instance is running on a system on which any of the /var, /etc, or /tmp directories are stored on separate logical volumes, be sure to use the most recent version of the cb_import utility. A bug in the 8.8 version of the utility causes imports to fail on systems that keep these directories on separate logical volumes.

Outline

  1. Install CloudBolt 9.X
  2. Export CloudBolt 8.X to a .tar.gz compressed archive using the cb_export command line utility
  3. Copy the .tar.gz compressed archive to your new CloudBolt 9.X instance running on CentOS 7.6
  4. Import CloudBolt on 9.X using the cb_import command line utility
  5. Run Django Database Migrations
  6. Create New Database Objects

Install CloudBolt 9.X

You install CloudBolt 9.X the same way you install any CloudBolt version; it will simply run on CentOS 7.6 instead of CentOS 6. See Installing CloudBolt.

Export CloudBolt 8.X

We have prepared a command line tool, cb_export.py, to take care of archiving and compressing all of the standard CloudBolt files and directories into a single .tar.gz compressed archive. The utility is modular enough to allow users to easily add additional files and directories to include in the export or to mark specific files or directories to be excluded from the export.

On our largest internal testing servers, a full export completed in approximately 60 minutes.

cb_export Usage

The export utility will not run if either the Apache web server (httpd) or the job engine is active. This is to prevent an active CloudBolt from making changes to files and directories as they are being compressed, which could break the export.

  • service httpd stop will stop Apache

  • There are 3 potential ways to stop the job engine, depending on your version of CloudBolt

    • supervisor-based job engine: supervisorctl stop jobengine:*

    • celery-based job engine: supervisorctl stop celeryd:*

    • cron-based job engine: service crond stop

      Note: service crond stop only stops cron from automatically restarting the job engine and does not kill running job engine processes, but running job engine processes will die either within 60 seconds or once they complete all of their jobs, whichever is later.
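The pre-export shutdown therefore amounts to stopping Apache plus one of the three job engine commands. The sketch below only selects and prints the right pair for a given flavor; set JOB_ENGINE to match your CloudBolt version, then run the printed commands on the server:

```shell
JOB_ENGINE=supervisor   # one of: supervisor, celery, cron

case "$JOB_ENGINE" in
  supervisor) stop_cmd="supervisorctl stop 'jobengine:*'" ;;
  celery)     stop_cmd="supervisorctl stop 'celeryd:*'" ;;
  cron)       stop_cmd="service crond stop" ;;
esac

# Commands to run on the CloudBolt server before exporting:
echo "service httpd stop"
echo "$stop_cmd"
```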

cb_export can be called as a Python script from anywhere on the filesystem of your CloudBolt instance (python /tmp/cb_export.py [options]) or it can be called as a Django management command (python manage.py cb_export [options]). Either way, the utility will be run as a Django management command behind the scenes because it needs access to various settings defined in Django.

You can run cb_export with the --help or -h option to print the help text, but the same information is also provided here in a different format.

The export utility will include the following directories:

  • VARDIR/opt/cloudbolt, where VARDIR is defined in Django settings
  • STATIC_ROOT, MEDIA_ROOT, and STATIC_TESTS_DIR, where each is defined in Django settings

The export utility will enable maintenance mode when it starts and disable it when the export completes, much like an upgrade.

Options

  • Default Files

    You can view what will be included in the default export with the -l or --list-defaults option. This will also list the sizes of those files and directories.

  • Dry Run

    You can run this utility without making any changes to examine its output messages by running it with the -d or --dry-run option.

  • Job Logs

    By default, the export will include the job logs directory defined by the JOBTHREAD_LOGPATH Django setting, which defaults to /var/log/cloudbolt/jobs/. This can be turned off with the -nj or --no-job-logs option. If your job log directory is too large, you might wish to handle transferring your job logs outside of this export process. The job logs need to be moved to the new CloudBolt instance so that links to view or download a job log from the job details page will still work for jobs that were run on the old CloudBolt instance.

  • MySQL

    By default, the export will include a database dump of the MySQL database this CloudBolt instance uses. This utility uses the mysqldump command for the database dump. If the database dump is included in the export, the exporter will use the ‘default’ database defined in the DATABASES Django setting for the database name, username, and password. If the password is not defined in those settings, the user will be prompted for it while the utility runs.

    The utility also includes the default MySQL configuration files, if they exist: /etc/my.cnf and /etc/mysql/my.cnf.

    All the MySQL functionality (database dump and config files) can be excluded with the -nm or --no-mysql option.

  • Apache

    By default, the utility includes the following configuration files for Apache: /etc/httpd/conf/httpd.conf, /etc/httpd/conf.d/ssl.conf, and /etc/httpd/conf.d/wsgi.conf.

    These files can be excluded with the -na or --no-apache option.

  • CloudBolt Secrets

    By default, the CloudBolt secrets directory (/var/opt/cloudbolt/secrets) is excluded from the export, but it can be included with the -is or --include-secrets option. The directory contains secret keys used to encrypt and decrypt sensitive fields stored on the CloudBolt database. Your new CloudBolt instance must have these keys to properly access sensitive fields included in the database dump. If you do not include the secrets directory in this export, then you must transfer it to your new CloudBolt in a different manner.

  • SSL Certificates

    Users can include the -ic or --include-certs option to add certificates to the export. Certificates will be included from the following directories: /etc/pki/tls/certs and /etc/pki/tls/private.

    Including certificates in an unencrypted .tar.gz file has security implications because secret information will be saved in a format that anyone can read, so be sure you want to include this information before using this option.

  • Extra Files and Directories

    Users can specify their own files and directories to include as arguments after the options. The user-specified directories and files will be moved to the same paths on the destination machine by the cb_import utility.

    For example, if you wanted to include certain pet-related data from a CloudBolt 8.X machine in the export, you could run python cb_export.py /tmp/dog_photos/ /var/opt/cat_stories/*.txt.

  • Exclude Files and Directories

    Users can specify files and directories to exclude from the export (including default CloudBolt directories or their children) using the -ex or --exclude option. This option can be included as many times as needed. Excluded directories that are within included directories will be temporarily moved to /var/tmp/cb_export_tmp while their parent directory is archived and compressed. Paths to exclude always override default paths and paths to include.

    For example, if you wanted to include all CloudBolt logs in an export except for the job engine log, you could run python manage.py cb_export /var/log/cloudbolt/ -ex /var/log/cloudbolt/jobengine.log.

  • Output Directory

    By default, the final compressed archive will be saved to the /var/tmp/ directory. This can be overridden with the -o or --output option. The compressed archive will be named cbdumpv1_YYYYMMDD_HHMMSS.tgz, where the YYYYMMDD_HHMMSS is a timestamp.
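The default location and naming convention can be reproduced with standard shell tools; this sketch only illustrates where the archive lands and how it is named:

```shell
# The exporter writes to /var/tmp/ by default, with a timestamped name:
stamp=$(date +%Y%m%d_%H%M%S)
archive="/var/tmp/cbdumpv1_${stamp}.tgz"
echo "$archive"
```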

Import CloudBolt on 9.X

We have prepared a command line tool, cb_import.py, to take care of extracting and moving the various files and directories contained within the compressed archive created by cb_export.py.

cb_import Usage

You must verify values for the following Django settings before running this utility: DATABASES, VARDIR, STATIC_ROOT, MEDIA_ROOT, and STATIC_TESTS_DIR. The default values can be found in /opt/cloudbolt/settings.py and any overrides can be found in /var/opt/cloudbolt/proserv/customer_settings.py. This utility will use these settings to determine which database to connect to and where to extract certain directories.

cb_import can be called as a Python script from anywhere on the filesystem of your CloudBolt instance (python /tmp/cb_import.py cbdump...tar.gz) or it can be called as a Django management command (python manage.py cb_import cbdump...tar.gz). Either way, the utility will be run as a Django management command behind the scenes because it needs access to various settings defined in Django.

The import utility will extract the compressed archive specified in the command invocation to the system’s temporary directory as determined by the Python Standard Library function tempfile.gettempdir. The system’s temporary directory can be altered from the default of /tmp by setting a new directory in the TMPDIR environment variable. As the import runs it will copy the files and directories from the extracted import to their destinations. The extracted import will be deleted when the importer exits, successfully or otherwise.

Files in the extracted archive are deleted as they are processed to conserve disk space.
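Because the extraction directory comes from tempfile.gettempdir, the standard TMPDIR override applies. A minimal sketch, assuming the override path and archive name shown here (both illustrative):

```shell
# Without TMPDIR set, the system temp dir defaults to /tmp:
unset TMPDIR
echo "default temp dir: ${TMPDIR:-/tmp}"

# Point the importer's extraction at a larger volume (path is illustrative):
export TMPDIR=/var/tmp
echo "override temp dir: ${TMPDIR:-/tmp}"
# e.g.: TMPDIR=/var/tmp python /tmp/cb_import.py cbdumpv1_YYYYMMDD_HHMMSS.tgz
```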

Space requirements are estimated cautiously to prevent a failure mid-import that leaves CloudBolt in a broken state. If the utility determines that your system lacks enough disk space to run the import safely, you can fix it with one or a combination of the following options:

  • Add more disk space.
  • Free unused space on the CentOS 7.6 machine. The /tmp/ and /var/tmp directories might be good options for cleaning.
  • Decrease the size of your export. Try excluding the job logs from the export with the -nj option or excluding the MySQL configuration files and dump with the -nm option. All of this can be copied over after the import command completes without issue.

All files and directories that are written by this utility will maintain owner, group, permission, and timestamp information. Existing files that match those in the export will be overwritten. Files within existing directories that the import utility is writing to will not be deleted, only updated or created.

You can run cb_import with the --help or -h option to print the help text, but the same information is also provided here in a different format.

Options

  • Dry Run

    You can run this utility without making any permanent changes to examine its output messages by running it with the -d or --dry-run option. This will extract the compressed archive to a temporary directory to examine its contents, but it will be deleted once the utility finishes running (or if it encounters an exception).

  • Backups

    By default, the utility will create backups of existing files and directories before potentially overwriting them. Files will be backed up in place by moving the existing file to <FILE_NAME>.bak. Directories will be backed up in place as compressed archives with the name <DIR_NAME>.bak.tar.gz. Backups can be turned off with the -nb or --no-backups option.
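The in-place backup naming convention can be demonstrated with ordinary shell commands; all file and directory names below are stand-ins:

```shell
set -e
cd "$(mktemp -d)"
echo 'example' > wsgi.conf
mkdir jobs && touch jobs/job_1.log

# File backup: the existing file is moved to <FILE_NAME>.bak
mv wsgi.conf wsgi.conf.bak

# Directory backup: the existing directory is archived as <DIR_NAME>.bak.tar.gz
tar czf jobs.bak.tar.gz jobs

ls -1
```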

  • MySQL Load

    If the extracted directory contains a MySQL database dump (the file is cb_mysql_dump.sql), then that will be used to load the database, overwriting any existing database that matches the default defined in the DATABASES Django setting. The database load can be skipped with the -nd or --no-database-load option. If the database load fails for any reason, cb_mysql_dump.sql will be copied to the /tmp/ directory so that the database load can be run manually with mysql < /tmp/cb_mysql_dump.sql.

Run Django Database Migrations

If your export included a database dump and load, you will need to run database migrations. This is because the database on your new CloudBolt 9.X instance will be in the exact same state as the database on your CloudBolt 8.X instance, but the CloudBolt 9.X instance will have code and schema changes that the CloudBolt 8.X instance lacks.

Fortunately, running Django database migrations is straightforward.

  1. Log in to your CloudBolt 9.X machine.
  2. Run the Django migration management command python /opt/cloudbolt/manage.py migrate. /opt/cloudbolt/ is the default directory for CloudBolt, but your root CloudBolt directory could be somewhere else.
  3. Wait for the migrations to complete, then restart httpd with systemctl restart httpd.

Create New Database Objects

To pull in new functionality that is dependent on out-of-the-box database objects (such as new Orchestration Actions and Recurring Jobs), run the create_objects Python script, which is run behind the scenes as part of every CloudBolt upgrade.

  1. Log in to your CloudBolt 9.X machine.
  2. Run the create_objects.py script. By default, this script can be run with python /opt/cloudbolt/initialize/create_objects.py, but it might be in a different directory if you have overridden the default CloudBolt directory structure.

This is necessary because the import process completely overwrites the database of your new CloudBolt 9.X instance with data from your old 8.X instance, which wipes out any new functionality that depends on database objects. The create_objects script will not overwrite changes that you have made to existing out-of-the-box Actions or Plugins.