
Install and configure the storage nodes for openSUSE and SUSE Linux Enterprise

This section describes how to install and configure storage nodes that operate the account, container, and object services. For simplicity, this configuration references two storage nodes, each containing two empty local block storage devices. The instructions use /dev/sdb and /dev/sdc, but you can substitute different values for your particular nodes.

Although Object Storage supports any file system with extended attributes (xattr), testing and benchmarking indicate the best performance and reliability on XFS. For more information on horizontally scaling your environment, see the Deployment Guide.
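
If you want to confirm that a mounted file system actually supports extended attributes, a minimal check is to set and read back a user attribute with the setfattr and getfattr utilities (provided by the attr package, which may need to be installed separately). This is only an illustrative sanity check, not part of the official procedure, and it assumes a device is already mounted under /srv/node as described in the Prerequisites below:

    # touch /srv/node/sdb/xattr-test
    # setfattr -n user.swift.test -v "1" /srv/node/sdb/xattr-test
    # getfattr -n user.swift.test /srv/node/sdb/xattr-test
    # rm /srv/node/sdb/xattr-test

If the getfattr command prints the attribute back, xattr support is working.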

This section applies to openSUSE Leap 42.2 and SUSE Linux Enterprise Server 12 SP2.

Prerequisites

Before you install and configure the Object Storage service on the storage nodes, you must prepare the storage devices.

Note

Perform these steps on each storage node.

  1. Install the supporting utility packages:

    # zypper install xfsprogs rsync
  2. Format the /dev/sdb and /dev/sdc devices as XFS:

    # mkfs.xfs /dev/sdb
    # mkfs.xfs /dev/sdc
  3. Create the mount point directory structure:

    # mkdir -p /srv/node/sdb
    # mkdir -p /srv/node/sdc
  4. Find the UUIDs of the newly created file systems:

    # blkid
  5. Edit the /etc/fstab file and add the following entries to it (a worked example with made-up UUIDs follows this list):

    UUID="<UUID-from-output-above>" /srv/node/sdb xfs noatime,nodiratime,logbufs=8 0 2
    UUID="<UUID-from-output-above>" /srv/node/sdc xfs noatime,nodiratime,logbufs=8 0 2
  6. Mount the devices:

    # mount /srv/node/sdb
    # mount /srv/node/sdc
  7. Create or edit the /etc/rsyncd.conf file to contain the following:

    uid = swift
    gid = swift
    log file = /var/log/rsyncd.log
    pid file = /var/run/rsyncd.pid
    address = MANAGEMENT_INTERFACE_IP_ADDRESS
    
    [account]
    max connections = 2
    path = /srv/node/
    read only = False
    lock file = /var/lock/account.lock
    
    [container]
    max connections = 2
    path = /srv/node/
    read only = False
    lock file = /var/lock/container.lock
    
    [object]
    max connections = 2
    path = /srv/node/
    read only = False
    lock file = /var/lock/object.lock

    Replace MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node.

    Note

    The rsync service requires no authentication, so consider running it on a private network in production environments.

  8. Start the rsyncd service and configure it to start when the system boots (a verification sketch follows this list):

    # systemctl enable rsyncd.service
    # systemctl start rsyncd.service
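
To illustrate how steps 4 through 6 fit together, the following sketch uses made-up UUIDs; your blkid output will show different values, and those are the ones that belong in /etc/fstab:

    # blkid
    /dev/sdb: UUID="7f3f9b21-9a2e-4f6d-8c3a-1b2c3d4e5f60" TYPE="xfs"
    /dev/sdc: UUID="0a1b2c3d-4e5f-6071-8293-a4b5c6d7e8f9" TYPE="xfs"

With these example values, the /etc/fstab entries would read:

    UUID="7f3f9b21-9a2e-4f6d-8c3a-1b2c3d4e5f60" /srv/node/sdb xfs noatime,nodiratime,logbufs=8 0 2
    UUID="0a1b2c3d-4e5f-6071-8293-a4b5c6d7e8f9" /srv/node/sdc xfs noatime,nodiratime,logbufs=8 0 2

After mounting, you can confirm that both devices are in place:

    # df -h /srv/node/sdb /srv/node/sdc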
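
Similarly, once rsyncd is running you can check that the daemon is active and exporting the account, container, and object modules. The address 10.0.0.51 below is only an example; use the management IP address you configured in /etc/rsyncd.conf:

    # systemctl status rsyncd.service
    # rsync rsync://10.0.0.51/
    account
    container
    object

Running rsync against the daemon URL with no module name simply lists the available modules.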

Install and configure components

Note

Default configuration files vary by distribution. You might need to add these sections and options rather than modifying existing sections and options. Also, an ellipsis (...) in the configuration snippets indicates potential default configuration options that you should retain.

Note

Perform these steps on each storage node.

  1. Install the packages:

    # zypper install openstack-swift-account \
      openstack-swift-container openstack-swift-object python-xml
  2. Ensure proper ownership of the mount point directory structure (a quick ownership check follows this list):

    # chown -R swift:swift /srv/node
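
As a quick check (purely illustrative, not part of the official procedure), list the directories and confirm that each one is owned by the swift user and group:

    # ls -ld /srv/node /srv/node/sdb /srv/node/sdc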