================================================================
Instructions for a Multiple Server Swift Installation (Ubuntu)
================================================================

Prerequisites
-------------
* Ubuntu Server 10.04 LTS installation media

.. note::
    Swift can run with other distros, but for this document we will focus
    on installing on Ubuntu Server, ypmv (your packaging may vary).

Basic architecture and terms
----------------------------
- *node* - a host machine running one or more Swift services
- *Proxy node* - node that runs Proxy services; also runs TempAuth
- *Storage node* - node that runs Account, Container, and Object services
- *ring* - a set of mappings of Swift data to physical devices

This document shows a cluster using the following types of nodes:

- one Proxy node

  - Runs the swift-proxy-server processes which proxy requests to the
    appropriate Storage nodes. The proxy server will also contain the
    TempAuth service as WSGI middleware.

- five Storage nodes

  - Runs the swift-account-server, swift-container-server, and
    swift-object-server processes which control storage of the account
    databases, the container databases, and the actual stored objects.

.. note::
    Fewer Storage nodes can be used initially, but a minimum of 5 is
    recommended for a production cluster.

This document describes each Storage node as a separate zone in the ring.
It is recommended to have a minimum of 5 zones. A zone is a group of nodes
that is as isolated as possible from other nodes (separate servers, network,
power, even geography). The ring guarantees that every replica is stored in
a separate zone. For more information about the ring and zones, see:
:doc:`The Rings <overview_ring>`.

To increase reliability and performance, you may want to add additional
Proxy servers, as described in :ref:`add-proxy-server`.

Network Setup Notes
-------------------

This document refers to two networks: an external network for connecting to
the Proxy server, and a storage network that is not accessible from outside
the cluster, to which all of the nodes are connected. All of the Swift
services, as well as the rsync daemon on the Storage nodes, are configured
to listen on their STORAGE_LOCAL_NET IP addresses.

.. note::
    Run all commands as the root user.

General OS configuration and partitioning for each node
---------------------------------------------------------

#. Install the baseline Ubuntu Server 10.04 LTS on all nodes.

#. Install common Swift software prerequisites::

        apt-get install python-software-properties
        add-apt-repository ppa:swift-core/release
        apt-get update
        apt-get install swift python-swiftclient openssh-server

#. Create and populate configuration directories::

        mkdir -p /etc/swift
        chown -R swift:swift /etc/swift/

#. On the first node only, create /etc/swift/swift.conf, then copy the same
   file to /etc/swift/ on every other node; all nodes in the cluster must
   use an identical swift.conf.

.. _config-proxy:

Configure the Proxy node
-------------------------

#. Create /etc/swift/proxy-server.conf. The proxy pipeline must include the
   ``cache`` (memcache) and ``tempauth`` filters whose settings are shown in
   :ref:`add-proxy-server`; the examples in this document assume the proxy
   accepts SSL connections on port 8080.

Configure the Storage nodes
----------------------------

#. For every storage device on each Storage node, add an entry for the
   device to /etc/fstab, then create its mount point, mount it, and give the
   swift user ownership (``sdb1`` mounted under /srv/node is used as the
   example throughout this document)::

        mkdir -p /srv/node/sdb1
        mount /srv/node/sdb1
        chown swift:swift /srv/node/sdb1

#. Create /etc/rsyncd.conf, with the rsync daemon listening on the node's
   STORAGE_LOCAL_NET IP address (see Network Setup Notes above).

#. Create /etc/swift/account-server.conf, /etc/swift/container-server.conf,
   and /etc/swift/object-server.conf, each listening on the node's
   STORAGE_LOCAL_NET IP address. A minimal illustrative sketch of the
   account server file follows this list.
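As a minimal illustrative sketch only, /etc/swift/account-server.conf on a
Storage node could be created with a heredoc like the one below. The bind
port and worker count are assumptions (6002 is merely the conventional
default port for the account server), and STORAGE_LOCAL_NET_IP is assumed to
be a shell variable holding the node's storage network address, mirroring
the PROXY_LOCAL_NET_IP variable used elsewhere in this document. The
container and object server files follow the same pattern, with
``egg:swift#container`` and ``egg:swift#object`` as their respective apps::

        cat >/etc/swift/account-server.conf <<EOF
        [DEFAULT]
        # bind_port and workers are illustrative assumptions, not
        # required values
        bind_ip = $STORAGE_LOCAL_NET_IP
        bind_port = 6002
        workers = 2

        [pipeline:main]
        pipeline = account-server

        [app:account-server]
        use = egg:swift#account

        [account-replicator]

        [account-auditor]

        [account-reaper]
        EOF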
Create Swift admin account and test
------------------------------------

These commands use the ``system:root`` account with key ``testpass``, which
is defined through the tempauth middleware in the proxy configuration.

#. Check that ``swift`` works (at this point, expect zero containers, zero
   objects, and zero bytes)::

        swift -A https://$PROXY_LOCAL_NET_IP:8080/auth/v1.0 -U system:root -K testpass stat

#. Use ``swift`` to upload a few files named 'bigfile[1-2].tgz' to a
   container named 'myfiles'::

        swift -A https://$PROXY_LOCAL_NET_IP:8080/auth/v1.0 -U system:root -K testpass upload myfiles bigfile1.tgz
        swift -A https://$PROXY_LOCAL_NET_IP:8080/auth/v1.0 -U system:root -K testpass upload myfiles bigfile2.tgz

#. Use ``swift`` to download all files from the 'myfiles' container::

        swift -A https://$PROXY_LOCAL_NET_IP:8080/auth/v1.0 -U system:root -K testpass download myfiles

#. Use ``swift`` to save a backup of your builder files to a container named
   'builders'. It is very important not to lose your builders!::

        swift -A https://$PROXY_LOCAL_NET_IP:8080/auth/v1.0 -U system:root -K testpass upload builders /etc/swift/*.builder

#. Use ``swift`` to list your containers::

        swift -A https://$PROXY_LOCAL_NET_IP:8080/auth/v1.0 -U system:root -K testpass list

#. Use ``swift`` to list the contents of your 'builders' container::

        swift -A https://$PROXY_LOCAL_NET_IP:8080/auth/v1.0 -U system:root -K testpass list builders

#. Use ``swift`` to download all files from the 'builders' container::

        swift -A https://$PROXY_LOCAL_NET_IP:8080/auth/v1.0 -U system:root -K testpass download builders

.. _add-proxy-server:

Adding a Proxy Server
---------------------

For reliability's sake you may want to have more than one proxy server. You
can set up the additional proxy node in the same manner that you set up the
first proxy node, but with additional configuration steps.

Once you have more than one proxy, you also want to load balance between
them, which means your storage endpoint also changes. You can select from
different strategies for load balancing. For example, you could use round
robin DNS, or an actual load balancer (like pound) in front of the proxies,
and point your storage URL to the load balancer.

See :ref:`config-proxy` for the initial setup, and then follow these
additional steps.

#. Update the list of memcache servers in /etc/swift/proxy-server.conf for
   all the added proxy servers. If you run multiple memcache servers, use
   this pattern for the multiple IP:port listings,
   `10.1.2.3:11211,10.1.2.4:11211`, in each proxy server's conf file::

        [filter:cache]
        use = egg:swift#memcache
        memcache_servers = $PROXY_LOCAL_NET_IP:11211

#. Change the storage URL for any users to point to the load balanced URL,
   rather than the first proxy server you created, in
   /etc/swift/proxy-server.conf::

        [filter:tempauth]
        use = egg:swift#tempauth
        user_system_root = testpass .admin http[s]://<LOAD_BALANCER_HOSTNAME>:<PORT>/v1/AUTH_system

#. Next, copy all the ring information to all the nodes, including your new
   proxy nodes, and ensure the ring info gets to all the storage nodes as
   well.

#. After you sync all the nodes, make sure the admin has the keys in
   /etc/swift and the ownership for the ring files is correct.

Troubleshooting Notes
---------------------

If you see problems, look in /var/log/syslog (or messages on some distros).
Also, at Rackspace we have seen hints of drive failures in the error
messages in /var/log/kern.log.

There are more debugging hints and tips in the :doc:`admin_guide`.
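As one illustrative way to do the kind of log check described above (the
search patterns are assumptions and will vary by kernel and hardware;
nothing about them is Swift-specific), you can scan the kernel log for
common disk I/O error messages::

        # Show recent kernel messages mentioning I/O or sector errors,
        # which are the usual early hints of a failing drive.
        grep -iE 'i/o error|sector' /var/log/kern.log | tail -n 50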