2014cdb906
Preventing access to expired objects
------------------------------------
Re-enabled accepting X-Delete-At and X-Delete-After headers. During a
GET on an expired object, DiskFileExpired is raised by the DiskFile
class. This results in the object server returning HTTPNotFound (404)
to the client.

Tracking objects to be deleted
------------------------------
Objects to be deleted are tracked using "tracker objects". These are
PUT into a special account (a volume, for now). These zero-size
"tracker objects" have names that contain:
* Expiration timestamp
* Path of the actual object to be deleted

Deleting actual objects from GlusterFS volume
---------------------------------------------
The object-expirer daemon runs a pass once every X seconds. On every
pass, it queries the special account for "tracker objects". Based on
the (timestamp, path) encoded in the name of each "tracker object",
object-expirer deletes the actual object and the corresponding
tracker object.

To run object-expirer forever:
swift-init object-expirer start

To run just once:
swift-object-expirer -o -v /etc/swift/object-expirer.conf

Caveat/Limitation:
Object-expirer needs a separate account (volume) that is not used by
other services like gswauth. By default, this volume is named
"gsexpiring" and is configurable.

More info about object expiration:
http://docs.openstack.org/developer/swift/overview_expiring_objects.html

Change-Id: I876995bf4f16ef4bfdff901561e0558ecf1dc38f
Signed-off-by: Prashanth Pai <ppai@redhat.com>
Reviewed-on: http://review.gluster.org/6891
Tested-by: Chetan Risbud <crisbud@redhat.com>
Reviewed-by: pushpesh sharma <psharma@redhat.com>
Tested-by: pushpesh sharma <psharma@redhat.com>
Reviewed-by: Chetan Risbud <crisbud@redhat.com>
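For illustration, a client can request expiration with either header; a
minimal sketch using curl against a Swift endpoint (the endpoint URL,
token, and container/object names below are placeholders, not part of
this change):

    # Expire the object 120 seconds after upload (relative):
    curl -T ./file.txt \
         -H "X-Auth-Token: <token>" \
         -H "X-Delete-After: 120" \
         http://127.0.0.1:8080/v1/AUTH_test/container/object

    # Or set an absolute expiry as a Unix timestamp on an existing object:
    curl -X POST \
         -H "X-Auth-Token: <token>" \
         -H "X-Delete-At: 1400000000" \
         http://127.0.0.1:8080/v1/AUTH_test/container/object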
[DEFAULT]
#
# Default gluster mount point to be used for the object store. It can be
# changed by setting the following value in the
# {account,container,object}-server.conf files. It is recommended to keep
# this value the same for all three services, but it can be kept different
# if the environment demands.
devices = /mnt/gluster-object
#
# Once you are confident that your startup processes will always have your
# gluster volumes properly mounted *before* the object-server workers start,
# you can *consider* setting this value to "false" to reduce the per-request
# overhead it can incur.
mount_check = true
bind_port = 6010
#
# Maximum number of clients one worker can process simultaneously (it will
# actually accept N + 1). Setting this to one (1) will only handle one request
# at a time, without accepting another request concurrently. By increasing the
# number of workers to a much higher value, one can prevent slow file system
# operations for one request from starving other requests.
max_clients = 1024
#
# If not doing the above, setting this value initially to match the number of
# CPUs is a good starting point for determining the right value.
workers = 1
#
# Override swift's default behaviour for fallocate.
disable_fallocate = true

[pipeline:main]
pipeline = object-server

[app:object-server]
use = egg:gluster_swift#object
user = root
log_facility = LOG_LOCAL2
log_level = WARN
# The following parameters are used by object-expirer and need to be the same
# across all conf files!
auto_create_account_prefix = gs
expiring_objects_account_name = expiring
#
# For performance, after ensuring things are running in a stable manner, you
# can turn off normal request logging for the object server to reduce the
# per-request overhead and unclutter the log files. Warnings and errors will
# still be logged.
log_requests = off
#
# Adjust this value to match the stripe width of the underlying storage array
# (not the stripe element size). This will provide a reasonable starting point
# for tuning this value.
disk_chunk_size = 65536
#
# Adjust this value to match whatever is set for disk_chunk_size initially.
# This will provide a reasonable starting point for tuning this value.
network_chunk_size = 65536
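For reference, the object-expirer daemon reads the same two account
settings; a minimal sketch of a matching stanza in
/etc/swift/object-expirer.conf (the interval value is illustrative, and
the full set of options should be checked against the shipped sample
file):

    [object-expirer]
    # Must match auto_create_account_prefix and expiring_objects_account_name
    # in the server conf files above, so the daemon scans the same special
    # account ("gs" + "expiring" = the "gsexpiring" volume).
    auto_create_account_prefix = gs
    expiring_objects_account_name = expiring
    # Seconds between passes over the tracker objects (illustrative value).
    interval = 300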